About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

Petya/PetyaWrap Threat Info

As we speak, there is a global ransomware outbreak spreading. The infosec community is working together, in the open, on Twitter and mailing lists, sharing information with each other and the world about the threat.

The infector is called “Petya”/“PetyaWrap” and it appears to use psexec to execute the EternalBlue exploits from the NSA.

The infector has the following list of target file extensions in the current (as of an hour ago) release: https://twitter.com/bry_campbell/status/879702644394270720/photo/1

Those with robust networks will likely find containment a routine activity, while those who haven’t implemented defense in depth and a holistic enclaving strategy are likely in trouble.

Here are the exploits it is using: CVE-2017-0199 and MS17-010, so make sure you have these patched on all systems. Make sure you find anything that is outside the usual patch cycle, like HVAC, elevators, network cameras, ATMs, IoT devices, printers and copiers, ICS components, etc. Note that this is a combination of a client-side attack and a network attack, so it is likely very capable of spreading to internal systems… The client-side vector is likely to yield access to internals pretty easily.

It may only be affecting the MBR, so check that to see if it is true for you. There is some chatter about multiple variants. If you can open a command prompt, bootrec may help. Booting from a CD/USB or using a drive rescue tool may also be of use. Restoring/rebuilding the MBR seems to have been successful for some victims (untested):

bootrec /RebuildBcd
bootrec /fixMbr
bootrec /fixboot

The new Petrwrap/Petya ransomware has a fake Microsoft digital signature appended, copied from the Sysinternals utilities. – https://t.co/JooBu8lb9e

Lastline indicated this hash as an IOC: 027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745 – They also found these activities: https://pbs.twimg.com/media/DDVj-llVYAAHqk4.jpg
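If you need a fast, low-tech way to sweep a mounted image or file share for that sample, a rough sketch like this works from a Unix command line (the path is a placeholder, and on Mac OS you would use shasum -a 256 instead of sha256sum):

IOC="027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745"
find /mnt/suspect -type f -exec sha256sum {} + 2>/dev/null | grep -i "$IOC"

Hash matching only catches this exact sample, of course, so treat it as a quick triage aid, not real coverage.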

EternalBlue detection rules are firing in several detection products; ET rules are firing on the Petya sample 71b6a493388e7d0b40c83ce903bc6b04 (which drops 7e37ab34ecdcc3e77e24522ddfd4852d) – https://twitter.com/kafeine/status/879711519038210048

Make sure Office updates are applied, in addition to the OS updates for Windows – the Office updates are needed to be immune to CVE-2017-0199.

Now is a great time to ensure you have backups that work for critical systems and that your restore processes are functional.

There is chatter about wide-scale spread to POS systems across Europe. Many industries have been impacted so far.

Bitdefender initial analysis – https://labs.bitdefender.com/2017/06/massive-goldeneye-ransomware-campaign-slams-worldwide-users/?utm_source=SMGlobal&utm_medium=Twitter&utm_campaign=labs

Stay safe out there! 

 

3:48pm Eastern

Update: Lots of great info on detection, response, spread and prevention can be found here: https://securelist.com/schroedingers-petya/78870/

Also, this is the last update to this post unless something significant changes. Follow me on Twitter for more info: @lbhuston 

State Of Security Podcast Episode 13 Is Out

Hey there! I hope your week is off to a great start.

Here is Episode 13 of the State of Security Podcast. This new “tidbit” format comes in under 35 minutes and features some pointers on unusual security questions you should be asking cloud service providers. 

I also provide a spring update about my research, where it is going and what I have been up to over the winter.

Check it out and let me know what you think via Twitter.

SilentTiger Targeted Threat Intelligence Update

Just a quick update on SilentTiger™, our passive security assessment and intelligence engine. 

We have released a new version of the platform to our internal team, and this new version automatically builds the SilentTiger configuration for our analysts. That means that clients using our SilentTiger offering will no longer have to provide anything more than a list of domain names to engage the process.

This update also now includes a host inventory mechanism, and a new data point – who runs the IP addresses identified. This is very useful for finding out the cloud providers that a given set of targets are using and makes it much easier to find industry clusters of service providers that could be a risk to the supply chain.
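As an aside, you can approximate that last data point yourself with public tools. This is not SilentTiger – just a rough shell sketch of the “who runs these IPs” idea, with domains.txt as a hypothetical input list of domain names:

# Resolve each domain, then ask whois which organization runs each IP.
for d in $(cat domains.txt); do
  for ip in $(dig +short "$d" | grep -E '^[0-9.]+$'); do
    org=$(whois "$ip" | grep -i -m 1 -E 'OrgName|org-name')
    echo "$d,$ip,$org"
  done
done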

For more information about using SilentTiger to perform ongoing assessments for your organization, your M&A prospects, your supply chain or as a form of industry intelligence, simply get in touch. Clients ranging from global to SMB and across a wide variety of industries are already taking advantage of the capability. Give us 20 minutes, and we’ll be happy to explain! 

Revisiting Nuance Detection

The core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure. A simple, elegant example would be a motion sensor on a safe in your home, combined with something like your home alarm system.
 
A significant failure state would be for the motion sensor inside the safe to trigger while the home alarm system is set in away mode. When the alarm is in away mode, there should be no condition that triggers motion inside the safe. If motion is detected at any time, you might choose to alert in a minor way. But if the alarm is set to away mode, you might signal all kinds of calamity – flashing lights, bells and whistles, for example.
 
This same approach can apply to your network environment, applications or data systems. Define what a significant failure state looks like, and then create detection and alerting mechanisms, even if conditional, for the indicators of that state. It can be easy. 
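To make that concrete, here is a minimal sketch in shell. Everything in it is hypothetical – the host, the log path and the alert address are placeholders – but it shows the shape of a nuance detection: a condition that should simply never be true.

#!/bin/sh
# Nuance detection sketch (hypothetical): the backup server at 10.1.1.5
# should NEVER initiate connections outbound. If it does, that is a
# significant failure state worth loud alerting.
# Assumes comma-separated firewall logs with the source IP in column 8.

SRC="10.1.1.5"                # host that should never talk outbound
LOG="/var/log/firewall.csv"   # placeholder log path

HITS=$(awk -F, -v ip="$SRC" '$8 == ip' "$LOG" | wc -l)

if [ "$HITS" -gt 0 ]; then
  # Calamity mode: page the on-call alias (placeholder address).
  echo "$HITS outbound flows from $SRC - expected zero" | \
    mail -s "NUANCE ALERT: impossible traffic from $SRC" oncall@example.com
fi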
 
I remember thinking more deeply about this for the first time when I saw Marcus Ranum give his network burglar alarm speech at Defcon, what seems like a thousand years ago now. That moment changed my life forever. Since then, I have always wanted to work on small detections. The most nuanced of fail states. The deepest signs of compromise. HoneyPoint™ came from that line of thinking, albeit many years later. (Thanks, Marcus, you are amazing, BTW!) 🙂
 
I’ve written about approaches to it in the past, too – things like detecting web shells, detection-in-depth techniques and such. I even made some nice maturity and deployment models.
 
This month, I will be revisiting nuance detection more deeply, creating some more content around it and speaking about it more openly. I’ll also cover how we have extended HoneyPoint with the Handler portion of HoneyPoint Agent, in order to fully support event management and data handling into your security alerting systems from basic scripts and simple tools you can create yourself.
 
Stay tuned, and in the meantime, drop me a line on Twitter (@lbhuston) and let me know more about nuance detections you can think of or have implemented. I’d love to hear more about it. 

Segmenting With MSI MachineTruth

Many organizations struggle to implement network segmentation and secure network enclaves for servers, industrial controls, SCADA or regulated data. MicroSolved, Inc. (“MSI”) has been helping clients solve information security challenges for nearly twenty-five years on a global scale. In helping our clients segment their networks and protect their traffic flows, we identified a better approach to solving this often untenable problem.

That approach, called MachineTruth™, leverages our proprietary machine learning and data analytics platform to support our industry-leading team of experts throughout the process. Our team uses offline analysis of configuration files, netflow data and traffic patterns to simplify the challenge. Instead of manual review by teams of network and systems administrators, MachineTruth takes automated deep dives into the data to provide real insights into how to segment, where to segment, what filtering rules need to be established and how those rules are functioning as they come online.

Our experts then work with your network and security teams, or one of our select MachineTruth Implementation Partners, to guide them through the process of installing and configuring filtering devices, detection tools and applications needed to support the segmentation changes. As the enclaves start to take shape, ongoing oversight is performed by the MSI team, via continual analytics and modeling throughout the segmentation effort. As the data analysis and implementation processes proceed, the controls and rules are optimized and transitioned to steady state maintenance.

Lastly, the MSI team works with the segmentation stakeholders to document, socialize and transfer knowledge to those who will manage and support the newly segmented network and its various enclaves for the long term. This last step is critical to ensuring that the network changes and segmentation initiatives remain in place in the future.

This data-focused, machine learning-based approach enables segmentation for even the most complex of environments. It has been used to successfully save hundreds of man-years of labor and millions of dollars in overhead costs. It has reduced the time to segment international networks from years to months, while significantly raising the quality and security of the new environments. It has accomplished these feats, all while reducing network downtime, outages and potentially dangerous misconfiguration issues.

If your organization is considering or in the process of performing network segmentation for your critical data, you should take a look at the MachineTruth approach from MSI. It could mean the difference between success and struggle for this critical initiative.


Network Segmentation Month

February is Network Segmentation Month at MSI. During February, our blog and social media content will focus on network segmentation initiatives – a how, why, when, what and who kind of look at creating secure enclaves within your network.

These enclaves could be based on risk zones, types of systems, types of access, business process, regulatory requirements or many other meta factors. 

We will discuss different reasons for segmenting, approaches to segmentation, and some of the lessons we’ve learned from segmenting some of the largest and most complex environments in our 25-year history. It won’t all be positive – we’ll also share some of the ways that segmentation fails, some of the challenges and some of the drawbacks of segmenting networks.

So, strap in and stay tuned for a month of content focused on using segmentation to better secure your environment.

As always, if you have stories to share or want to discuss a specific segmentation question, you can do that via email (info@microsolved.com) or via Twitter to @microsolved, or to me personally (@lbhuston). MSI is always available to help you with segmentation projects, be that planning, implementation, oversight or attestation. We have a proprietary, data-centric approach to this work which we have been using for several years. You can learn more about it here – MachineTruth. We look forward to hearing from you!

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use, one of which also covers converting column logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways and to achieve different analytical views. 

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using sort -n -r like before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by, and the input file is the name of the file to sort)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types the input file and sends it to sort for sorting on the third column in numeric order) (-r would, of course, reverse it)
    • You can write the output of this to a file with the redirect "> filename.txt", i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn’t exist OR append to a file if it does exist, i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt
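For instance, here is a tiny demonstration with invented data, sorting on the third column in reverse numeric order:

$ cat input.txt
10.0.0.1 443 9
10.0.0.2 80 27
10.0.0.3 22 3

$ sort -k3 -n -r input.txt
10.0.0.2 80 27
10.0.0.1 443 9
10.0.0.3 22 3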

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of “When I use the techniques from your post, the outputs of the commands are column-separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, take an input of column data and print out the columns we want (it can be any of them; in this case I want the first 3 columns), making the output comma-delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output as CSV.
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP will be analyzed and reduced to unique pairs, along with the number of times that each pair is duplicated in the input log (I use this as a “hit rate”, as I described earlier).
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints the unique pairs, but the uniq -c step adds a column with the “hit rate”, so to get the output appropriately, I need all three fields.
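To see just the conversion step in isolation, here is a tiny example with invented data – the input mimics what sort | uniq -c emits (count, source, destination):

$ echo "12 10.0.0.1 10.0.0.9" | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
12,10.0.0.1,10.0.0.9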

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share with the community. Exploration is awesome, so it will encourage users to play more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks also works in Linux, etc.)
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field #’s in the text analysis, pulled from the log header for sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
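As a side note, if your log header differs from mine, one quick way to map field names to numbers is to break the header line apart and number the pieces:

head -1 log.csv | tr ',' '\n' | cat -n

That prints each header field on its own numbered line, so you can read the field numbers right off the output.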
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep – show me the lines that don’t match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files 
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of unique Source IP/Dest IP/Dest Port combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP addresses, in descending order by how many times they talk to each other in the log (note that reversed pairings will be separate, if present – that is, if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at which applications have been identified by the firewall, in descending order by the number of times each application identifier appears in the log
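One more variation, as a rough sketch: awk can aggregate as well as slice. Using the $26 (dest port) and $32 (bytes) fields from above, this line sums the bytes sent to each destination port (the output file name is just a suggestion):
  • cat log.csv | awk 'BEGIN { FS = ","} ; {bytes[$26] += $32} END {for (p in bytes) print bytes[p], p}' | sort -n -r > bytes_by_port.txt
    • this one produces a list of destination ports, in descending order by the total number of bytes sent to each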
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there!