Are You Seeing This? Join a Threat Sharing Group!

Just a quick note today about threat sharing groups. 

I am talking to more and more companies and organizations that are putting together local, regional or vertical-market threat sharing groups. These are often ad hoc, and usually driven by security practitioners who are helping each other with cooperative defenses and sharing new tactics and threat patterns (TTPs – tactics, techniques and procedures) or indicators of compromise (IOCs). Many times, these are informal email lists or RSS feeds that technicians subscribe to in order to share what they are seeing in the trenches. 

A few folks have tried to commercialize them, but in most cases, these days, the sharing is simply free and open. 

If you get a chance to participate in one or more of these open sharing networks, check them out. Many of our clients are saying great things about the data they get via the networks, which has often helped them contain incidents and breaches rapidly.

If you want to discuss your network, or if you have one that you’d like me to help promote, hit me up on Twitter (@lbhuston). If you are looking for one to join, ask on Twitter and I’ll share what folks allow, or make private connections where possible. 

As always, thanks for reading, and until next time, stay safe out there! 

Petya/PetyaWrap Threat Info

As we speak, there is a global ransomware outbreak spreading. The infosec community is working together, in the open, on Twitter and mailing lists sharing information with each other and the world about the threat. 

The infector is called “Petya”/“PetyaWrap”, and it appears to use PsExec to execute the EternalBlue exploits from the NSA.

The infector targets the following list of file extensions in the current (as of an hour ago) release: https://twitter.com/bry_campbell/status/879702644394270720/photo/1

Those with robust networks will likely find containment a routine activity, while those who haven’t implemented defense in depth and a holistic enclaving strategy are likely in trouble.

Here are the exploits it is using: CVE-2017-0199 and MS17-010, so make sure you have these patched on all systems. Make sure you find anything that is outside the usual patch cycle, like HVAC, elevators, network cameras, ATMs, IoT devices, printers and copiers, ICS components, etc. Note that this is a combination of a client-side attack and a network attack, so it is likely very capable of spreading to internal systems… The client-side attack is likely to yield access to internals pretty easily.

The malware may only be affecting the MBR, so check whether that is true for you. There is some chatter about multiple variants. If you can open a command prompt, bootrec may help; booting from a CD/USB or using a drive rescue tool may also be of use. Restoring/rebuilding the MBR seems to have been successful for some victims (untested):

bootrec /RebuildBcd
bootrec /fixMbr
bootrec /fixboot

The new PetrWrap/Petya ransomware has a fake Microsoft digital signature appended, copied from the Sysinternals utilities. – https://t.co/JooBu8lb9e

Lastline indicated this hash as an IOC: 027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745 – They also found these activities: https://pbs.twimg.com/media/DDVj-llVYAAHqk4.jpg

EternalBlue detection rules are firing in several detection products; ET rules are firing on the Petya sample 71b6a493388e7d0b40c83ce903bc6b04 (which drops 7e37ab34ecdcc3e77e24522ddfd4852d) – https://twitter.com/kafeine/status/879711519038210048

Make sure Office updates are applied, in addition to the OS updates for Windows. The Office updates are needed to be immune to CVE-2017-0199.

Now is a great time to ensure you have backups that work for critical systems and that your restore processes are functional.

There is chatter about wide-scale spread to POS systems across Europe. Many industries have been impacted so far.

Bitdefender initial analysis – https://labs.bitdefender.com/2017/06/massive-goldeneye-ransomware-campaign-slams-worldwide-users/?utm_source=SMGlobal&utm_medium=Twitter&utm_campaign=labs

Stay safe out there! 

 

3:48pm Eastern

Update: Lots of great info on detection, response, spread and prevention can be found here: https://securelist.com/schroedingers-petya/78870/

Also, this is the last update to this post unless something significant changes. Follow me on Twitter for more info: @lbhuston 

SilentTiger Targeted Threat Intelligence Update

Just a quick update on SilentTiger™, our passive security assessment and intelligence engine. 

We have released a new version of the platform to our internal team, and this new version automatically builds the SilentTiger configuration for our analysts. That means that clients using our SilentTiger offering will no longer have to provide anything more than a list of domain names to engage the process. 

This update also includes a host inventory mechanism and a new data point – who runs the IP addresses identified. This is very useful for finding out which cloud providers a given set of targets is using, and it makes it much easier to find industry clusters of service providers that could be a risk to the supply chain.

For more information about using SilentTiger to perform ongoing assessments for your organization, your M&A prospects, your supply chain or as a form of industry intelligence, simply get in touch. Clients ranging from global to SMB and across a wide variety of industries are already taking advantage of the capability. Give us 20 minutes, and we’ll be happy to explain! 

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use; the second of those also covers converting column-based logs to CSV. 

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways and to achieve different analytical views. 

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using sort -n -r like before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by and input_file is the name of the file to sort – see the worked example after this list)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types the input file and sends it to sort for sorting on the third column in numeric order) (-r would, of course, reverse it…)
    • You can write the output of this to a file with the redirect "> filename.txt", i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn’t exist OR append to a file if it does exist… i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt
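
To make this concrete, here is a short worked example with hypothetical sample data (the file name and values are just for illustration):

$ cat input.txt
10.0.0.1 443 9
10.0.0.2 80 27
10.0.0.3 53 4

$ sort -k3 -n -r input.txt
10.0.0.2 80 27
10.0.0.1 443 9
10.0.0.3 53 4

The -k3 tells sort to key on the third column, -n sorts it numerically and -r reverses the order, so the largest values bubble to the top.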

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of “When I use the techniques from your post, the outputs of the commands are column-separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically: take an input of column data and print out the columns we want (it can be any of them; in this case I want the first 3 columns), making the output comma-delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output as CSV.
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP are analyzed and reduced to unique pairs, along with the number of times each pair appears in the input log (I use this as a “hit rate”, as I described earlier).
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints just the unique pair columns, but uniq -c then adds a count column for the “hit rate”, so to format the output appropriately, the second awk needs all three fields. (See the worked example just below.)
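
Here is a quick sketch of what that looks like in practice, with hypothetical sample data (field positions per the earlier post):

$ cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r
  42 10.1.1.5 8.8.8.8
  17 10.1.1.9 1.1.1.1

$ cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
42,10.1.1.5,8.8.8.8
17,10.1.1.9,1.1.1.1

Note how uniq -c prepends the count as a new first column – that is exactly why the final awk has to ask for three fields instead of two.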

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share with the community. Exploration is awesome, so it will encourage users to play more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks it also works in Linux, etc.).
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field #’s in the text analysis, pulled from the log header for sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep: show me the lines that don’t match my pattern – see the filtering example after the command list below)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files 
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of unique Source IP/Dest IP/Dest Port combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP address pairs, in descending order by how many times they talk to each other in the log (note that reversed pairings are counted separately, if present – that is, if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at which applications have been identified by the firewall, in descending order by the number of times each application identifier appears in the log
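
As promised above, here is one more stack showing where grep -v fits in. This is a sketch with a deliberately crude pattern – it drops any line containing “,10.” to roughly exclude flows involving 10.x.x.x addresses, so tune the pattern (or anchor it to a specific field) for your own address space:

cat log.csv | grep -v ",10\." | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > external_hitrate.txt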
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there! 

Pay Attention to Egress Anomalies on Weekends

Just a quick note to pay careful attention to egress anomalies when the majority of your employees are not likely to be using the network. Most organizations, even those that are 24/7, experience reduced network egress to the Internet during nights and weekends. This is the perfect time to look for anomalies and to take advantage of the reduced traffic levels to perform deeper analysis, such as traffic level monitoring, average session/connection sizes, anomalies in the levels of blocked egress ports, new and never-before-seen DNS resolutions, etc. 

If you can baseline traffic, even using something abstract like NetFlow, you may find some amazing stuff. Check it out! 
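
If you keep firewall logs in the same comma-separated format as the Palo Alto posts from earlier, a quick and dirty starting point might look like this sketch (the field numbers – $8 for source IP and $32 for bytes – are assumptions carried over from that log layout; adjust them for your own logs):

cat weekend_log.csv | awk 'BEGIN { FS = ","} ; {bytes[$8] += $32} END { for (ip in bytes) print bytes[ip], ip }' | sort -n -r > weekend_top_talkers.txt

This totals egress bytes per source IP over the weekend window and sorts the heaviest talkers to the top; comparing the result to the same report from a normal business week is a simple, effective baseline.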

From Dark Net Research to Real World Safety Issue

On a recent engagement by the MSI Intelligence team, our client had us researching the dark net to discover threats against their global brands. This is a normal and methodology-driven process for the team and the TigerTrax™ platform has been optimized for this work for several years.

We’ve seen plenty of physical threats against clients before. In particular, our threat intelligence and brand monitoring services for professional sports teams have identified several significant threats of violence in the last few years. Unfortunately, this is much more common for high visibility brands and organizations than you might otherwise assume.

In this particular instance, conversations were flagged by TigerTrax from underground forums that were discussing physical attacks against the particular brand. The descriptions were detailed, politically motivated and threatened harm to employees and potentially the public. We immediately reported the issue and provided the captured data to the client. The client reviewed the conversations and correlated them with other physical security occurrences that had been reported by their employees. In today’s world, such threats require vigilant attention and a rapid response.

In this case, the client was able to turn our identified data into insights by using it to gain context from their internal security issue reporting system. From those insights, they were able to quickly launch an awareness campaign for their employees in the areas identified, report the issue to localized law enforcement and invest in additional fire and safety controls for their locations. We may never know if these efforts were truly effective, but if they prevented even a single occurrence of violence or saved a single human life, then that is a strong victory.

Security is often about working against things so that they don’t happen – making it abstract, sometimes frustrating and difficult to explain to some audiences. But, when you can act on binary data as intelligence and use it to prevent violence in the kinetic world, that is the highest of security goals! That is the reason we built TigerTrax and offer the types of intelligence services we do to mature organizations. We believe that insights like these can make a difference and we are proud to help our clients achieve them.

3 Reasons You Need Customized Threat Intelligence

Many clients have been asking us about our customized threat intelligence services and how to best use the data that we can provide.

1. Using HoneyPoint™, we can deploy fake systems and applications, both internally and in key external locations, that allow you to generate real-time, organization-specific indicators of compromise (IoC) data – including a wide variety of threat source information for blacklisting, baseline metrics that make it easy to measure up-to-the-moment changes in the level of threat actions against your organization, and a wide variety of scenarios for application and attack surface hardening.

2. Our SilentTiger™ passive assessments can give you a much wider lens for vulnerability assessment visibility than your perimeter alone. They can be used to assess, either as a single snapshot or on an ongoing basis, the security posture of locations where your brand extends to business partners, cloud providers, supply chain vendors, critical-dependency API and data flows, and other systems well beyond your perimeter. Since the testing is passive, you don’t need permission, contract language or control of the systems being assessed. You can get the data in a stable, familiar format – very similar to vulnerability scanning reports – or via customized data feeds into your SIEM/GRC/ticketing tools or the like. This means you can be more vigilant against more attack surfaces without more effort and more resources.

3. Our customized TigerTrax™ Targeted Threat Intelligence (TTI) offerings can be used for brand specific monitoring around the world, answering specific research questions based on industry / geographic / demographic / psychographic profiles or even products / patents or economic threat research. If you want to know how your brand is being perceived, discussed or threatened around the world, this service can provide that either as a one-time deliverable, or as an ongoing periodic service. If you want our intelligence analysts to look at industry trends, fraud, underground economics, changing activist or attacker tactics and the way they collide with your industry or organization – this is the service that can provide that data to you in a clear and concise manner that lets you take real-world actions.

We have been offering many of these services to select clients for the last several years. Only recently have we decided to offer them to our wider client and reader base. If you’d like to learn how others are using the data or how they are actively hardening their environments and operations based on real-world data and trends, let us know. We’d love to discuss it with you! 

Getting Smart with Mobile App GeoLocation to Fight Fraud

If your mobile application includes purchases with credit cards, and a pickup of the merchandise, then you should pay attention to this.

Recently, in our testing lab and during an intelligence engagement, we identified a fraud mechanism where stolen credit cards were being used via the mobile app in question to fraudulently purchase goods. In fact, the attackers were selling the purchase of the goods as a service on auction and market sites on the dark web.

The scam works like this: the bad guys have stolen credit cards (track data, likely from dumps), which they use to make a purchase for their client remotely. The bad guys use the stolen track data in a card-not-present transaction, which is standard for mobile apps. They have access to huge numbers of stolen cards, so they can burn them at a substantial rate without significantly impacting their inventory. The bad guy’s customer spends $25 in bitcoins to get up to $100 in merchandise. The bad guy takes the order from the dark net, uses the mobile app to place the order, and then delivers the receipt and/or pickup information to the customer. The customer then walks into the retailer, shows the receipt for their mobile order, picks up the merchandise and leaves.

The bad guy gets paid in bitcoins. For them, this is an extremely low-risk way to convert stolen credit card info to cash. It is significantly less risky than physical card replication, ATM use or other conversion methods that require physical interaction.

The bad guy’s customer gets paid by picking up the merchandise. They get up to $100 in value for a cost of $25. They take on some risk, but if performed properly, the scam is low risk to them – or so they believe. In the odd event of a challenge, they simply make their demands for satisfaction or leave the store. There is little risk of arrest or prosecution, it would seem, especially at the low value of $100 – or at least that was how the bad guy was pitching it to prospective customers…

The credit card issuer or the merchant gets stuck. They are out the merchandise and/or the money, depending on their location in the world and the merchant agreement/chargeback/PCI compliance issues they face.

Understanding the fraud and motivations of the bad guys is critical for securing the systems in play. Organizations could up their validation techniques and vigilance for mobile orders. They could add additional fraudulent transaction heuristics to their capability. They could also implement geo-location in the mobile apps as a control – i.e., if the order is being physically placed on a device in Ukraine and pickup is in New York, there is a higher level of risk associated with that transaction (see the sketch below). Identifying ways to leverage the sensors and data points from a mobile device, and rolling them into fraud detection heuristics and machine learning analytics, is the next wave of security for some of these applications. We are pleased to be helping clients get there…
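
As a rough sketch of that geo-location control (everything here is hypothetical – the input format, the field layout and the 500 km threshold are illustrative assumptions, not a production fraud rule), even awk can compute the great-circle distance between the ordering device and the pickup location:

# input: order_lat,order_lon,pickup_lat,pickup_lon
echo "48.45,35.05,40.71,-74.00" | awk 'BEGIN { FS = "," }
{
  r = 6371                      # Earth radius in km
  pi = atan2(0, -1)
  lat1 = $1 * pi / 180; lon1 = $2 * pi / 180
  lat2 = $3 * pi / 180; lon2 = $4 * pi / 180
  dlat = lat2 - lat1; dlon = lon2 - lon1
  # haversine formula for great-circle distance
  a = sin(dlat / 2) ^ 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ^ 2
  d = 2 * r * atan2(sqrt(a), sqrt(1 - a))
  if (d > 500)
    print "HIGH RISK:", int(d), "km between ordering device and pickup point"
  else
    print "distance OK:", int(d), "km"
}'

In a real fraud pipeline, a distance check like this would be one heuristic among many, feeding a risk score rather than acting as a hard block.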

To hear more about modern fraud techniques, application security testing or targeted threat intelligence like what we discussed above, drop us a line (info at microsolved dot com) or via Twitter (@lbhuston). We look forward to discussing it with your team.