The Biggest Challenges to Firms Using Cyber Threat Intelligence

Cyber threat intelligence is one of the hottest topics in cybersecurity today. Many firms are investing heavily in developing and deploying solutions to identify and respond to cyber threats. But despite the hype surrounding cyber threat intelligence, many firms still struggle to make sense of the data they collect.

Why are firms struggling to make sense of their data, and how can they overcome this challenge? We asked around, and three key challenges emerged:

1. Data quality – How do we know if our data is accurate?

2. Data volume – How much data do we need to store?

3. Data integration – How do we combine multiple sources of data?

We’re working on ideas around these three common problems, and we’re working with firms of all sizes to help solve them. When we arrive at firm, across-the-board answers, we’ll post them. In the meantime, knowing the most common issues firms are facing in the threat intelligence arena gives us all a good place to start.

Got workarounds or solutions to these issues? Drop me a line on Twitter (@lbhuston) and let me know how you’re doing it. We’ll share the great ideas as they are proven out.

Saved By Ransomware Presentation Now Available

I recently spoke at ISSA Charlotte and had a great crowd via Zoom.

Here are the presentation deck and MP3 of the event. In it, I shared a story about an incident I worked around the start of Covid, where a client was literally saved from a significant data breach and lateral spread stemming from a simple compromise. What saved them, you might ask? Ransomware.

That’s right. In this case, ransomware rescued the customer organization from significant damage and a potential loss of human life. 

Check out the story. I think you’ll find it very interesting. 

Let me know if you have questions – hit me up on the social networks as @lbhuston.

Thanks for reading and listening! 

Deck: https://media.microsolved.com/SavedByRansomware.pdf

MP3: https://media.microsolved.com/SavedByRansomware.mp3

PS – I miss telling you folks stories, in person, so I hope you enjoy this virtual format as much as I did creating it! 

3 Threats We Are Modeling for Clients These Days

Just a quick post today to discuss three threat scenarios we are modeling frequently with clients these days. #ThreatModeling

1) Ransomware or other malware infection sourced from managed service providers – this scenario has become a very common issue, so common that DHS and several other organizations have released advisories. Attacker campaigns against managed service providers have been identified, and many have yielded some high-value breaches. The most common threat is spear phishing into an MSP, with the attackers eventually gaining access to the capability to push software to the clients. They then push command-and-control malware or a ransomware infection down the pipe. Often, it is quite some time before the source of the event is traced back to the MSP. The defenses here are somewhat limited, but the scenario definitely should be practiced at the tabletop level. Often, these MSPs have successfully passed a SOC audit, but have very little security maturity beyond the baselines.

2) Successful credential stuffing attacks against Office 365 implementations leading to wire/ACH/AP fraud – This is another very common scenario, not just for banks and credit unions; a lot of small and mid-size organizations have fallen victim to it as well via accounts payable attacks. In this scenario, either a user is phished into giving up credentials, or a leaked set of credentials is leveraged to gain access to the Office 365 mail and chat system. The attackers then leverage this access to perform their fraud, appearing to come from internal email accounts and chats. They often make use of stored forms and phish their way to other internal users in the approval chain to get the money to actually move. Once they have their cash, they often use these email accounts to spread malware and ransomware to other victims inside the organization or in business partners – continuing the chain over and over again. The defenses here are to require MFA, limit access to the O365 environment via VPN or other IP-specific filtering, harden the O365 environment and enable the many detection and prevention controls that are off by default.

3) Voicemail hacking and dial-system fraud – I know, I know, it’s 2020… But this remains an incredibly impactful attack, especially against key management employees or employees who traffic in highly confidential data. Often a mailbox is accessed and its contents are then either used for profit via trading (think M&A info) or for ransom/blackmail types of social engineering. Just like above, the attackers often hack one account and then use social engineering to get other users to follow instructions around fraud or to change their voicemail password to a given number, etc. Larger corporations, where social familiarity among employees and management is low, are a common attack target. Dial-system fraud for outbound long distance also remains pretty common, especially over long weekends and holidays. Basically, the attackers hack an account and use call forwarding to send calls to a foreign number – then they sell access to the hacked voicemail line, changing the destination number for each caller. Outbound dial tone is also highly regarded here and quite valuable on the underground markets. Often the fraud goes undetected for 60-90 days until the audit process kicks in, leaving the victim several thousand dollars in debt from the illicit activity. The defenses here are voicemail and phone system auditing, configuration reviews, hardening and lowering lockout thresholds on password attempts.

We can help with all of these issues and defenses, and we especially love to help organizations with threat scenario generation, threat modeling and attack surface mapping. If you need some insights into outside-the-box attacks and fraud potential, give us a call. Our engagements in this space are informative, useful and affordable.

Thanks for reading, and until next time, stay safe out there! 

Introducing ClawBack :: Data Leak Detection Powered By MicroSolved

We’ve worked with our clients and partners to put together a world-class data leak detection platform that is so easy to use that most security teams have it up and running in less than five minutes. No hardware appliance or software agent to deploy, no console to manage and, best of all, affordable for organizations of any size.

In short, ClawBack is data leak detection done right.

There’s a lot more to the story, and that’s why we put together this short (3 minute) video to describe ClawBack, its capabilities and why we created it. Once you check it out, we think you’ll see just how ClawBack fits the mission of MSI to make the online world safer for all of us.

View the video here.

You can also learn a lot more about ClawBack, its use cases and some of the ways we hope it can help you here. On that page, you can also find pricing for three different levels of service, more videos walking you through how to sign up and a video demo of the platform.

Lastly, if you’d like to just get started, you can visit the ClawBack Portal, and select Register to sign up and put ClawBack to work immediately on providing detection for your leaked data.

In the coming weeks, we’ll be talking more about what drove us to develop ClawBack, the success stories we’ve had just while building and testing the platform, and provide some more specifics about how to make the most of ClawBack’s capabilities. In the meantime, thanks for reading, check it out and if you have any questions, drop us a line.

Are You Seeing This? Join a Threat Sharing Group!

Just a quick note today about threat sharing groups. 

I am talking to more and more companies and organizations that are putting together local, regional or vertical-market threat sharing groups. These are often ad hoc and usually driven by security practitioners, who are helping each other with cooperative defenses and sharing new tactics and threat patterns (think TTPs: tactics, techniques & procedures) or indicators of compromise (IOCs). Many times, these are informal email lists or RSS feeds where the technicians subscribe and share what they are seeing in the trenches.

A few folks have tried to commercialize them, but in most cases, these days, the sharing is simply free and open. 

If you get a chance to participate in one or more of these open-source networks, you might want to check them out. Many of our clients are saying great things about the data they get via the networks, and it has often helped them contain incidents and breaches in a rapid fashion.

If you want to discuss your network, or if you have one that you’d like me to help promote, hit me up on Twitter (@lbhuston). If you are looking for one to join, check Twitter and I’ll share as folks allow, or I’ll make private connections as possible. 

As always, thanks for reading, and until next time, stay safe out there! 

Petya/PetyaWrap Threat Info

As we speak, there is a global ransomware outbreak spreading. The infosec community is working together, in the open, on Twitter and mailing lists sharing information with each other and the world about the threat. 

The infector is called “Petya”/“PetyaWrap” and it appears to use psexec to execute the EternalBlue exploits from the NSA.

The infector has the following list of target file extensions in the current (as of an hour ago) release. https://twitter.com/bry_campbell/status/879702644394270720/photo/1

Those with robust networks will likely find containment a routine activity, while those who haven’t implemented defense in depth and a holistic enclaving strategy are likely in trouble.

Here are the exploits it is using: CVE-2017-0199 and MS17-010, so make sure you have these patched on all systems. Make sure you find anything that is outside the usual patch cycle, like HVAC, elevators, network cameras, ATMs, IoT devices, printers and copiers, ICS components, etc. Note that this is a combination of a client-side attack and a network attack, so it is likely very capable of spreading to internal systems… The client-side attack is likely to yield access to internals pretty easily.
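
If you want a quick way to hunt for hosts still exposed to MS17-010 on your own networks, recent versions of nmap ship with a check for it (just a sketch – scan only ranges you own, and note the target range below is an example, so substitute your own):

  • nmap -p445 --script smb-vuln-ms17-010 192.168.1.0/24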

It may only be affecting the MBR, so check that to see if it is true for you. There is some chatter about multiple variants. If you can open a command prompt, bootrec may help. Booting from a CD/USB or using a drive rescue tool may be of use. Restoring/rebuilding the MBR seems to have been successful for some victims (untested):

bootrec /RebuildBcd
bootrec /FixMbr
bootrec /FixBoot

New Petrwrap/Petya ransomware has a fake Microsoft digital signature appended. Copied from Sysinternals Utils. – https://t.co/JooBu8lb9e

Lastline indicated this hash as an IOC: 027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745 – They also found these activities: https://pbs.twimg.com/media/DDVj-llVYAAHqk4.jpg
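
If you want to sweep a file store for that hash from a Unix-like box (a mounted share, a quarantine folder, etc.), a quick and dirty check like this works – just a sketch, the path is made up, and on Linux the tool may be sha256sum instead of shasum:

  • find /path/to/suspect/files -type f -print0 | xargs -0 shasum -a 256 | grep -i 027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745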

EternalBlue detection rules are firing in several detection products, and ET rules are firing on that Petya sample 71b6a493388e7d0b40c83ce903bc6b04 (drops 7e37ab34ecdcc3e77e24522ddfd4852d) – https://twitter.com/kafeine/status/879711519038210048

Make sure Office updates are applied, in addition to OS updates for Windows. The Office updates are needed to be immune to CVE-2017-0199.

Now is a great time to ensure you have backups that work for critical systems and that your restore processes are functional.

There is chatter about wide-scale spread to POS systems across Europe. Many industries have been impacted so far.

Bitdefender initial analysis – https://labs.bitdefender.com/2017/06/massive-goldeneye-ransomware-campaign-slams-worldwide-users/?utm_source=SMGlobal&utm_medium=Twitter&utm_campaign=labs

Stay safe out there! 

 

3:48pm Eastern

Update: Lots of great info on detection, response, spread and prevention can be found here: https://securelist.com/schroedingers-petya/78870/

Also, this is the last update to this post unless something significant changes. Follow me on Twitter for more info: @lbhuston 

SilentTiger Targeted Threat Intelligence Update

Just a quick update on SilentTiger™, our passive security assessment and intelligence engine. 

We have released a new version of the platform to our internal team, and this new version automatically builds the SilentTiger configuration for our analysts. That means that clients using our SilentTiger offering will no longer have to provide anything more than the list of domain names to engage the process.

This update also now includes a host inventory mechanism, and a new data point – who runs the IP addresses identified. This is very useful for finding out the cloud providers that a given set of targets are using and makes it much easier to find industry clusters of service providers that could be a risk to the supply chain.
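
As a rough illustration of that new data point, you can pull ownership details for a single IP at the command line. SilentTiger automates and scales this across whole host inventories, but a plain whois query shows the idea (the IP below is just an example, and field names vary by registry):

  • whois 8.8.8.8 | grep -iE 'orgname|org-name|netname'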

For more information about using SilentTiger to perform ongoing assessments for your organization, your M&A prospects, your supply chain or as a form of industry intelligence, simply get in touch. Clients ranging from global to SMB and across a wide variety of industries are already taking advantage of the capability. Give us 20 minutes, and we’ll be happy to explain! 

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use, including converting column-based logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways and to achieve different analytical views. 

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using sort -n -r like before (numeric sort, reverse order), we can use the options below (a quick worked example follows the list):

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by, and input_file is the name of the file to sort)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types the input file and sends it to sort for sorting on the third column in numeric order) (-r would, of course, reverse it…)
    • You can write the output of this to a file with the redirect "> filename.txt" – i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn’t exist OR append to a file if it does exist – i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt
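
For a quick worked example, assume a made-up file named counts.txt that holds hit counts in column 1 and port numbers in column 2:

  42 443
  7 53
  116 8080

Running cat counts.txt | sort -k2 -n orders the lines by the port column (53, 443, 8080), while cat counts.txt | sort -k1 -n -r orders them by hit count, highest first (116, 42, 7).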

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of: “When I use the techniques from your post, the outputs of the commands are column-separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, take an input of column data and print out the columns we want (can be any; in this case I want the first 3 columns), making the output comma delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output CSV.
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP will be analyzed and reduced to unique pairs, along with the number of times that pair is duplicated in the input log (I use this as a “hit rate”, as I described earlier). A tiny worked example follows below.
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints just the pair of IPs, but the uniq -c in the middle of the pipeline adds a column holding the “hit rate” count, so by the time the data reaches the second awk there are three fields to output.
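
To make that concrete, here is a tiny made-up sample. If the uniq -c stage emits the line:

  12 10.1.1.5 10.2.2.8

then the final awk prints 12,10.1.1.5,10.2.2.8 – the hit count plus the two IPs, now comma delimited and ready for whatever tool comes next.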

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your own ways to Twitter or share them with the community – exploration is awesome, and sharing will encourage folks to play with the data even more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks this also works in Linux, etc.).
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field numbers in the text analysis, pulled from the log header for the field sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep: show me the lines that don’t match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files 
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so to get you started, here are a few of the more useful command lines I used for my quick and dirty analysis (plus, after the list, a small script sketch that ties them together):
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of Source IP/Dest IP/Dest Port unique combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP addresses, in descending order by how many times they talk to each other in the log (note that reversed pairings will be separate, if they are present – that is, if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at what applications have been identified by the firewall, in descending order by the number of times that application identifier appears in the log
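
As promised, here is a minimal script sketch that wraps those one-liners up for repeat use. It assumes the same comma-delimited logs and field positions as above; the log file name and output file names are made up, so adjust to taste:

  #!/bin/sh
  # Quick and dirty PA log triage – field positions assume the log header described above
  LOG="${1:-log.csv}"
  # src/dst/dport combos by hit rate
  awk 'BEGIN { FS = "," } ; { print $8, $9, $26 }' "$LOG" | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
  # unique src/dst pairs by hit rate
  awk 'BEGIN { FS = "," } ; { print $8, $9 }' "$LOG" | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
  # application IDs by hit rate
  awk 'BEGIN { FS = "," } ; { print $15 }' "$LOG" | sort | uniq -c | sort -n -r > appID_by_hitrate.txt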
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there!