Worm detection with HoneyPoint Security Server (HPSS): A real world example

This post describes a malware detection event that I actually experienced a few short years ago.

My company (Company B) had been acquired by a much larger organization (Company A) with a very large internal employee desktop space, one that spanned national boundaries.

We had all migrated to Company A laptops, but our legacy responsibilities required us to maintain systems in the original IP space of Company B. We used the legacy Company B VPN for that.

I had installed the HPSS HoneyPoint agent on my Company A laptop prior to our migration into their large desktop space. After the migration I was routinely VPN’ed into legacy Company B space, so a regular pathway existed for alerts to reach the console.

After a few months, the following events occurred.

I started to receive email alerts directed to my legacy Company B email account. The alerts described TCP 1433 (Microsoft SQL Server) scans that my Company A laptop was receiving; they were all being thrown by the MSSQL HoneyPoint listener on my laptop.

I was confused – partly because I had become absorbed in post-acquisition activities and had largely forgotten about the HPSS agent running on my laptop.

After looking at the emails and realizing what was happening, I got on the HPSS console and used the HPSS event viewer to get details. I learned that the attackers were internal, within Company A space. Courtesy of HPSS, I had their source IP addresses and the common payload they all delivered. Within Company A, I gathered information via NetBIOS scans of the source IPs. The infected machines were all Company A laptops belonging to various non-technical staff on the East Coast of the U.S.
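
For the curious, that kind of source triage can be reproduced with standard tools. The commands below are a hedged sketch; the tool choices and the address range are illustrative assumptions, not the exact commands I ran at the time:

    # Hypothetical sketch: profile suspicious source IPs via NetBIOS (UDP 137).
    # Requires nmap; 10.8.42.0/24 is a made-up range standing in for the sources.
    nmap -sU -p 137 --script nbstat 10.8.42.0/24

    # Alternative using nbtscan against the same made-up range:
    nbtscan 10.8.42.0/24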

All of that got passed on to the Company A CIO office. IDS signatures were generated, tweaked, and eventually the alerts stopped.  I provided payload and IP information from HPSS throughout the process.

I came away from the experience with a firm belief that company laptops, outfitted with HoneyPoint agents, are an excellent way of getting meaningful detection out into the field.

I strongly recommend you consider something similar. Your organization’s laptops are unavoidably on the front line of modern attacks.

Use them to your advantage.


Revisiting Nuance Detection

The core of nuance detection is extending alerting capabilities to find situations that specifically should not exist and that, if they occur, indicate a significant security failure. A simple, elegant example would be a motion sensor on a safe in your home, combined with something like your home alarm system.
 
A significant failure state would be for the motion sensor inside the safe to trigger while the home alarm system is set in away mode. When the alarm is in away mode, there should be no condition that triggers motion inside the safe. If motion is detected at any time, you might choose to alert in a minor way. But if the alarm is set to away mode, you might signal all kinds of calamity: flashing lights, bells and whistles, for example.
 
This same approach can apply to your network environment, applications or data systems. Define what a significant failure state looks like, and then create detection and alerting mechanisms, even if conditional, for the indicators of that state. It can be easy. 
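
To make that concrete, here is a minimal sketch of one such conditional detection. It assumes a Linux host, a bait account named canary_svc that should never authenticate, and a simple mail-based alert path; all of those specifics are illustrative, not a prescription:

    #!/bin/sh
    # Minimal nuance-detection sketch: the account "canary_svc" exists only
    # as bait, so any successful login is a significant failure state.
    # The log path, account name and alert address are assumptions; adjust
    # them to match your environment.
    if grep -q "Accepted .* for canary_svc" /var/log/auth.log; then
        echo "ALERT: bait account canary_svc authenticated" \
            | mail -s "Nuance detection: canary login" soc@example.com
    fi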
 
I remember thinking more deeply about this for the first time when I saw Marcus Ranum give his network burglar alarm speech at Defcon, what seems like a thousand years ago now. That moment changed my life forever. Since then, I have always wanted to work on small detections: the most nuanced of fail states, the deepest signs of compromise. HoneyPoint™ came from that line of thinking, albeit many years later. (Thanks, Marcus, you are amazing, BTW!) 🙂
 
I’ve written about approaches to it in the past, too: things like detecting web shells, detection-in-depth techniques and such. I even made some nice maturity and deployment models.
 
This month, I will be revisiting nuance detection more deeply, creating some more content around it and speaking about it more openly. I’ll also cover how we have extended HoneyPoint with the Handler portion of HoneyPoint Agent, in order to fully support event management and data handling into your security alerting systems from basic scripts and simple tools you can create yourself.
 
Stay tuned, and in the meantime, drop me a line on Twitter (@lbhuston) and let me know more about nuance detections you can think of or have implemented. I’d love to hear more about it. 

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use; one of those also covers converting column logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways and to achieve different analytical views. 

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using sort -n -r as before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by, and input_file is the name of the file to sort; a worked example follows this list)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types out the input file and sends it to sort for sorting on the third column in numeric order; -r would, of course, reverse it…)
    • You can write the output of this to a file with the redirect "> filename.txt", i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn’t exist OR append to a file if it does exist, i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt
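
As a quick illustration, here is the column sort at work on a made-up hit-rate file; the file name and values are hypothetical:

    $ cat hits.txt
    42 10.1.1.5 443
    7 10.1.1.9 22
    105 10.1.1.2 1433
    $ sort -k3 -n hits.txt
    7 10.1.1.9 22
    42 10.1.1.5 443
    105 10.1.1.2 1433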

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of: “When I use the techniques from your post, the outputs of the commands are column-separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, take an input of column data and print out the columns we want (can be any; in this case I want the first 3 columns), making the output comma delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output CSV.
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP are analyzed and then reduced to unique pairs, along with the number of times that each pair is duplicated in the input log (I use this as a “hit rate”, as I described earlier).
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints the unique pairs, and uniq -c then prepends a “hit rate” count column, so to get the output appropriately, I need all three fields. (A worked conversion with made-up data follows this list.)
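
To see what that final awk stage does, here is a hypothetical snippet of uniq -c style output being converted; the data is made up:

    $ cat pairs.txt
      14 10.1.1.5 192.168.2.9
       3 10.1.1.7 192.168.2.4
    $ cat pairs.txt | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    14,10.1.1.5,192.168.2.9
    3,10.1.1.7,192.168.2.4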

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share with the community. Exploration is awesome, so it will encourage users to play more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks also works in Linux, etc.)
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field numbers in the text analysis, pulled from the log header to determine the sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep: show me the lines that don’t match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the number of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of unique Source IP/Dest IP/Dest Port combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP address pairs, in descending order by how many times they talk to each other in the log (note that reversed pairings are counted separately – if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at which applications have been identified by the firewall, in descending order by the number of times that application identifier appears in the log (one more filtering sketch follows this list)
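
One more sketch along the same lines: grep -v can pre-filter known-noisy traffic before the awk stage. The pattern below is deliberately crude and hypothetical (it drops any line containing ",10." anywhere), so tune it to your own log format:

    # Hypothetical pre-filter: ignore lines mentioning ",10." before building
    # the hit-rate view (crude, but often good enough for quick triage)
    cat log.csv | grep -v ",10\." | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > filtered_hitrate.txt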
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there! 

Introducing AirWasp from MSI!


For over a decade, HoneyPoint has been proving that passive detection works like a charm. Our users have successfully identified millions of scans, probes and malware infections by simply putting “fake stuff” in their networks, industrial control environments and other strategic locations. 

Attackers have taken the bait too; giving HoneyPoint users rapid detection of malicious activity AND the threat intelligence they need to shut down the attacker and isolate them from other network assets.

HoneyPoint users have been asking us about manageable ways to detect and monitor for new WiFi networks and we’ve come up with a solution. They wanted something distributed and effective, yet easy to use and affordable. They wanted a tool that would follow the same high signal, low noise detection approach that they brag about from their HoneyPoint deployments. That’s exactly what AirWasp does.

We created AirWasp to answer these WiFi detection needs. AirWasp scans for and profiles WiFi access points from affordable deck-of-cards-sized appliances. It alerts on any detected access points through the same HoneyPoint Console in use today, minimizing new cost and management overhead. It also includes traditional HoneyPoints on the same hardware to help secure the wired network too!

Plus, our self-tuning white list approach means you are alerted only when a new access point is detected – virtually eliminating the noise of ongoing monitoring.

Just drop the appliance into your network and forget about it. It’ll be silent, passive and vigilant until the day comes when it has something urgent for you to act upon. No noise, just detection when you need it most.

Use Cases:

  • Monitor multiple remote sites and even employee home networks for new WiFi access points, especially those configured to trick users
  • Inventory site WiFi footprints from a central location by rotating the appliance between sites periodically
  • Detect scans, probes and worms targeting your systems using our acclaimed HoneyPoint detection and black hole techniques
  • Eliminate monitoring hassles with our integration capabilities to open tickets, send data to the SIEM, disable switch ports or blacklist hosts using your existing enterprise products and workflows

More Information

To learn how to bring the power and flexibility of HoneyPoint and AirWasp to your network, simply contact us via email (info@microsolved.com) or phone (614) 351-1237.

We can’t wait to help you protect your network, data and users!


HoneyPoint Security Server Allows Easy, Scalable Deception & Detection

Want to easily build out a scalable, customizable, easily managed, distributed honey pot sensor array? You can do it in less than a couple of hours with our HoneyPoint Security Server platform.

This enterprise-ready, mature & dependable solution has been in use around the world since 2006. For more than a decade, customers have been leveraging it to deceive, detect and respond to attackers in and around their networks. With “fake” implementations at the system, application, user and document levels, it is one of the most capable tool sets on the market. Running across multiple operating systems (Linux/Windows/OS X), and scattered throughout network and cloud environments, it provides incredible visibility not available anywhere else.

The centralized Console is designed for safe, effective, efficient and easy management of the data provided by the sensors. The Console also features simple integration with ticketing systems, SIEM and other data analytics/management tools.

If you’d like to take it for a spin in our cloud environment, or check out our localized, basic Personal Edition, give us a call, or drop us a line via info (at) microsolved (dot) com. Thanks for reading! 

Are you hacking!? There’s no hacking in baseball!

My Dad called me earlier this week to ask if I heard about the FBI’s investigation of the St. Louis Cardinals. My initial reaction was that the investigation must be related to some sort of steroid scandal or gambling allegations. I was wrong. The Cardinals are being investigated for allegedly hacking into the network of a rival team to steal confidential information. Could the same team that my Grandparents took me to see play as a kid really be responsible for this crime?

After I had time to read a few articles about the alleged hack, I called my Dad back. He immediately asked me if the Astros could have prevented it. From what I have read, this issue could have been prevented (or at least detected) by implementing a few basic information security controls around the Astros’ proprietary application. Unfortunately, it appears the attack was not discovered until confidential information was leaked onto a pastebin site.

The aforementioned controls include but are not limited to:

  1. Change passwords on a regular basis – It has been alleged that the Astros’ system was accessed by using the same password that was used when a similar system was deployed within the St. Louis Cardinals’ network. Passwords should be changed on a regular basis.
  2. Do not share passwords between individuals – Despite the fact that creating separate usernames and passwords for each individual with access to a system can be inconvenient, it reduces a lot of risk associated with deploying an application. For example, if each member of the Astros front office was required to have a separate password to their proprietary application, the Cardinals staff would not have been able to successfully use the legacy password from when the application was deployed in St. Louis. The Astros would also have gained the ability to log and track each individual user’s actions within the application.
  3. Review logs for anomalies on a regular basis – Most likely, the Astros were not reviewing any kind of security logs surrounding this application. If they were, they might have noticed failed login attempts into the application prior to the Cardinals’ alleged successful attempt. They also might have noticed that the application was accessed by an unknown or suspicious IP address. (A trivial sketch of this kind of review follows this list.)
  4. Leverage the use of honeypot technology – By implementing HoneyPot technology, the Astros could have deployed a fake version of this application. This could have allowed them to detect suspicious activity from within their network prior to the attackers gaining access to their confidential information. This strategy could have included leveraging MSI’s HoneyPoint Security Server to stand up a fake version of their proprietary application along with deploying a variety of fake documents within the Astros’ network. If an attacker accessed the fake application or document, the Astros would have been provided with actionable intelligence which could have allowed them to prevent the breach of one of their critical systems.
  5. Do not expose unnecessary applications or services to the internet – At this point, I do not know whether or not the Astros deployed this system within their internal network or exposed it to the internet. Either way, it’s always important to consider whether or not it is necessary to expose a system or service to the internet. Something as simple as requiring a VPN to access an application can go a long way to securing the confidential data.
  6. Leverage the use of network segmentation or IP address filtering – If the application was deployed from within the Astros internal network, was it necessary that all internal systems had access to the application? It’s always worthwhile to limit network access to a particular system or network segment as much as possible.
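
As referenced in item 3, here is a minimal sketch of the kind of log review that can surface those anomalies; the log path, message text and field position are assumptions standing in for whatever the application actually records:

    # Hypothetical: count failed application logins by source IP
    # (assumes the source IP is the last field on each matching line)
    grep "login failed" /var/log/app/access.log | awk '{print $NF}' | sort | uniq -c | sort -n -r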

Honestly, I hope these allegations aren’t true. I have fond memories of watching the Cardinals win the World Series in 2006 and 2011. I would really hate to see those victories tarnished by the actions of a few individuals. However, it’s important that we all learn a lesson from this: whether it’s your email or your favorite team’s playbook, don’t overlook the basic steps when attempting to secure confidential information.

Daily Log Monitoring and Increased Third Party Security Responsibilities: Here They Come!

For years now we at MSI have extolled the security benefits of daily log monitoring and reciprocal security practices between primary and third party entities present on computer networks. It is constantly being proven true that security incidents could be prevented, or at least quickly detected, if system logs were properly monitored and interpreted. It is also true that many serious information security incidents are the result of cyber criminals compromising third party service provider systems to gain indirect access to private networks.

I think that most large-network CISOs are well aware of these facts. So why aren’t these security practices common right now? The problem is that implementing effective log monitoring and third party security practices is plagued with difficulties. In fact, implementation has proven to be so difficult that organizations would rather suffer the security consequences than put these security controls in place. After all, it is cheaper and easier – usually – unless you are one of the companies that gets pwned! Right now, organizations are gambling that they won’t be among the unfortunate – like Target. A fool’s paradise at best!

But there are higher concerns in play here than mere money and efficiency. What really is at stake is the privacy and security of all the system users – which one way or another means each and every one of us. None of us likes to know our private financial, medical or personal information has been exposed to public scrutiny or compromise, not to mention identity theft and ruined credit ratings. And what about utilities and manufacturing concerns? Failure to implement the best security measures among power concerns, for example, can easily lead to real disasters and even loss of human life. All of which means that it behooves us to implement controls like effective monitoring and vendor security management. There is no doubt about it. Sooner or later we are going to have to bite the bullet.

Unfortunately, private concerns are not going to change without prodding. That is where private and governmental regulatory bodies are going to come into play. They are going to have to force us to implement better information security. And it looks like one of the first steps in this process is being taken by the PCI Security Standards Council. Topics for their special interest group projects in 2015 are going to be daily log monitoring and shared security responsibilities for third party service providers.

That means that all those organizations out there that foster the use of or process credit cards are going to see new requirements in these fields in the next couple of years. Undoubtedly, similar requirements for increased security measures will be seen at the governmental level as well. So why wait until the last minute? If you start now implementing not only effective monitoring and third party security, but other “best practices” security measures, it will be much less painful and more cost effective for you. You will also be helping us all by coming up with new ways to practically and effectively detect security incidents through system monitoring. How about increasing the use of low noise anomaly detectors such as honey pots? What about concentrating more on monitoring information leaving the network than what comes in? How about breaking massive networks into smaller parts that are easier to monitor and secure? What ideas can you come up with to explore?

This post written by John Davis.

My Time as a HoneyPoint Client

Prior to joining MicroSolved as an Intelligence Engineer, I was the Information Security Officer and Infrastructure Manager for a medical management company.  My company provided medical care and disease management services to over 2 million individuals.  Throughout my tenure at the medical management organization, I kept a piece of paper on my bulletin board that said “$100,000,000”.

Why “$100,000,000”?  At the time, several studies demonstrated that the average “street value” of a stolen medical identity was $50.  If each record was worth $50, that meant I was responsible for protecting $100,000,000 worth of information from attackers.  Clearly, this wasn’t a task I could accomplish alone.

Enter: MicroSolved & HoneyPoint

Through my membership with the Central Ohio Information Systems Security Association, I met several members of the MicroSolved team. I engaged them to see if they could help me protect my organization from the aforementioned attackers. They guided me through HIPAA/HITECH laws and helped me gain a further understanding of how I could protect our customers. We worked together to come up with innovative solutions that helped my team mitigate a lot of the risks associated with handling/processing 2 million health care records.

A core part of our solution was to leverage the use of HoneyPoint Security Server.  By using HoneyPoint, I was able to quickly gain visibility into areas of our network that I was often logically and physically separated from.  I couldn’t possibly defend our company against every 0-day attack.  However, with HoneyPoint, I knew I could quickly identify any attackers that had penetrated our network.

Working for an SMB, I wore many hats. This meant that I didn’t have time to manage another appliance that required signature updates. I quickly found out that HoneyPoint didn’t require much upkeep at all. A majority of my administrative tasks surrounding HoneyPoint were completed when I deployed agents throughout our LAN segments that mimicked existing applications and services. I quickly gained the real-time threat analysis that I was looking for.

If you need any assistance securing your environment or if you have any questions about HoneyPoint Security Server, feel free to contact us by sending an email to: info@microsolved.com.

This post contributed by Adam Luck.