Network Segmentation Month

February is Network Segmentation Month at MSI. During February, our blog and social media content will focus on network segmentation initiatives – a how, why, when, what and who kind of look at creating secure enclaves within your network.

These enclaves could be based on risk zones, types of systems, types of access, business processes, regulatory requirements or many other meta-factors.

We will discuss different reasons for segmenting, approaches to segmentation, and some of the lessons we’ve learned from segmenting some of the largest and most complex environments in our 25-year history. It won’t all be positive – we’ll also share some of the ways that segmentation fails, some of the challenges and some of the drawbacks of segmenting networks.

So, strap in and stay tuned for a month of content focused on using segmentation to better secure your environment.

As always, if you have stories to share or want to discuss a specific segmentation question, you can do that via email (info@microsolved.com) or via Twitter to @microsolved or to me personally (@lbhuston). MSI is always available to help you with segmentation projects, be that planning, implementation, oversight or attestation. We have a proprietary, data-centric approach to this work which we have been using for several years. You can learn more about it here – MachineTruth. We look forward to hearing from you!

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use, including converting column-based logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways to achieve different analytical views.

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using sort -n -r like before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by and input_file is the name of the file to sort)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this reads the input file and sends it to sort for sorting on the third column in numeric order) (-r would, of course, reverse it…)
    • You can write the output of this to a file with the redirect "> filename.txt", i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn’t exist OR append to a file if it does exist – i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt (a quick worked example follows below)
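
Here’s a quick, hypothetical illustration of sort -k in action (the file name and data are invented for this example). Assume hits.txt holds the count / source IP / destination IP output of a uniq -c run:

  • cat hits.txt | sort -k1 -n -r > by_count.txt (re-sorts numerically on column 1, the hit count, in descending order)
  • cat hits.txt | sort -k3 > by_dest.txt (re-sorts on column 3, the destination IP, instead)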

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorites. As always, thanks and have a great weekend!

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of “When I use the techniques from your post, the outputs of the commands are column-separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, this takes an input of column data and prints out the columns we want (any columns will do; in this case I want the first three), making the output comma delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output as CSV.
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP will be analyzed and then reduced to unique pairs, along with the number of times that each pair is duplicated in the input log (I use this as a “hit rate”, as I described earlier).
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints the unique pairs, but the uniq -c stage also adds a column with the “hit rate”, so to get the output appropriately, I need all three fields. (There’s a short worked example below.)
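
To make that concrete, here’s a minimal, hypothetical walk-through (the file name and addresses are invented for the example). Suppose pairs.txt holds the output of the sort | uniq -c | sort -n -r stage:

  12 10.1.1.5 192.168.2.9
   3 10.1.1.7 192.168.2.9

Running cat pairs.txt | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}' then yields comma delimited rows:

  12,10.1.1.5,192.168.2.9
  3,10.1.1.7,192.168.2.9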

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it, and I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share them with the community – exploration is awesome, and it will encourage folks to play with their data more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, though with minor tweaks this also works in Linux, etc.).
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field numbers in the text analysis, pulled from the log header to map the sequence (a quick sketch of pulling these fields follows the list):
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
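
As a quick sketch of how those positions get used (log.csv is a placeholder file name, and the field numbers assume the header order above), you can pull just those fields into a smaller working file:

  • cat log.csv | awk 'BEGIN { FS = ","; OFS = ","} ; {print $8,$9,$26,$15,$32}' > flows.csv (flows.csv now holds source IP, dest IP, dest port, AppID and bytes – one flow per line)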
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep, show me the lines that don’t match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of Source IP/Dest IP/Dest Port unique combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP addresses, in descending order by how many times they talk to each other in the log (note that reversed pairings will be separate, if present – that is, if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at which applications have been identified by the firewall, in descending order by the number of times that application identifier appears in the log (one more grep -v variation follows below)
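
Since grep -v got a mention above but no example, here is one more hypothetical variation – the 10.1.1.x subnet is invented for the example, so swap in whatever pattern fits your environment:

  • cat log.csv | grep -v "10\.1\.1\." | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort | uniq -c | sort -n -r > external_pairs.txt (drops lines mentioning the hypothetical 10.1.1.x management subnet before ranking the remaining source/dest pairs)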
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there! 

An Exercise to Increase IT/OT Engagement & Cooperation

Just a quick thought on an exercise to increase the cooperation, trust and engagement between traditional IT and OT (operational technology – ICS/SCADA tech) teams, though it likely applies to just about any two technical teams, including IT and development.

Here’s the idea: Host a Hack-a-thon!

It might look something like this:

  • Invest in an abundant supply of LittleBits kits. These are like Legos with electronics, mechanical circuits and even Arduino/Cloud controllers built in. Easy, safe, smart and fun!
  • Put all of the technical staff in a room together for a day. Physically together. Ban all cell phones, calls, emails, etc. for the day – get people to engage. Cater in meals so they can eat together and develop rapport.
  • Split the folks into two or more teams of equal size, mixing IT and OT team members, since each team will need both skill sets (digital and mechanical knowledge) anyway.
  • Create a mission – over the next 8 hours, each team will compete to see who can use their LittleBits set to design, program and prototype a solution to a significant problem faced in their everyday work environments.
  • Provide prizes for the 1st and 2nd place teams. Reach deep – really motivate them!
  • Let the teams go through the process of discussing their challenges to find the right problem, then have them draw out their proposed solution.
  • After lunch, have the teams discuss the problems they chose and their suggested fix. Then have them build it with the LittleBits.
  • Right before the end of the day, hold the judging and award the prizes.

Then, 30 days later, have a conference call with the group. Have them again discuss the challenges they face together, and see if common solutions emerge. If so, implement them.

Do this a couple times a year, maybe using something like Legos, Raspberry Pis, Arduinos or just whiteboards and markers. Let them have fun, vent their frustrations and actively engage with one another. The results will likely astound you.

How does your company further IT/OT engagement? Let us know on Twitter (@microsolved) or drop me a line personally (@lbhuston). Thanks for reading! 

My 3 Favorite Podcast Episodes (So Far…)

The State of Security Podcast has been a fun endeavor, and I am committed to continuing work on it. I am currently working on raising it to multiple episodes per month, so as I was reflecting, I thought I would share my 3 favorite episodes so far. There are so many great moments and so much generosity from my guests that I am thrilled with all of them – but everyone has to have favorites… 🙂

#1 – Episode 1 – This one holds a special place in my heart. The wonderful Dave Rose and the absolutely brilliant Helen Patton made this interview segment much more comfortable than it should have been. If you can get past my stumbling and bumbling, they share some pure magic with the audience. I have hopefully improved as an interviewer since, but much thanks to them for helping SoS get off to a roaring start!

#2 – Episode 6 – One of the most personal episodes ever, an anonymous friend shares a tale of what it is like to work for over a year on a major breach. There is heartbreak and pain here, well beyond infosec. I still get chills every time I listen to it.

#3 – Episode 9 – This one is so personal to me, I get butterflies when people tell me they listened to it. Adam Luck interviews me, and the questions get very personal, very fast. We cover some personal history, why I am an infosec professional and some of the amazing friendships I have enjoyed over the years. Stark and raw, this is worth dealing with the crappy audio, or at least people tell me it is. (This episode is also why we hired audio professionals for our episodes.)

Those are my 3. What are yours? Hit me up on Twitter (@lbhuston) or @microsolved and let us know. Thanks for listening!

State of Security Episode 12 Now Available

We’ve just released episode 12 of the State Of Security Podcast. This time around, I answer questions from listeners. Things like the idea of a “Great Firewall” for the USA, the hack of the DNC, questions about launching products, working with mentees and even what I read in 2016. 

There’s some good stuff in here, and the podcast runs just under an hour.

Check it out and let me know on Twitter what you think (@lbhuston) or drop @microsolved a line. 

Happy New Year, folks, and thanks for listening! 

Introducing AirWasp from MSI!

For over a decade, HoneyPoint has been proving that passive detection works like a charm. Our users have successfully identified millions of scans, probes and malware infections by simply putting “fake stuff” in their networks, industrial control environments and other strategic locations. 

Attackers have taken the bait too, giving HoneyPoint users rapid detection of malicious activity AND the threat intelligence they need to shut down the attacker and isolate them from other network assets.

HoneyPoint users have been asking us about manageable ways to detect and monitor for new WiFi networks and we’ve come up with a solution. They wanted something distributed and effective, yet easy to use and affordable. They wanted a tool that would follow the same high signal, low noise detection approach that they brag about from their HoneyPoint deployments. That’s exactly what AirWasp does.

We created AirWasp to answer these WiFi detection needs. AirWasp scans for and profiles WiFi access points from affordable deck-of-cards-sized appliances. It alerts on any detected access points through the same HoneyPoint Console in use today, minimizing new cost and management overhead. It also includes traditional HoneyPoints on the same hardware to help secure the wired network too!

Plus, our self-tuning white list approach means you are only alerted when a new access point is detected – virtually eliminating the noise of ongoing monitoring.

Just drop the appliance into your network and forget about it. It’ll be silent, passive and vigilant until the day comes when it has something urgent for you to act upon. No noise, just detection when you need it most.

Use Cases:

  • Monitor multiple remote sites and even employee home networks for new WiFi access points, especially those configured to trick users
  • Inventory site WiFi footprints from a central location by rotating the appliance between sites periodically
  • Detect scans, probes and worms targeting your systems using our acclaimed HoneyPoint detection and black hole techniques
  • Eliminate monitoring hassles with our integration capabilities to open tickets, send data to the SIEM, disable switch ports or blacklist hosts using your existing enterprise products and workflows

More Information

To learn how to bring the power and flexibility of HoneyPoint and AirWasp to your network, simply contact us via email (info@microsolved.com) or phone (614) 351-1237.

We can’t wait to help you protect your network, data and users!


Election Hacking

There has been a lot of talk in the news lately about election hacking, especially about the Russian government possibly attempting to subvert the upcoming presidential election. And I think that in a lot of ways it is good that this has come up. After all, voting systems are based on networked computer systems. Private election and campaign information is stored and transmitted on networked computer systems. That means that hacking can indeed be a factor in elections, and the public should be made well aware of it. We are always being told by ‘authorities’ and ‘pundits’ what is and is not possible. And generally we are gullible enough to swallow it. But history has a lot of lessons to teach us, and one of the most important is that the ‘impossible’ has a nasty way of just happening.

Authorities are saying now that, because of the distributed nature of voting systems and redundancies in voting record-keeping, it would be virtually impossible for an outside party to rig the numbers in the election. But that is just a direct method of affecting an election. What about the indirect methods? What would happen, for instance, if hackers could just cause delays and confusion on Election Day? If they could cause long lines in certain voting districts and smooth sailing in others, couldn’t they affect the number of Democratic votes versus Republican votes? We all know that if there is a hassle at the polls, a lot of people will just give up and go back home. And this is just one way that elections could be affected by hacking. There are bound to be plenty of others.

With this in mind, isn’t it wise to err on the side of caution? Shouldn’t we as a people insist that our voting systems are secured as well as possible? Don’t we want to consider these systems ‘vital infrastructure’? These are the reasons I advocate instituting best practices as the guidance for securing electronic voting systems. Systems should be configured as securely as possible, associated communications should be robust and strongly encrypted, risk should be assessed and addressed before the election, monitoring efforts should be strictly followed, and incident response plans should be practiced and ready to go. These efforts would be one good way to help ensure a fair and ‘hacker free’ election.