Network Knowledge and Segmentation

If you look at most cutting-edge network information security guidance, job #1 can be paraphrased as “Know Thy Network.” It is firmly recommended (and in much regulatory guidance demanded) that organizations keep up-to-date inventories of hardware and software assets present on their computer networks. In fact, the most current recommendation is that organizations utilize software suites that not only keep track of inventories, but monitor all critical network entities with the aim of detecting any hardware or software applications that should not be there.
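As a purely illustrative sketch (not part of any specific guidance, and with nmap and the 10.0.0.0/24 subnet standing in for whatever discovery tool and address ranges you actually use), the “detect what should not be there” idea can be as simple as sweeping your own address space and diffing the results against the authoritative inventory:

  # hypothetical ad-hoc discovery sweep of one of your own subnets
  nmap -sn 10.0.0.0/24 -oG - | awk '/Up$/{print $2}' | sort > live_hosts_today.txt
  # anything alive on the wire that is not in the approved inventory deserves a closer look
  comm -13 <(sort known_inventory.txt) live_hosts_today.txt

A scripted sweep like this is no substitute for a real asset management suite, but it illustrates the kind of continuous reconciliation those tools perform.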

Another part of network knowledge is mapping data flows and trust relationships on networks, and mapping which business entities use which IT resources and information. For this knowledge, I like to go to my favorite risk management tool: the Business Impact Analysis (BIA). In this process, input comes from personnel across the enterprise detailing what they do, how they do it, what resources they need, what input they need, what output they produce and more (see MSI blog archives for more information about BIA and what it can do for your information security program).

About now, you are probably asking what all this has to do with network segmentation. The answer is that you simply must know where all your network assets are, who needs access to them and how they move before you can segment the network intelligently and effectively. It can all be summed up with one phrase: Need to Know. Need to know is the very basis of access control, and access control is what network segmentation is all about. You do not want anyone on your network to “see” information assets that they do not need to see in order to properly perform their business functions. And by the same token, you do not want network personnel to be cut off from information assets that they do need to perform their jobs. These are the reasons network knowledge and network segmentation go hand-in-hand.

Proper network knowledge becomes even more important when you take the next step in network segmentation: enclaving. I will discuss segmentation versus enclaving in my next blog later this month.

Why Segment Your Network?

Network segmentation is the practice of splitting your computer network into subnetworks or network segments (also known as zoning). This is typically done using combinations of firewalls, VLANs, access controls and policies & procedures. Implementing network segmentation requires planning and effort, and it can entail some teething problems along the way as well. So why should it be done?

The number one reason is to protect the security of your network resources and information. When people first started to defend their homes and enterprises from attack, they built perimeter walls and made sure everything important was inside of those walls. They figured doing this would keep their enemies outside where they couldn’t possibly do any damage. This was a great idea, but unfortunately it had problems in the real world.

People found that the enemy only had to make one small hole in their perimeter defenses to be able to get at all of their valuables. They also realized that their perimeter defense didn’t stop evil insiders from wreaking havoc on their valuables. To deal with these problems, people started to add additional layers of protection inside of their outer walls. They walled off enclaves inside the outer defenses and added locks and guards to their individual buildings to thwart attacks.

This same situation exists now in the world of network protection. As network security assessors and advisors, we see that most networks we deal with are still “flat;” they are not really segmented and access controls alone are left to prevent successful attacks from occurring. But in the real world, hacking into a computer network is all about gaining a tiny foothold on the network, then leveraging that access to navigate around the network. The harder it is for these attackers to see the resources they want and navigate to them, the safer those resources are. In addition, the more protections that hackers need to circumvent during their attacks, the more likely they are to be detected. It should also be noted that network segmentation works just as well against the internal threat; it is just as difficult for an employee to gain access to a forbidden network segment as it is for an Internet-based attacker.
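In practice, making resources harder to see and reach usually comes down to default-deny filtering between segments. The following is a minimal sketch only, assuming a Linux firewall sits between a user segment and a server enclave; the subnets and port are placeholders rather than anything from a real environment:

  iptables -P FORWARD DROP                                   # default deny between segments
  iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A FORWARD -s 192.168.10.0/24 -d 192.168.20.0/24 -p tcp --dport 443 -j ACCEPT
  iptables -A FORWARD -j LOG --log-prefix "SEGMENT-DENY: "   # log everything else for review

Everything not explicitly needed between the two zones is dropped and logged, which is also what makes an attacker's (or a curious insider's) lateral movement noisier and easier to detect.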

Increased security is not the only advantage of network segmentation. Far from making network congestion worse, well-implemented segmentation can actually reduce it: each segment contains fewer hosts, so there is less local traffic per segment. In addition, segmentation can help you contain network problems by limiting the effects of failures that occur elsewhere on the network.

The business reasons for implementing network segmentation are becoming more apparent every day. Increasingly, customers are demanding better information security from the businesses they deal with. If a customer has a choice between two very similar companies, they will almost assuredly pick the one with better security. Simply being able to tell your customers that your network is fully segmented and controlled can radically improve your chances of success.

Segmenting With MSI MachineTruth

Many organizations struggle to implement network segmentation and secure network enclaves for servers, industrial controls, SCADA or regulated data. MicroSolved, Inc. (“MSI”) has been helping clients solve information security challenges for nearly twenty-five years on a global scale. In helping our clients segment their networks and protect their traffic flows, we identified a better approach to solving this often intractable problem.

That approach, called MachineTruth™, leverages our proprietary machine learning and data analytics platform to support our industry-leading team of experts throughout the process. Our team performs offline analysis of configuration files, NetFlow data and traffic patterns to simplify the challenge. Instead of manual review by teams of network and systems administrators, MachineTruth takes automated deep dives into the data to provide real insights into how to segment, where to segment, what filtering rules need to be established and how those rules are functioning as they come online.

Our experts then work with your network and security teams, or one of our select MachineTruth Implementation Partners, to guide them through the process of installing and configuring filtering devices, detection tools and applications needed to support the segmentation changes. As the enclaves start to take shape, ongoing oversight is performed by the MSI team, via continual analytics and modeling throughout the segmentation effort. As the data analysis and implementation processes proceed, the controls and rules are optimized and transitioned to steady state maintenance.

Lastly, the MSI team works with the segmentation stakeholders to document, socialize and transfer knowledge to those who will manage and support the newly segmented network and its various enclaves for the long term. This last step is critical to ensuring that the network changes and segmentation initiatives remain in place in the future.

This data-focused, machine learning-based approach enables segmentation for even the most complex of environments. It has been used to successfully save hundreds of man-years of labor and millions of dollars in overhead costs. It has reduced the time to segment international networks from years to months, while significantly raising the quality and security of the new environments. It has accomplished these feats, all while reducing network downtime, outages and potentially dangerous misconfiguration issues.

If your organization is considering or in the process of performing network segmentation for your critical data, you should take a look at the MachineTruth approach from MSI. It could mean the difference between success and struggle for this critical initiative.


Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use; one of them also covers converting column-based logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways and to achieve different analytical views. 

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using sort -n -r as before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you'd like to sort by and input_file is the name of the file to sort)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types the input file and sends it to sort for sorting on the third column in numeric order) (-r would, of course, reverse it)
    • You can write the output of this to a file with the redirect "> filename.txt", i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn't exist OR append to a file if it does exist, i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt (one more related sort example follows below)
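One related flag that pairs nicely with these: if the file you are re-sorting is the comma-separated output from the follow-up post (rather than whitespace-separated columns), tell sort where the fields are with -t. For example, with a placeholder file name:

  • sort -t, -k2 -n output.csv
    • numeric sort on the second comma-separated field; output.csv is just a stand-in for your own CSV output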

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing quick and dirty Palo Alto firewall log analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of “When I use the techniques from your post, the output of the commands is column-separated data. I need it to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert it?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, take an input of column data, and print out the columns we want (can be any, in this case I want the first 3 columns), and make the outputs comma delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output CSV
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP are extracted and reduced to unique pairs, along with the number of times each pair appears in the input log (I use this count as the “hit rate” I described earlier).
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints only the IP pair; the uniq -c step then adds a leading “hit rate” count column, so to format the output appropriately I need all three fields (see the sketch just below).
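To make that concrete, here is roughly what one line of the intermediate output looks like right after the uniq -c step (the addresses are made up):

  120 10.1.1.5 192.0.2.7

Field $1 is the hit count that uniq -c added, while $2 and $3 are the source and destination IPs from the first awk, which is why the second awk has to print all three.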

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share with the community. Exploration is awesome, so it will encourage users to play more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks it also works in Linux, etc.).
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field numbers in the text analysis, pulled from the log header to get the field sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep: show me the lines that don't match my pattern); an example of it in action appears after the sample command lines below
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}', which prints specific columns in CSV files
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of unique source IP/destination IP/destination port combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of unique source and destination IP address pairs, in descending order by how many times they talk to each other in the log (note that reversed pairings are counted separately – if A talks to B there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the exact same techniques, but now we are looking at which applications the firewall has identified, in descending order by the number of times each application identifier appears in the log
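One command from the toolbox above that did not make it into the examples is grep -v. Here is one way it can slot into the same pipelines; note that the “10.” prefix is just a stand-in for whatever known or noisy traffic you want to exclude, and that this is a crude match that will hit the pattern anywhere on the line:
  • cat log.csv | grep -v ',10\.' | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > filtered_pairs_by_hitrate.txt
    • the same unique-pair analysis as above, but with lines containing “,10.” dropped before the counting happens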
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there! 

Election Hacking

There has been a lot of talk in the news lately about election hacking, especially about the Russian government possibly attempting to subvert the upcoming presidential election. And I think that in a lot of ways it is good that this has come up. After all, voting systems are based on networked computer systems. Private election and campaign information is stored and transmitted on networked computer systems. That means that hacking can indeed be a factor in elections, and the public should be made well aware of it. We are always being told by ‘authorities’ and ‘pundits’ what is and is not possible. And generally we are gullible enough to swallow it. But history has a lot of lessons to teach us, and one of the most important is that the ‘impossible’ has a nasty way of just happening.

Authorities are saying now that because of the distributed nature of voting systems and redundancies in voting record-keeping, it would be virtually impossible for an outside party to rig the numbers in the election. But that is just a direct method of affecting an election. What about the indirect methods? What would happen, for instance, if hackers could just cause delays and confusion on Election Day? If they could cause long lines in certain voting districts and smooth sailing in others, couldn't they affect the number of Democratic votes versus Republican votes? We all know that if there is a hassle at the polls, a lot of people will just give up and go back home again. And this is just one way that elections could be affected by hacking. There are bound to be plenty of others.

With this in mind, isn’t it wise to err on the side of caution? Shouldn’t we as a people insist that our voting systems are secured as well as is possible? Don’t we want to consider these systems to be ‘vital infrastructure’? These are the reasons I advocate instituting best practices as the guidance to be used when securing electronic voting systems. Systems should be configured as securely as possible, associated communications systems should be robust and highly encrypted, risk should be assessed and addressed before the election, monitoring efforts should be strictly followed and incident response plans should be practiced and ready to go. These efforts would be one good way to help ensure a fair and ‘hacker free’ election.

How to Build an Information Security Program

Organizations have a lot of trouble with information security programs:

  • They don't really understand why modern organizations must have effective information security programs or how to properly integrate them into their present business models.
  • They don't truly understand the complexities of modern computer and communications systems, and so have no gut instinct for how to properly secure them. They therefore must trust information security pundits and service providers, even though they get lots of contradictory and confusing advice.
  • They spend a lot of money buying all kinds of security devices and services and they find that their information security program is still full of holes and problems.
  • And after all of this, they find that they are constantly being asked for even more money to buy even more devices and services.

Sound familiar? Who wouldn’t become frustrated and cynical?! So my advice is: whenever a problem becomes seemingly too complex to tackle, go back to the beginning and start from first principles.

What exactly are you trying to protect? Have you identified and prioritized the business functions, information, devices and infrastructure that you need in order to run smoothly as an organization? If not, that should be your first priority. You should record and prioritize every business function needed to run your organization. You should also ensure that you keep accurate inventories of critical software applications and hardware devices. In addition, you should know exactly how information flows into, out of and around your network and what trusts what. If you don’t know exactly what you have, how can you protect it effectively, and what is more, economically?

Do you have effective mechanisms in place to limit access to your systems and information? You need to limit access to only those individuals who have a real need for that access (something you have just quantified by taking care of the first step outlined above). That means that you must configure your systems to require user authentication, you must properly enroll and disenroll users, you must positively identify those seeking access and you must have access management plans in place to oversee the whole process.

Have you leveraged your most valuable information security asset: your employees? Machines can only aid people in information security matters; they can never replace them. If you properly train, and what is even more important, enfranchise your employees in the information security program, the return will astound you. Make them understand how valuable they are to the organization and ask for their help in security matters. Make information security training a fun thing and pass out kudos and small rewards to those who help the program. This can save you big money on automated security systems that just don't perform as advertised.

Are you storing and transmitting information securely? For most organizations, information is their most valuable asset. If this is true of your organization, you should ensure that you properly protect it when it is moving or just sitting in storage. You should classify information for type and sensitivity and protect it accordingly. Spare no expense in protecting the really important info, but why waste time and money encrypting or otherwise protecting minor information that is of little consequence if revealed?

Do you know what is happening on your systems? Computer networks and the processes and people controlling them must be effectively monitored. Organizations should employ effective tools to monitor, parse and consolidate events and log data on their networks. But these should only be tools to aid humans in making this task manageable; they can never actually replace the human element. In addition, management personnel at all levels of the organization should have processes in place to ensure that security policies and procedures are current, effective and enforced. If you perform these tasks correctly, the most difficult part of incident response – incident identification – is also taken care of.
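As one small, hedged illustration of the “tools that aid humans” point (the log path and message format vary from system to system, so treat this strictly as a sketch), even a one-liner can turn raw events into something a reviewer can act on, such as summarizing failed SSH logins by source address:

  grep 'Failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -n -r | head

The output is a count of failed attempts per source address, noisiest first, which is the kind of consolidated view a human reviewer actually needs.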

Do you test your security measures? You can never really tell how effective an information security program is without testing it. There are many tools available that test your network for security vulnerabilities such as configuration errors, access holes and out of date systems. You should employ these mechanisms regularly and patch the holes they uncover in a logical and hierarchical manner. You should also consider other kinds of security tests such as penetration testing, application testing and social engineering exercises. These will help you understand not only where the holes are, but how well your systems and personnel are coping with them.

These processes are the foundation of an effective information security program. If you build these strongly, other information security processes such as incident response, business continuity and vendor management will be well supported. If you skimp on these most basic steps, then your information security program will likely collapse of its own weight.

Incident Response & Business Continuity – Planning and Practice Make Perfect

Computer systems and networks are irrevocably woven into the fabric of business practices around the world; we quite literally cannot do without them. What’s more, our lives and our business practices become more dependent on these devices every day. Unfortunately, this makes computer networks the number one criminal playground in the modern world.

Although computer security technology and processes are becoming increasingly effective, cyber-criminals have more than kept pace. Every year the number of computer security compromises is increasing. Cyber-attacks are becoming more sophisticated and can originate from anywhere that has Internet connectivity. It should also be remembered that cyber-criminals only have to be successful in one of their attacks to win, while businesses must successfully defend against every attack, every time to win the game. The upshot of all this is that every business is increasingly liable to experience some kind of cyber-attack. That is the reason why regulators and security professionals have been pushing businesses to increase the scope and effectiveness of their incident response capabilities in recent years.

To help counter modern cyber-incidents effectively, organizations must respond to them quickly and in an accurate, pre-determined manner. IR teams must determine and document specific actions to be taken when common information security incidents occur. Responsibilities for performing these incident response “procedures” should be assigned to specific team members. Once detailed procedures for addressing common security incidents have been completed, the IR team should review them and role-play response scenarios on a recurring basis (at least twice annually is recommended). It is an unfortunate truth that incident response is a perishable skill and must be regularly practiced to remain effective.

This same advice also applies to business continuity/disaster recovery plans – functionally, they are really the same thing as incident response. Whether your business is facing a flood, a tornado, a cyber-attack or even an employee error, they all have negative effects that can be lessened if you have effective, pre-planned responses in place that everyone involved is familiar with and has practiced regularly. So why not practice IR and BC/DR together? It can minimize the time personnel are away from their regular business duties and maximize the effectiveness of their training.

Hurricane Matthew Should Remind You to Check Your DR/BC Plans

The news is full of tragedy from Hurricane Matthew at the moment, and our heart goes out to those being impacted by the storm and its aftermath.

This storm is a powerful hit on much of the southeastern US, and should serve as a poignant reminder to practice, review and triple-check your organization's DR and BC plans. You should review your processes and procedures yearly, with an update at least quarterly and anytime major changes to your operations or environment occur. Most organizations seem to practice these events quarterly or at least twice per year. They often use a full test once a year, and tabletop exercises for the others.

This seems to be an effective cycle and approach. 

We hope that everyone stays safe from the hurricane and we are hoping for minimal impacts, but we also hope that organizations take a look at their plans and give them a once over. You never know when you just might need to be better prepared.