Segmenting With MSI MachineTruth

Many organizations struggle to implement network segmentation and secure network enclaves for servers, industrial controls, SCADA or regulated data. MicroSolved, Inc. (“MSI”) has been helping clients solve information security challenges for nearly twenty-five years on a global scale. In helping our clients segment their networks and protect their traffic flows, we identified a better approach to solving this often intractable problem.

That approach, called MachineTruth™, leverages our proprietary machine learning and data analytics platform to support our industry-leading team of experts throughout the process. Our team leverages offline analysis of configuration files, NetFlow data and traffic patterns to simplify the challenge. Instead of manual review by teams of network and systems administrators, MachineTruth takes automated deep dives into the data to provide real insights into how to segment, where to segment, what filtering rules need to be established and how those rules are functioning as they come online.

Our experts then work with your network and security teams, or one of our select MachineTruth Implementation Partners, to guide them through the process of installing and configuring filtering devices, detection tools and applications needed to support the segmentation changes. As the enclaves start to take shape, ongoing oversight is performed by the MSI team, via continual analytics and modeling throughout the segmentation effort. As the data analysis and implementation processes proceed, the controls and rules are optimized and transitioned to steady state maintenance.

Lastly, the MSI team works with the segmentation stakeholders to document, socialize and transfer knowledge to those who will manage and support the newly segmented network and its various enclaves for the long term. This last step is critical to ensuring that the network changes and segmentation initiatives remain in place in the future.

This data-focused, machine learning-based approach enables segmentation for even the most complex of environments. It has been used to successfully save hundreds of man-years of labor and millions of dollars in overhead costs. It has reduced the time to segment international networks from years to months, while significantly raising the quality and security of the new environments. It has accomplished these feats, all while reducing network downtime, outages and potentially dangerous misconfiguration issues.

If your organization is considering or in the process of performing network segmentation for your critical data, you should take a look at the MachineTruth approach from MSI. It could mean the difference between success and struggle for this critical initiative.


Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks it also works in Linux, etc.).
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field #’s in the text analysis, pulled from the log header to determine their sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
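
Note that these field numbers came from my particular export; depending on your PAN-OS version and log type, the columns may be laid out differently, so it is worth confirming the positions against your own header line first. A quick way to do that (log.csv being whatever you named your exported log):
  • head -1 log.csv | tr ',' '\n' | nl
    • prints each header field on its own line with a number beside it – those numbers map directly to awk’s $1, $2, $3 and so on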
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep: show me the lines that don’t match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files 
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of unique Source IP/Dest IP/Dest Port combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP address pairs, in descending order by how many times they talk to each other in the log (note that reversed pairings are counted separately, if present – that is, if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the exact same techniques, but now we are looking at which applications the firewall has identified, in descending order by the number of times that application identifier appears in the log
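
One more that I found handy, sketched here on the assumption that field $32 really does hold the byte count in your export (check your header first):
  • cat log.csv | awk 'BEGIN { FS = ","} ; {bytes[$8" "$9] += $32} END {for (pair in bytes) print bytes[pair], pair}' | sort -n -r > bytes_by_pair.txt
    • this one totals the bytes transferred between each unique Source/Destination IP pair and sorts the pairs in descending order by volume – handy for spotting the “top talkers” before you start writing filtering rules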
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there! 

3 Reasons You Need Customized Threat Intelligence

Many clients have been asking us about our customized threat intelligence services and how to best use the data that we can provide. Here are three of the most common ways they are putting that data to work:

1. Using HoneyPoint™, we can deploy fake systems and applications, both internally and in key external locations, that allow you to generate real-time, organization-specific indicators of compromise (IoC) data – including a wide variety of threat source information for blacklisting, baseline metrics that make it easy to measure up-to-the-moment changes in the level of threat activity against your organization, and a wide variety of scenarios for application and attack surface hardening.

2. Our SilentTiger™ passive assessments can give you a wider lens for vulnerability assessment visibility than your perimeter alone. They can be used to assess, either as a single snapshot or on an ongoing basis, the security posture of locations where your brand is extended to business partners, cloud providers, supply chain vendors, critical dependency APIs and data flows, and other systems well beyond your perimeter. Since the testing is passive, you don’t need permission, contract language or control of the systems being assessed. You can get the data in a stable, familiar format – very similar to vulnerability scanning reports – or via customized data feeds into your SIEM/GRC/ticketing tools or the like. This means you can be more vigilant against more attack surfaces without more effort and more resources.

3. Our customized TigerTrax™ Targeted Threat Intelligence (TTI) offerings can be used for brand specific monitoring around the world, answering specific research questions based on industry / geographic / demographic / psychographic profiles or even products / patents or economic threat research. If you want to know how your brand is being perceived, discussed or threatened around the world, this service can provide that either as a one-time deliverable, or as an ongoing periodic service. If you want our intelligence analysts to look at industry trends, fraud, underground economics, changing activist or attacker tactics and the way they collide with your industry or organization – this is the service that can provide that data to you in a clear and concise manner that lets you take real-world actions.

We have been offering many of these services to select clients for the last several years. Only recently have we decided to offer them to our wider client and reader base. If you’d like to learn how others are using the data or how they are actively hardening their environments and operations based on real-world data and trends, let us know. We’d love to discuss it with you! 

80/20 Rule of Information Security

After my earlier post about the SDIM project, several people on Twitter also asked me to do the same for the 80/20 Rule of Information Security project we completed several years ago. 

It is a list of key security projects, their regulatory mappings, maturity models and such. It is great for building a program or checking yours against an easy-to-use baseline.

Thanks for reading, and here is where you can learn more about the 80/20 project. Click here.

MSI’s Targeted Threat Intelligence is Adding Huge Value to M&A Due Diligence

Many of our clients have been using our Targeted Threat Intelligence service offerings to assist them with due diligence efforts around mergers and acquisitions activities. For many years, clients have leveraged MSI services during and after an acquisition, usually to perform security assessments, identify control gaps and validate remediations. Our network discovery and mapping tools, including MachineTruth, have been an excellent fit for helping them understand exactly what their new architectures look like and where interconnections and network hardening make sense.

Now, with TigerTrax™ and MSI’s passive assessment platform, our threat intelligence and passive assessment capabilities are aiding clients in the due diligence process, making us an excellent partner throughout the M&A lifecycle! These new offerings allow us to add brand/trend data and cyber-security analysis to potential M&A targets, before they are even aware that they are prospects and without their knowledge or contractual engagement. It allows organizations more flexibility in identifying potential Intellectual Property leaks, poor security practices or other IT risks before approaching an acquisition target. The brand/trend reputational data is blended in, providing a new lens to look for potential issues around customer service, activism, impacts from poor online or data hygiene, etc.

While these same techniques have proven to be a boon for vendor supply chain security, they have been leveraged in M&A activity for a year or longer. MSI has a strong history in this space and continues to innovate with new data sources, optimized processes and bleeding-edge tools for making M&A safer, more efficient and more profitable. To learn more about our M&A offerings, hear about our work and research in the M&A space or discuss how we can assist your organization with M&A services, please drop us a line at info@microsolved.com, or give us a call at (614) 351-1237 today. We look forward to working with you! 

Hosting Providers Matter as Business Partners

Hosting providers seem to be an often overlooked exposure area for many small and mid-size organizations. In the last several weeks, as we have been growing the use of our passive assessment platform for supply chain assessments, we have identified several instances where the web site hosting company (or design/development company) is among the weakest links. Likely, this is due to the idea that these services are commodities and they are among the first areas where organizations look to lower costs.

The fallout from that issue, though, can be problematic. In some cases, organizations find themselves doing business with hosting providers who reduce their operational costs by failing to invest in information security.* Here are just a few of the most significant issues that we have seen in this space:
  • “PCI accredited” checkout pages hosted on the same server as other sites that are clearly under the control of an attacker
  • Exposed applications and services with default credentials on the same systems used to host web sites belonging to critical infrastructure organizations
  • Dangerous service exposures on hosted systems
  • Malware infested hosting provider ad pages, linked to hundreds or thousands of their client sites hosted with them
  • Poorly managed encryption that impacts hundreds or thousands of their hosted customer sites
  • An interesting correlation of blacklisted host density to geographic location and the targeted verticals that some hosting providers sell to
  • Pornography being distributed from the same physical and logical servers as traditional businesses and critical infrastructure organizations
  • A clear lack of DoS protection or monitoring
  • A clear lack of detection, investigation, incident response and recovery maturity on the part of many of the vendors

It is very important that organizations realize that today, much of your risk extends well beyond the networks and architectures under your direct control. Partners, and especially hosting companies and cloud providers, are part of your data footprint. They can represent significant portions of your risk, and yet they are areas where you may have very limited control.
 
If you would like to learn more about using our passive assessment platform and our vendor supply chain security services to help you identify, manage and reduce your risk – please give us a call (614-351-1237) or drop us a line (info /at/ MicroSolved /dot/ com). We’d love to walk you through some of the findings we have identified and share some of the insights we have gleaned from our analysis.
 
Until next time, thanks for reading and stay safe out there!
 
*Caveat: This should not be taken to mean that information security correlates with cost. We have seen plenty of “high end”, high-cost hosting companies with very poor security practices. The inverse is also true. Validation is the key…

What is MSI Passive Assessment & How Does it Empower Supply Chain Security

MSI’s passive assessment represents a new approach to understanding the security risks associated with an organization, be it yours or a vendor, prospect or business partner’s. MSI’s passive assessment leverages the unique power of the MSI TigerTrax™ analytics platform to perform automated research, intelligence gathering and correlation from hundreds of sources, both public and private, that describe the effective security posture of an organization.
 
The engine is able to combine the power of hundreds of existing tools to build the definitive profile of an organization’s security posture, drawing on sources such as:
  • open source intelligence
  • corporate data analytics
  • honeypot sources
  • deep & dark net search engines
  • other data mining tools 
 
MSI’s passive assessment gives you current and historical information about the security posture of the target, such as:
  • Current IOCs associated with them or their hosted applications/systems (perfect for cloud environments!)
  • Historic campaigns, breaches or outbreaks that have been identified or reported in public and in our proprietary intelligence sources
  • Leaked credentials, account information or intellectual property associated with the target
  • Underground and dark net data associated with the target
  • Misconfigurations or risky exposures of systems and services that could empower attackers
  • Public vulnerabilities
  • Other relevant intelligence about their risks, threats and vulnerabilities – new sources added weekly…
 
Best of all, it gathers and correlates that data without touching the target’s network or systems directly in any way. That means you do not need the organization’s permission, and they need not even know about your research, so you can keep your interest private!
 
In the supply chain security use case, the tool can be run against organizations as a replacement for full risk assessment processes, or used as an initial layer to identify and focus on vendors with identified security issues. You can find more information about how it is used in our posts about creating a process for supply chain security initiatives.
 
Clients are currently using this service for M&A, vendor supply chain security management, risk assessment and to get an attacker’s eye view of their own networks or cloud deployments/hosted solutions.
 
To learn more about MSI’s passive assessment, please talk with your MSI account executive today!

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface to interface rules or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall”- where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are clearly illustrated above, and explained in detail in the linked Wikipedia article, so I won’t repeat that here. 
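
That said, a rough rule-level sketch may help make the “3 Legged Model” concrete. This is purely illustrative, written in Linux iptables syntax – the interface names, subnet roles and ports are assumptions, and a commercial firewall (Palo Alto, Cisco, etc.) would express the same zones and policies in its own configuration language:

  # single firewall, three interfaces: eth0 = Internet, eth1 = DMZ, eth2 = internal (names assumed for the example)
  iptables -P FORWARD DROP                                             # default deny between all legs
  iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow return traffic for permitted flows
  iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 443 -j ACCEPT     # Internet may reach the DMZ web tier only
  iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 1433 -j ACCEPT    # DMZ web tier may reach a single internal database port

The point is not the specific rules, but that every allowed path between the Internet, the DMZ and the internal network lives in one rule set, on one device, reviewed in one place.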
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the Layered model. This outright contradicts what the Wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — Wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to embody the idea that “more firewalls would need to be compromised to reach the internal network,” I believe that, in the real world, it actually reduces the overall security posture and increases risk. Here’s why I feel that way. Two real-world issues often give approaches that look great at first blush, or that “just work” in the lab, significant disadvantages in production: control complexity and entropy. Before we dig too deeply into those issues though, let’s talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures, both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can adequately occur in both models. 
 
Establishing Differences
 
Now for the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set does not have cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, analysis at a holistic level is less complex.
 
In the Layered model, the controls are contained across two separate instances, each with different goals, roles and enforcement requirements. However, the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and failures at either firewall to adequately control traffic or adequately design the rule sets could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk. 
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an ‘error’ is.” — DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 confirmed breaches, it means a staggering 76 breaches of data were the result of misconfigurations.
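
For anyone who wants to check that figure, the arithmetic is simply 3.6% of the 2,122 confirmed breaches:
  • echo "2122 * 0.036" | bc
    • returns 76.392, which rounds to the roughly 76 breaches attributed to misconfiguration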
 
This is exactly why control complexity matters. Since control complexity correlates with misconfiguration and human error directly, when complexity rises, so does risk – conversely, when controls are simplified, complexity falls and risk of misconfiguration and human error is reduced.
 
Not to beat on the Wikipedia article and Stuart Jacobs’ assertions, but his suggestion is further compounded by the use of multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, this implementation approach grows rapidly in complexity, required resources and oversight needs as new devices and alternate connections are added. 
 
So, which is less complex, a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out though, right? Things that are complex, tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness. Many times this is referred to as “control drift” or “configuration drift”. In our case, the control drift over a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, each firewall has a distinct rule set, which will degrade – BUT the two are interdependent on each other – giving an effective score of 2 for each firewall. Thus, you can easily see that as each firewall’s rule set degrades, the risk to the private network increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster, and is likely to have more impact on risk. Again, add the additional complexity of different types of firewalls and distinct vendors for each, and the entropy will simply eat you alive…
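
One low-tech way to keep an eye on that drift, sketched here on the assumption that you periodically export each firewall’s rule set to a text file (the file names below are made up for the example, and the <( ) process substitution is bash/zsh syntax):
  • diff <(sort baseline_ruleset.txt) <(sort current_ruleset.txt)
    • any output is a rule that has been added, removed or changed since the baseline was captured – and note that in the Layered model you have at least two of these comparisons to keep in sync, instead of one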
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host in the DMZ that can talk to the internal network in a manner that you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices that are designed for security and detection. Most attackers ignore the firewalls as a target and continue to attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that, in terms of control integrity against the most common threat scenario, the two implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Clients Finding New Ways to Leverage MSI Testing Labs

Just a reminder that MSI testing labs are seeing a LOT more usage lately. If you haven’t heard about some of the work we do in the labs, check it out here.

One of the ways that new clients are leveraging the labs is to have us mock up changes to their environments or new applications in HoneyPoint and publish them out to the web. We then monitor those fake implementations and measure the ways that attackers, malware and Internet background radiation interacts with them.

The clients use these insights to identify areas to focus on in their security testing, risk management and monitoring. A few clients have even done A/B testing using this approach, looking for the differences in risk and threat exposures via different options for deployment or development.

Let us know if you would like to discuss such an approach. The labs are a quickly growing and very powerful part of the many services and capabilities that we offer our clients around the world! 

MachineTruth As a Validation of Segmentation/Enclaving

If you haven’t heard about our MachineTruth™ offering yet, check it out here. It is a fantastic way for organizations to perform offline asset discovery, network mapping and architecture reviews. We also are using it heavily in our work with ICS/SCADA organizations to segment/enclave their networks.

Recently, one of our clients approached us with some ideas about using MachineTruth to PROVE that they had segmented their network. They wanted to reduce the impacts of several pieces of compliance regulation (CIP/PCI/etc.) and be able to prove that they had successfully implemented segmentation to their auditors.

The project is moving forward and we have discussed this use case with several other organizations to date. If you would like to talk with us about it, and learn more about MachineTruth and our new bleeding edge capabilities, give us a call at 614-351-1237 or drop us a line via info <at> microsolved <dot> com.