Example of Pole Mounted Device Threats Visualized

As a part of our threat modeling work, which we do sometimes as a stand-alone activity or as part of a deeper assessment, we often build simple mind maps of the high-level threats we identify. Here is an example of a very simple diagram we created recently while working on a threat model for pole mounted environments (PMEs) for a utility client. 

This is only part of the work plan, but I am putting it forward as a sort of guideline to help folks understand our process. In most cases, we continually expand on the diagram throughout the engagement, often adding links to photos or videos of the testing and results. 

We find this a useful way to convey many of the engagement details to clients as we progress. 

Does your current assessment or threat modeling use visual tools like this? If not, why not? If so, drop me a line on Twitter (@lbhuston) and tell me about it. 

Thanks for reading! 

Pole Mounted Environment Threats

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface to interface rules or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are clearly illustrated above, and explained in detail in the linked Wikipedia article, so I won’t repeat that here. 
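To make the comparison concrete, here is a minimal Python sketch (the segment names, services and rules are hypothetical, not a real configuration) that contrasts where the traffic-control rules live in each model:

from dataclasses import dataclass, field

@dataclass
class Firewall:
    name: str
    rules: list = field(default_factory=list)  # (src_segment, dst_segment, service, action)

# "3 Legged Model": one firewall touches the internet, DMZ and internal segments,
# so the entire rule set is defined, enforced and audited in one place.
single_fw = Firewall("edge-fw", rules=[
    ("internet", "dmz",      "https", "allow"),
    ("dmz",      "internal", "sql",   "allow"),
    ("internet", "internal", "any",   "deny"),
])

# "Layered Model": the same policy intent is split across two devices whose rule
# sets are interdependent; a permissive mistake on either one changes what
# actually reaches the internal network.
outer_fw = Firewall("outer-fw", rules=[("internet", "dmz", "https", "allow")])
inner_fw = Firewall("inner-fw", rules=[("dmz", "internal", "sql", "allow")])

def rule_sets_to_audit(*firewalls):
    # Count the distinct rule sets an analyst must reason about together.
    return sum(1 for fw in firewalls if fw.rules)

print(rule_sets_to_audit(single_fw))           # 1
print(rule_sets_to_audit(outer_fw, inner_fw))  # 2

The point of the sketch is simply that the same policy intent exists in both models; what differs is how many separate places express that intent and must be kept consistent.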
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the Layered model. This outright contradicts what the Wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — Wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to apply the idea that “more firewalls would need to be compromised to gain internal network access,” I believe that, in the real world, it actually reduces the overall security posture and increases risk. Here’s why I feel that way. Two real-world issues often give designs that look great on paper, or that “just work” in the lab, significant disadvantages in production: control complexity and entropy. Before we dig too deeply into those issues, though, let’s talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored; that is, they have adequate and equal security postures both as independent systems and as a control set, in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can adequately occur in either model. 
 
Establishing Differences
 
Now for the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set has no cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, holistic analysis is less complex.
 
In the Layered model, the controls are spread across two separate instances, each with different goals, roles and enforcement requirements; however, the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and a failure at either firewall to adequately control traffic or to adequately design the rule sets could cause cascading unintended results. The complexity of managing these rules across devices with different rule sets, capabilities, goals and roles is significantly greater than in a single control instance. Many studies have shown that increased control complexity results in more human error, which in turn contributes to higher levels of risk. 
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” —DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 confirmed breaches, it means roughly 76 breaches were the result of misconfigurations.
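If you want to check the arithmetic yourself, a quick sketch in Python using the figures cited above:

# Arithmetic check against the 2015 DBIR figures quoted above.
breaches = 2122            # confirmed data breaches in the data set
misconfig_rate = 0.036     # 3.6% of breaches involved device misconfiguration

print(round(breaches * misconfig_rate))  # -> 76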
 
This is exactly why control complexity matters. Since control complexity correlates with misconfiguration and human error directly, when complexity rises, so does risk – conversely, when controls are simplified, complexity falls and risk of misconfiguration and human error is reduced.
 
Not to beat on the Wikipedia article and Stuart Jacobs’ assertions, but his suggestion compounds the complexity further with multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, the complexity, required resources and oversight needs of this implementation approach grow far faster than linearly as new devices and alternate connections are added. 
 
So, which is less complex, a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness; this is often referred to as “control drift” or “configuration drift”. In our case, the drift of a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, each firewall has a distinct rule set that will degrade, and because the two rule sets are interdependent, each firewall carries an effective score of 2. Thus, as each firewall’s rule set degrades, the risk exposure of the private network increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster and is likely to have more impact on risk. Again, add the additional complexity of different types of firewalls and distinct vendors for each, and the entropy will simply eat you alive…
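One way to picture that argument is a back-of-the-napkin Python sketch; the error rate below is purely an illustrative assumption, and the only point is that expected drift grows with the number of independently managed, interdependent rule sets:

# Illustrative only: the error rate is a hypothetical assumption, not measured data.
def expected_drift_events(years, rule_sets, errors_per_set_per_year=0.5):
    # Each rule set accumulates permissive mistakes at roughly the same rate, so
    # the expected total scales with how many rule sets must be kept in sync.
    return years * rule_sets * errors_per_set_per_year

print(expected_drift_events(5, rule_sets=1))  # 3 Legged model -> 2.5
print(expected_drift_events(5, rule_sets=2))  # Layered model  -> 5.0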
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host in the DMZ that can talk to the internal network in a manner you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices designed for security and detection. Most attackers ignore the firewalls as a target and instead attempt to evade their detection capabilities. As such, in terms of this threat scenario, additional discrete firewall devices offer little to no advantage, and the idea that the attacker would need to compromise more devices to gain access loses credibility. Attackers aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the two implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

More on MSI Lab Services Offerings

MSI has built a reputation that spans decades in and around testing hardware and software for information security. Our methodology, experience and capability provide unique value to our customers. World-class assessments from the chip and circuit levels all the way through protocol analysis, software design, configuration and implementation are what we bring to the table.

 

Some of the many types of systems that we have tested:

  • consumer electronics
  • home automation systems
  • voice over IP devices
  • home banking solutions
  • wire transfer infrastructures
  • mobile devices
  • mobile applications
  • enterprise networking devices (routers, switches, servers, gateways, firewalls, etc.)
  • entire operating systems
  • ICS and SCADA devices, networks and implementations
  • smart grid technologies
  • gaming and lottery systems
  • identification management tools
  • security products
  • voting systems
  • industrial automation components
  • intelligence systems
  • weapon systems
  • safety and alerting tools
  • and much much more…

To find out more about our testing processes, lab infrastructure or methodologies, talk to your account executive today. They can schedule a no charge, no commitment, no pressure call with a testing engineer and a project manager to discuss how your organization might benefit from our experience.

 

At A Glance Call Outs:

  • Deep security testing of hardware, software & web applications
  • 20+ year history of testing excellence
  • Committed to responsible vulnerability handling
  • Commercial & proprietary testing tools
  • Available for single test engagements
  • Can integrate fully into product lifecycle
  • Experience testing some of the most sensitive systems on the planet

Key Differentiators:

  • Powerful proprietary tools:
    • Proto-Predator™
    • HoneyPoint™
    • many more solution specific tools
  • Circuit & chip level testing
  • Proprietary protocol evaluation experience
  • Customized honeypot threat intelligence
  • Methodology-based testing for repeatable & defendable results

Other Relevant Content:

Project EVEREST Voting Systems Testing https://stateofsecurity.com/?p=184

Lab Services Blog Post https://stateofsecurity.com/?p=2794

Lab Services Audio Post  https://stateofsecurity.com/?p=2565



SDIM Project Update

Just a quick update on the Stolen Data Impact Model (SDIM) Project for today.

We are prepping for the first beta unveiling of the project at the local ISSA chapter. It looks like that might be the June meeting, but we are still finalizing dates. Stay tuned for more on this one so you can get your first glimpse of the work as it is unveiled. We also submitted a talk on the SDIM for this year’s ISSA International meeting, later in the summer. We’ll let you know if we are accepted to present the project in Nashville.

The work is progressing. We have created several of the curve models now and are beginning to put them out to the beta group for review. This step continues for the next couple of weeks and we will be incorporating the feedback into the models and then releasing them publicly.

Work has begun on phase 2, the framework of questions designed to aid in scoring the impacts used to generate the curve models. This week, the proof of concept framework is being developed, and then it will flow to the alpha group to build upon. Later, the same beta group will review and add commentary to the framework prior to its initial release to the public.

Generally speaking, the work on the project is going along as expected. We will have something to show you and a presentation to discuss the outcomes of the project shortly. Thanks to those who volunteered to work on the project and to review the framework. We appreciate your help, and thanks to those who have been asking about the project – your interest is what has kept us going and working on this problem.

As always, thanks for reading, and until next time – stay safe out there! 

MSI Launches New Threat Modeling Offering & Process

Yesterday, we were proud to announce a new service offering and process from MSI. This is a new approach to threat modeling that allows organizations to proactively model their threat exposures and the changes in their risk posture, before an infrastructure change is made, a new business operation is launched, a new application is deployed or other IT risk impacts occur.

Using our HoneyPoint technology, organizations can effectively model new business processes, applications or infrastructure changes and then deploy the emulated services in their real-world risk environments. Now, for the first time ever, organizations can establish real-world threat models and risk conditions BEFORE they invest in application development or new products, or make changes to their firewalls and other security tools.

Even more impressive, the process generates real-world risk metrics that include frequency of interaction with services, frequency of interaction with various controls, frequency of interaction with emulated vulnerabilities, human attackers versus automated tools, and insight into attacker capabilities, focus and intent! No longer will organizations be forced to guess at their threat models; now they can establish them with defendable, real-world values!

Much of the data created by this process can be plugged directly into existing risk management systems, risk assessment tools and methodologies. Real-world values can be established for many of the variables and other metrics that, in the past, have been decided by “estimation”.

Truly, if RISK = THREAT X VULNERABILITY, then this new process can establish that THREAT variable for you, even before typical security tools like scanners, code reviews and penetration testing have a rough implementation to work against to measure VULNERABILITY. Our new process can be used to model threats, even before a single line of real code has been written – while the project is still in the decision or concept phases!
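As a purely hypothetical illustration (the field names, weights and values below are assumptions made for the sketch, not actual HoneyPoint output), the idea might be expressed in Python like this:

# Hypothetical sketch of RISK = THREAT x VULNERABILITY using made-up,
# honeypot-style observations; field names and weights are assumptions only.
observed = {
    "interactions_per_day": 42,       # hits against the emulated services
    "human_attacker_fraction": 0.15,  # versus automated tools
    "emulated_vuln_probes": 9,        # probes against emulated weaknesses
}

# Normalize the observations into a 0-1 threat value (weights are illustrative).
threat = min(1.0,
             0.5 * min(observed["interactions_per_day"] / 100, 1.0)
             + 0.3 * observed["human_attacker_fraction"]
             + 0.2 * min(observed["emulated_vuln_probes"] / 20, 1.0))

vulnerability = 0.4  # e.g., supplied later by scanning, code review or pen testing

risk = threat * vulnerability
print(f"threat={threat:.2f}  vulnerability={vulnerability:.2f}  risk={risk:.2f}")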

We presented this material at the local ISSA chapter meeting yesterday. The slides are available here:

Threat Modeling Slides

Give us a call and schedule a time to discuss this new capability with an engineer. If your organization is ready to add some maturity and true insight into its risk management and risk assessment processes, then this just might be what you have been waiting for.