Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models:
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) connect to a dedicated interface (or interfaces) and a single firewall enforces traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface-to-interface rules, or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) sit between two sets of firewalls, like a sandwich
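For readers who think in rule sets, here is a toy sketch of where the policy lives in each model. All zone names, services and rules below are hypothetical examples for illustration, not a recommended configuration:

```python
# Toy sketch: rule placement in each DMZ model. Zones, services and the
# rules themselves are illustrative placeholders, not a real policy.

# 3 Legged Model: one firewall holds the entire, unified rule set.
THREE_LEGGED = {
    "fw1": [
        ("internet", "dmz",      "tcp/443",  "allow"),  # web traffic in
        ("dmz",      "internal", "tcp/1433", "allow"),  # app -> db only
        ("dmz",      "internal", "any",      "deny"),
        ("internet", "internal", "any",      "deny"),
    ],
}

# Layered Model: the same intended policy, split across two devices whose
# rule sets must be kept consistent with each other.
LAYERED = {
    "fw_outer": [
        ("internet", "dmz", "tcp/443", "allow"),
        ("internet", "dmz", "any",     "deny"),
    ],
    "fw_inner": [
        ("dmz", "internal", "tcp/1433", "allow"),
        ("dmz", "internal", "any",      "deny"),
    ],
}
```

Same intended end-to-end policy either way; the difference is how many places it lives in, which matters more than it first appears.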
 
Both approaches are illustrated in the diagram above and explained in detail in the linked Wikipedia article, so I won’t repeat that here. 
 
I fully believe that the “3 Legged Model” is a lower-risk implementation than the Layered Model. This outright contradicts what the Wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs, is to use two firewalls to create a DMZ.” — Wikipedia article above.
 
While the Layered Model looks compelling at first blush, and seems to apply the principle that “more firewalls would need to be compromised to reach the internal network”, I believe that in the real world it actually reduces the overall security posture and increases risk. Here’s why. Two real-world issues routinely turn designs that look great at first blush, or that “just work” in the lab, into liabilities in production: control complexity and entropy. Before we dig into those issues, though, let’s talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures, both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s), where traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can occur adequately in either model. 
 
Establishing Differences
 
Now the differences. In the 3 Legged Model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set has no cascading dependencies on other firewall implementations, and if it is well designed and implemented, holistic analysis is less complex.
 
In the Layered Model, the controls are spread across two separate instances, each with different goals, roles and enforcement requirements – yet the controls and rule sets are interdependent. Traffic must be controlled through a holistic approach spanning the devices, and a failure at either firewall to adequately control traffic, or to adequately design its rule set, can cascade into unintended results. Managing these rules across devices with different rule sets, capabilities, goals and roles is significantly more complex than managing a single control instance, and increased control complexity has repeatedly been shown to produce more human error, which in turn raises risk. 
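To make the interdependency concrete, here is a minimal sketch of a first-match rule evaluator (a toy model, not any vendor’s actual behavior). In the Layered Model, the end-to-end answer to “can this flow happen?” is the conjunction of answers from multiple devices, each holding a differently framed rule set:

```python
def evaluate(ruleset, src_zone, dst_zone, service):
    """First matching rule wins; default deny. A toy model only."""
    for r_src, r_dst, r_svc, action in ruleset:
        if r_src == src_zone and r_dst == dst_zone and r_svc in (service, "any"):
            return action == "allow"
    return False  # nothing matched: default deny

def flow_allowed(hops, service):
    """hops = [(ruleset, src_zone, dst_zone), ...] along the path.
    The flow is permitted only if EVERY firewall on the path allows it,
    so a drifted rule set on ANY hop changes the end-to-end behavior."""
    return all(evaluate(rs, s, d, service) for rs, s, d in hops)

# 3 Legged: auditing exposure means reading one rule set.
# Layered: it means reasoning about two rule sets at once, each of which
# frames the question differently (the internet-facing leg at the outer
# firewall, the dmz-to-internal leg at the inner one).
```

That conjunction is exactly where holistic analysis gets harder: no single device’s configuration tells you what the control actually does.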
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” — DBIR
 
Specifically, device misconfiguration was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 confirmed breaches, it means roughly 76 breaches (3.6% of 2,122) were the result of misconfigurations.
 
This is exactly why control complexity matters. Since control complexity correlates directly with misconfiguration and human error, when complexity rises, so does risk. Conversely, when controls are simplified, complexity falls and the risk of misconfiguration and human error is reduced.
 
Not to beat on the Wikipedia article and Stuart Jacobs’s assertions, but his suggestion compounds the complexity further with multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread it across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, this implementation approach grows in complexity, required resources and oversight needs geometrically as new devices and alternate connections are added. 
 
So, which is less complex: a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement across interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness – often referred to as “control drift” or “configuration drift”. Give the drift exposure of a single unified rule set a baseline score of 1: changes to the rule set map directly to changes in behavior and effectiveness. In the Layered Model, each firewall has its own rule set that degrades independently – BUT the sets are interdependent, so drift in either one undermines the end-to-end control, effectively doubling the exposure. As each firewall’s rule set degrades, the risk to the private network grows significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster and is likely to have more impact on risk. Again, add the additional complexity of different firewall types and distinct vendors for each, and the entropy will simply eat you alive…
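Here is a back-of-the-napkin sketch of that compounding effect. The drift rate is an assumption chosen purely for illustration; the point is the shape of the curves, not the specific numbers:

```python
# Toy drift model: assume each independently managed rule set has a 2%
# chance per month of picking up a policy-breaking misconfiguration.
P_DRIFT = 0.02

def p_control_degraded(months, rulesets):
    """Probability that at least one of the rule sets the control
    depends on has drifted after the given number of months."""
    p_one_set_clean = (1 - P_DRIFT) ** months
    return 1 - p_one_set_clean ** rulesets

for years in (1, 3, 5):
    single = p_control_degraded(years * 12, 1)  # 3 Legged: one rule set
    dual = p_control_degraded(years * 12, 2)    # Layered: two interdependent sets
    print(f"{years} yr  3 Legged: {single:.0%}   Layered: {dual:.0%}")
```

With these assumed numbers, the layered control is almost twice as likely to have degraded after the first year, and the gap stays wide for years. Change the rate and the details move, but the layered curve always runs ahead.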
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host gives an attacker a foothold into the environment. From there, they need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am focusing only on this threat scenario, not the more common phishing/watering-hole scenarios, which don’t often involve the compromise of a DMZ host except perhaps for exfiltration paths; those are outside our current scope.) If they get lucky and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But in most cases, the attacker needs to move laterally, compromising additional hosts in search of a victim that can provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, generating alerts, log entries and unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host in the DMZ that can talk to the internal network in a manner you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices designed for security and detection. Most attackers ignore the firewalls as targets and instead keep trying to evade their detection capabilities. As such, in terms of this threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. Attackers aren’t usually looking to pop the firewall itself; they are looking for a pivot host they can leverage for access through whatever firewalls are present to exploit internal systems. In this case, then, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered Model is more complex than the 3 Legged Model, and that it suffers from higher entropy. We also established that, in terms of control integrity against the most common threat scenario, the two implementation models are equal. Thus, to implement the Layered Model over the 3 Legged Model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged Model is, in fact, less risky than the Layered Model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Interesting Talk on Post Quantum Computing Impacts on Crypto

If you really want a solid understanding of how quantum computing will impact the future of crypto, there is a fantastic talk embedded in this link.
 
The talk really turns the high level math and theory of most of these discussions into knowledge you can parse and use. Take an hour and listen to it. I think you will find it most rewarding.
 
If you want to talk about your thoughts on the matter, hit us up on Twitter. (@microsolved)

3 Ways Clients are Benefiting from Our TigerTrax Platform Today

OK, so by now most folks know that we spent the last few years building out our own analytics platform, called TigerTrax™. Some folks know that we have been using it for the last couple of years as a way to add impressive value to our traditional security offerings. If you are a traditional assessment client, for example, you are likely seeing more pinpoint-accurate threat data in your reports, or perhaps you have benefited from some of our passive technologies built on the platform. If your organization hasn’t been briefed yet on our new capabilities and offerings, please let us know and we will book a time to sit down and walk you through what we believe is a game-changing new approach to information security!

But, back to the message at hand. TigerTrax is already benefitting our clients in three very specific ways, and I wanted to take a moment to discuss them.

  • First, as I alluded to above, many clients are now leveraging our Targeted Threat Intelligence (TTI) offerings in a variety of ways. TTI engagements come in two flavors, Comprehensive and Baseline. You can think of this as a passive security assessment: it identifies threats against your organization based on a variety of metadata analyses, tracks your brand presence across the online world and identifies where it might be exposed in a vulnerable state, and correlates known and unknown attack campaigns against your online presence. It has been hugely successful in finding significant risks to networks, applications and intellectual property. The capability extends to findings across the spectrum of risks, threats and vulnerabilities – yet does the work without sending a single packet to the target network environments! That makes this offering hugely popular and successful in assisting organizations with supply chain and vendor management security validation, as well as M&A research. In fact, some clients are actively using this technique across vendors on a global scale.
  • Second, TigerTrax has enabled MSI to offer security-focused monitoring of key employees and their online behaviors. From professional sports to futures/stock traders and even banking customer support teams – TigerTrax has been adapted to provide code of conduct monitoring, social media forensics and even customized mitigation training in near-real-time for the humans behind the keyboard. With so much attention to what your organization and your employees do online, how their stories spread and the customer interactions they power – this service has been an amazing benefit to customers. In some cases, our social media forensics have made the difference in reputational attacks and even helped defend a client against false legal allegations!
  • Thirdly, TigerTrax has powered the development of MachineTruth™, a powerful new approach to network mapping and asset discovery. By leaning on the power of analytics and machine learning, this offering has been able to organize thousands of machine configurations, millions of lines of log files and a variety of other data sources to re-create a visual map of the environment, an inventory of the hosts on the network and an analysis of the relationships between hosts, network segments and devices, and to perform security baselining “en masse” – all offline, without deploying any hardware or software on the network. It’s simply amazing for organizations with complex networks (we’ve done all sizes – from single data centers to continent-level networks). It helps new CIOs or network managers understand their environment, closes the gap between the “common wisdom” of what your engineers think the network is doing and the “machine truth” of what the devices are actually doing, aids risk assessment or acquisition teams in their work, and can empower network segmentation efforts like no other offering we have seen.

Those are the 3 key ways that TigerTrax customers are benefiting today. Many, many more are on the roadmap, and throughout 2016 we will be bringing new offerings and capability enhancements to our clients, based on the powerful analytics TigerTrax provides. Keep an eye on the blog and our website (which will be updated shortly) for news and information. Better yet, give us a call or touch base via email and schedule a time to sit down and discuss how these new capabilities can best assist you. We look forward to talking with you! 

— info (at) microsolved /dot/ com will get you to an account rep ASAP! Thanks for reading.

Old School Google Hacking Still Works…

Did some old school Google hacking last night.

“filetype:xls” combined with a few sensitive search terms still finds too much bad stuff.

Checked for it lately for your organization?

Try other file types too. (doc/ppt/pdf/rtf, etc.)
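If you want to make this check repeatable, a few lines of script will generate the queries for you to run by hand. The domain and search terms below are placeholders; substitute your own:

```python
# Generate "old school" Google dork queries for periodic review.
# example.com and the search terms are placeholders - use your own.
FILETYPES = ["xls", "xlsx", "doc", "docx", "ppt", "pptx", "pdf", "rtf"]
TERMS = ["confidential", "internal use only", "password"]

def dork_queries(domain):
    for filetype in FILETYPES:
        for term in TERMS:
            yield f'site:{domain} filetype:{filetype} "{term}"'

for query in dork_queries("example.com"):
    print(query)
```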

Information leakage happens today, as it always has. Keeping an eye on it should be a part of your security program.

Ashley Madison Blackmail Campaigns Prowling Again

If you were involved in the Ashley Madison service, or know someone who was, it might be time to discuss the ongoing blackmail campaigns stemming from the breach. This article appeared this week in SC Magazine, reporting on just such a campaign that has been tentatively identified.

Please be aware that this is happening, and can represent a significant threat, especially for organizations associated with critical infrastructure, IP protection and/or government agencies. 

If you, or someone you know, is being harassed or targeted by blackmailers, here are some resources:

General counsel advice.

Contacting the FBI.

WikiHow advice from the public.

Stay safe out there!

CMHSecLunch is Monday Oct 12

Remember: #CMHSecLunch is tomorrow. 11:30, Polaris.

Come out and hang with some of your friends. This free-form event is open to the public and often includes hacking stuff, lock picking, deep technical discussions, projects, etc.

Check it out at the link below & bring a friend!  

http://cmhseclunch.eventbrite.com

 

3 Things You Should Be Reading About

Just a quick post today to point to 3 things infosec pros should be watching from the last few days. While there will be a lot of news coming out of Derbycon, keep your eyes on these issues too:

1. Chinese PLA Hacking Unit with a SE Asia Focus Emerges – This is an excellent article about a newly identified, focused hacking unit that emerged from shared threat intelligence. 

2. Free Tool to Hunt Down SYNful Knock – If you aren’t aware of the issues in Cisco routers, check out the SYNful Knock details here. This has already been widely observed in the wild.

3. Microsoft Revokes Leaked D-Link Certs – This is what happens when certificates get leaked into the public. Very dangerous situation, since it could allow signing of malicious code/firmware, etc.

Happy reading! 

Podcast Episode 8 is Out

This time around we riff on Ashley Madison (minus the morals of the site), online privacy, OPSEC and the younger generation with @AdamJLuck. Following that is a short segment with John Davis. Check it out and let us know your thoughts via Twitter – @lbhuston. Thanks for listening! 

You can listen below:

IoT Privacy Concerns

Lately, I’ve been amazed at how quickly the Internet of Things (IoT) has become a part of my life. Everything from speakers to a Crock-Pot (yes, a Crock-Pot) has been connected to my home wireless network at some point. As much as I enjoy all the conveniences that these devices provide me, I always consider the security implications prior to purchasing an Internet-connected device. It’s worthwhile to weigh the convenience of installing new Internet-connected equipment vs. the privacy issues that can occur if the device is compromised.

There have already been a variety of security issues stemming from the widespread adoption of IoT devices. Last fall, a website published links to over 73,000 unsecured cameras throughout the world. These cameras monitored everything from shopping malls to people’s bedrooms. Without proper controls around IoT devices, we will continue to see similar issues arise.

I don’t intend for this blog to scare people away from purchasing IoT devices. In fact, I will provide you with a few simple configuration changes that will reduce the privacy issues that can come with installing an IoT system. These changes won’t diminish the conveniences of an Internet-connected thermostat or the latest IoT security camera, but they will significantly reduce the risk associated with installing one.

A few recommendations for your new gadget:

  • Change the default password – A majority of the aforementioned cameras were compromised because the owners did not change the system’s default password. By simply setting the password to something that will be difficult for an attacker to guess, you can reduce the risk of someone compromising your device.
  • Segment – Try to isolate your IoT devices from the rest of your home network. It is very possible that an attacker would use an IoT system as an entry point to gain access to other systems. (A quick way to test your isolation is sketched after this list.)
  • Check for software updates – Make it a routine to check for software/firmware updates for all of your IoT devices. These updates often contain security patches that can protect your system from being exploited.
  • Do not expose the device directly to the Internet – There shouldn’t be a need to expose an IoT device directly to the Internet; doing so gives an attacker a much larger surface to attempt to exploit. If the system requires that configuration, it is worthwhile to consider another option.
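As a quick sanity check on the segmentation recommendation above, here is a minimal sketch. The device address and port list are assumptions for illustration; run it from your main (non-IoT) network segment:

```python
import socket

IOT_DEVICE = "192.168.50.20"      # assumed address on the IoT segment
ADMIN_PORTS = [23, 80, 443, 554]  # telnet, web admin, HTTPS, RTSP

# Run from your main LAN. If the device's admin ports answer from here,
# the IoT segment is not actually isolated from the rest of the network.
for port in ADMIN_PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    open_from_lan = sock.connect_ex((IOT_DEVICE, port)) == 0
    sock.close()
    status = "REACHABLE - check your rules" if open_from_lan else "blocked/closed"
    print(f"{IOT_DEVICE}:{port} {status}")
```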

DOJ Best Practices for Breach Response

I stumbled on this great release from the US Department of Justice – a best practices guide to breach response.

Reading it is rather reminiscent of much of what we said in the 80/20 Rule of Information Security years ago. Namely: know your own environment, data flows, trusts and what data matters. Combine that with having a plan beforehand, and some practice, and you at least get some decent insight into what your team needs and is capable of handling. Knowing those boundaries, and when to ask for outside help, will take you a long way.

I would also suggest you give our State of Security Podcast a listen. Episode 6, in particular, includes a great conversation about handling major breaches and the long-term impacts on teams, careers and lives.

As always, if we can assist you in preparing a breach response process, good policies, performing those network mappings or running table top exercises (or deeper technical red team exercises), let us know. We help companies around the world master these skills and we have plenty of insights we would love to share!