Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) connect to a dedicated interface (or interfaces) and a single firewall enforces the traffic control rules between all of the network segments. (The firewall could be a traditional firewall simply enforcing interface-to-interface rules, or a “next generation” firewall implementing virtualized “zones” or other logical object groupings.)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich.
 
Both approaches are clearly illustrated above, and explained in detail in the linked wikipedia article, so I won’t repeat that here. 
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the layered model. This outright contradicts what the wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to apply the concept that “more firewalls would need to be compromised to lead to internal network access,” I believe that, in the real world, it actually reduces the overall security posture and increases risk. Here’s why. Two real-world issues often give designs that look great at first blush, or that “just work” in the lab, significant disadvantages in production: control complexity and entropy. Before we dig too deeply into those issues, though, let’s talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures, both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s), where traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can adequately occur in both models. 
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set has no cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, analysis at a holistic level is less complex.
 
In the Layered model, the controls are spread across two separate instances, each with different goals, roles and enforcement requirements, and the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and a failure at either firewall to adequately control traffic, or to adequately design its rule set, could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles, is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk. 
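To make that interdependence concrete, here is a minimal sketch (the rules and helper names are hypothetical – nothing here is vendor syntax) showing that in the layered model, the effective policy is the conjunction of two separately managed rule sets:

```python
# Hypothetical two-firewall rule model (illustrative only, not vendor syntax).
def evaluate(ruleset, packet):
    """Return the action of the first matching rule; default deny."""
    for predicate, action in ruleset:
        if predicate(packet):
            return action
    return "deny"

# Outer firewall: internet -> DMZ. Inner firewall: DMZ -> internal.
outer_rules = [(lambda p: p["dst_port"] == 443, "allow")]
inner_rules = [(lambda p: p["dst_port"] == 443 and p["dst"] == "app-server", "allow")]

def layered_allows(packet):
    # The aggregate policy is the conjunction of two independently managed
    # rule sets; an error in EITHER set silently changes the overall result.
    return (evaluate(outer_rules, packet) == "allow"
            and evaluate(inner_rules, packet) == "allow")
```

Because admission requires both sets to agree, one misplaced rule in either set changes the aggregate behavior – and the administrator editing one firewall may never see the other firewall’s rule set.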
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” —DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 confirmed breaches, it means a staggering 76 breaches were the result of misconfigurations.
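The arithmetic behind that figure, using the DBIR numbers quoted above, is simple:

```python
# Sanity-checking the DBIR-derived figure quoted above.
breaches = 2122            # confirmed breaches in the 2015 DBIR data set
misconfig_share = 0.036    # fraction directly involving device misconfiguration
misconfig_breaches = round(breaches * misconfig_share)
print(misconfig_breaches)  # -> 76
```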
 
This is exactly why control complexity matters. Control complexity correlates directly with misconfiguration and human error, so when complexity rises, so does risk. Conversely, when controls are simplified, complexity falls, and the risk of misconfiguration and human error is reduced.
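As a purely illustrative model (the per-rule error rate here is an assumption for the sketch, not a DBIR statistic): if each independently managed rule carries a small chance of being misconfigured, the chance that at least one rule in the deployment is wrong grows quickly with the rule count.

```python
def p_any_misconfig(p_per_rule, n_rules):
    """Probability that at least one of n independent rules is misconfigured."""
    return 1 - (1 - p_per_rule) ** n_rules

# Same per-rule error rate, but the layered model carries more independently
# managed rules (duplicated objects, inter-firewall plumbing, etc.).
single_firewall = p_any_misconfig(0.01, 200)
dual_firewall = p_any_misconfig(0.01, 400)
assert dual_firewall > single_firewall
```

The independence assumption is generous to the layered model: in practice, errors in interdependent rule sets compound rather than merely accumulate.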
 
Not to beat on the wikipedia article and Stuart Jacobs’s assertions, but his suggestion is further compounded by the use of multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language and configuration mechanism, each handled by a different managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At an enterprise scale, this implementation approach scales rapidly in complexity, required resources and oversight needs as new devices and alternate connections are added. 
 
So, which is less complex, a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out though, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness. Many times this is referred to as “control drift” or “configuration drift”. In our case, the control drift over a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, the firewalls each have a distinct rule set, which will degrade – BUT – the sets are interdependent on each other – giving an effective score of 2 for each firewall. Thus, you can easily see that as each firewall’s rule set degrades, the risk as “viewed” from the private network increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster, and is likely to result in more impact to risk. Again, add the additional complexity of different firewall types and distinct vendors for each, and the entropy will simply eat you alive…
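The drift argument can be sketched as a toy model (the decay rate is an arbitrary assumption, chosen only to show the shape of the curve, and the function name is ours): if each rule set loses a fixed fraction of its effectiveness per change cycle, a control that depends on two interdependent rule sets degrades faster than one unified set, because the aggregate is the product of its parts.

```python
def aggregate_effectiveness(n_rule_sets, drift_per_cycle, cycles):
    """Effectiveness of a control built from n interdependent rule sets.

    Each set independently drifts by a fixed fraction per change cycle;
    the aggregate control is only as sound as the product of its parts.
    """
    per_set = (1 - drift_per_cycle) ** cycles
    return per_set ** n_rule_sets

unified = aggregate_effectiveness(1, 0.02, 12)  # single firewall, one rule set
layered = aggregate_effectiveness(2, 0.02, 12)  # two interdependent rule sets
assert layered < unified  # the layered control degrades faster
```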
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host in the DMZ that can talk to the internal network in a manner that you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices designed for security and detection. Most attackers ignore the firewalls as a target, and instead continue to attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Ask The Experts: Devaluing 0-days

Earlier this week, I heard an awesome speech at Columbus BSides about the economics of Exploit Kits and E-Crime. As a follow-up, I thought it would be worthwhile to ask my fellow MSI co-workers if they felt there was a way to devalue 0day vulnerabilities.

Jim Klun responded with…

I don’t think you can ever really devalue the worth of a 0-day, given how Internet/computer usage has been universally adopted for all human activity. The only thing I can imagine is making the chance of a 0-day being discovered in an area of computing that really matters as small as possible. That means forcing – through law – all sensitive infrastructure (public or private) and comm channels to subscribe to tight controls on what can be used and how things can work. With ongoing inspection and fines/jail time for slackers. Really… don’t maintain your part of the Wall properly, let the Mongols in and get some villages sacked, and it’s your head.

I would have techs who are allowed to touch such infrastructure (or develop for it) uniformly trained and licensed at the federal level. A formal process would exist for them to do 0-day research and reporting. Outsiders can do the same… but if they announce without a chance for defensive response, jail. And for all those who do play the game properly and find 0-days within the reduced space of critical infrastructure/software – money and honor.

Brent Huston added his view…

That’s a tough question, because you are asking to both devalue something and yet make it valuable for a different party. This is called market transference.

So for example, we need to somehow change the “incentive” to a “currency” that is non-redeemable by bad guys. The problem with that is – no matter how you transfer the currency mechanism, it is likely that it simply creates a different variant of the underground market.

For example, let’s say we make 0-days for good guys redeemable for a tax credit, so they can turn them in to the IRS and get a tax credit in dollars for the work… Seems pretty sound… Bad guys can’t redeem the tax credits without giving up anonymity. However – it reinforces the underground market and turns potential good guys into buyers.

Plus, 0-days still have intrinsic value – i.e., other bad guys will still buy them for crime as long as the output of that crime has value. Thus, you actually might increase the number of people working on 0-day research. This is a great example of where market transference might well raise the value of 0-days on the underground market (more bidders) and the population of attackers looking for them (to sell or leverage for crime).

Lisa Wallace also provided her perspective…

Create financial incentives for the corporations to catch them before release. You get X if your product has no discovered 0-days in Y time.

Last but not least, Adam Hostetler weighed in when asked if incentives for the good guys would help devalue 0days…

That’s the current plan of a lot of big corporations, at least in web apps. I don’t think that really devalues them though. I don’t see any reasonable way to control that without strict control of network traffic, eavesdropping etc, or “setting the information free”.

Interesting Talk on Post Quantum Computing Impacts on Crypto

If you want to really get some great understanding of how the future of crypto is impacted by quantum computing, there is a fantastic talk embedded in this link.
 
The talk really turns the high level math and theory of most of these discussions into knowledge you can parse and use. Take an hour and listen to it. I think you will find it most rewarding.
 
If you want to talk about your thoughts on the matter, hit us up on Twitter. (@microsolved)

OpenSSH Patch Released

If you’re using versions 5.4 through 7.1 of OpenSSH, you should install the latest patch as soon as possible. The patch is for a critical vulnerability that can be exploited to reveal private keys to a malicious server. The vulnerability is tied to an undocumented feature called “roaming”, which allows an OpenSSH client to resume an interrupted SSH session. If for some reason you’re unable to install the patch at this time, the vulnerable code can also be disabled by adding “UseRoaming no” to the global ssh_config(5) file, or to the per-user configuration in ~/.ssh/config.
Despite the fact that this vulnerability is being compared to Heartbleed, it is not quite as severe. Exploiting this vulnerability requires a client to connect to a malicious SSH server, whereas Heartbleed could be exploited by attacking a vulnerable server directly. However, it is still worth addressing this newly discovered vulnerability as soon as possible.
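For those who must delay patching, the workaround above can be scripted. The sketch below (the helper name is ours; the option spelling comes from the advisory) appends the directive only if it is not already present. Note that a naive substring check like this would also match a commented-out line, so review the file afterward.

```python
from pathlib import Path

def ensure_roaming_disabled(config_path):
    """Append 'UseRoaming no' to an ssh config file if it is not present.

    Returns True if the file was modified, False if already mitigated.
    """
    path = Path(config_path)
    text = path.read_text() if path.exists() else ""
    if "UseRoaming no" in text:
        return False  # mitigation already present (or commented out -- verify!)
    path.write_text(text + "\nUseRoaming no\n")
    return True

# e.g. ensure_roaming_disabled("/etc/ssh/ssh_config")  # requires root
```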

First Power Outage Ever Caused by Malware

For the last couple of decades industrial concerns, including public utilities such as power and gas providers, have been incorporating IP networks into their industrial control systems, apparently with very little awareness of the security problems this could cause. One of the reasons for this is that ICS/SCADA systems had always been fairly safe from tampering. They were “dumb” systems that had their own protocols, and were not connected to public networks. System administrators never had to think in terms of hackers and remote attacks. They were more concerned with things like physical break-ins and theft at that time, and hackers were mainly computer-savvy kids that weren’t really out to hurt anyone.

Another reason is that security almost always takes a back seat to greater efficiency and profitability. Couple this with the fact that public utilities were increasingly strapped with budgetary cutbacks, and it’s a no-brainer from their point of view. IP protocols were already in place and off-the-shelf hardware and software applications were relatively cheap.

Embracing expediency in this way is really costing the industry now, though. Public utilities are often guilty of failing to adequately segregate their control networks from their business networks, and even if they do, it is very difficult to fend off a persistent and talented attacker. Malware and social engineering techniques become more clever every day.

Factors such as these have made the security industry increasingly antsy for years. We have been warning that these vulnerabilities exist, and have been expecting a concrete example to crop up – and now it has!

Late last month, hackers caused what is believed to be the world’s first power outage using malware. It occurred in Ukraine and knocked out regional power for several hours. The malware family used to perpetrate this outage is known as “BlackEnergy” and has been on the radar for some time.

Luckily, this was a relatively minor, short lived incident, and nothing like this has occurred (yet) in the United States. However, the fact that this outage was possible should be a wake-up call for all of us. Hopefully, the industry will pay attention to this incident and redouble their efforts to update, secure and monitor their systems.

Time Warner – 320,000 passwords compromised

Knock knock! Who’s there? The FBI….

This is never the way you’d like your day to play out. Last week, Time Warner was notified by the FBI that a cache of stolen credentials that appear to belong to Time Warner customers had been discovered.

At this point, the origination of the usernames and passwords is a bit of a mystery. Time Warner states: 

“We have not yet determined how the information was obtained, but there are no indications that TWC’s systems were breached.

The emails and passwords were likely previously stolen either through malware downloaded during phishing attacks or indirectly through data breaches of other companies that stored TWC customer information, including email addresses.

For those customers whose account information was stolen, we are contacting them individually to make them aware and to help them reset their passwords.”

Time Warner customers who have not yet been contacted should still consider changing their passwords – there is no indication at this point whether this is new or previously compromised password data, and a new password is never a bad idea.

Please share with anyone who is using Time Warner systems – friends, co-workers, weird relatives and neighbors as well. Remember that any password that is used twice isn’t a safe password – unique passwords are always the best practice. Password managers (LastPass, KeePass, etc.) are often a good idea to help maintain unique, difficult to decipher passwords.

It’s Tax Time Again: Watch Out for the Fake IRS Phishing Scams!

It seems like every year there is another phishing scam using the name of the Internal Revenue Service. Well, this year is no exception. This version claims to be a refund notification and contains an attachment for the unwary to click on. Don’t do it! For one thing, you should be aware that the IRS never initiates contact with taxpayers by email, text messages or social media channels to request personal or financial information.
This scam and other, similar scams also are perpetrated by telephone. Callers may say you have a refund due and try to get you to disclose private information to them. They may also call, say you owe them money, and demand immediate payment; they may even threaten to send the police to your home! Don’t panic. The more serious and immediate their demands seem, the more likely they are to be fakes. It will never hurt you to take the time to call the IRS and see if the call, text or email you have received is legitimate. Also, if you do happen to lose money to one of these scams, you can file a complaint with the Treasury Inspector General for Tax Administration.
The IRS website has resources in place to help taxpayers with this problem. This information is not particularly easy to find, but is accessible in a couple of areas of the website. If you click on the “News & Events” tab of the website there is a hyperlink to “Tax Scams”. This will get you started. You can also go to the “Help & Resources” tab. This area has links for reporting suspicious emails and scams, as well as a link to report tax fraud activity. For more information about past scams of this type, there is a page entitled “Phishing and Other Schemes Using the IRS Name.”
The important thing to take away from this is that phishing and other types of social engineering techniques are becoming more prevalent every day, and are not about to go away. This is because as firewalls, SIEM solutions and other information security mechanisms have become more effective, cyber criminals have had to find new ways to worm their way into your networks. So stay wary and avoid being credulous. Never open an attachment or click on a link anywhere without checking it out first. Also, never give unsolicited or suspicious callers any kind of private information. The old adage “Look Before You Leap” has never been more true and appropriate than it is right now!

3 Ways Clients are Benefiting from Our TigerTrax Platform Today

OK, so by now most folks know that we spent the last few years building out our own analytics platform, called TigerTrax™. Some folks know that we have been using it as a way to add impressive value to our traditional security offerings for the last couple of years. If you are a traditional assessment client, for example, you are likely seeing more threat data that is pinpoint accurate in your reports, or perhaps you have benefited from some of our passive technologies based on the platform. If your organization hasn’t been briefed yet on our new capabilities and offerings, please let us know and we will book a time to sit down and walk you through what we believe is a game changing new approach to information security!

But, back to the message at hand. TigerTrax is already benefitting our clients in three very specific ways, and I wanted to take a moment to discuss them.

  • First, as I alluded to above, many clients are now leveraging our Targeted Threat Intelligence (TTI) offerings in a variety of ways. TTI engagements come in two flavors, Comprehensive and Baseline. You can think of this as a passive security assessment that identifies threats against your organization based on a variety of metadata analysis, tracks your brand presence across the online world and identifies where it might be present in a vulnerable state, correlates known and unknown attack campaigns against your online presence, and has been hugely successful in finding significant risks against networks/applications and intellectual property. The capability extends to findings across the spectrum of risks, threats and vulnerabilities – yet does the work without sending a single packet to the target network environments! That makes this offering hugely popular and successful in assisting organizations with supply chain, vendor management security validation and M&A research. In fact, some clients are actively using this technique across vendors on a global scale.
  • Second, TigerTrax has enabled MSI to offer security-focused monitoring of key employees and their online behaviors. From professional sports to futures/stock traders and even banking customer support teams – TigerTrax has been adapted to provide code of conduct monitoring, social media forensics and even customized mitigation training in near-real-time for the humans behind the keyboard. With so much attention to what your organization and your employees do online, how their stories spread and the customer interactions they power – this service has been an amazing benefit to customers. In some cases, our social media forensics have made the difference in reputational attacks and even helped defend a client against false legal allegations!
  • Third, TigerTrax has powered the development of MachineTruth™, a powerful new approach to network mapping and asset discovery. By leaning on the power of analytics and machine learning, this offering has been able to organize thousands of machine configurations, millions of lines of log files and a variety of other data sources to re-create a visual map of the environment, an inventory of the hosts on the network, an analysis of the relationships between hosts/network segments/devices, and perform security baselining “en masse”. All offline. All without deploying any hardware or software on the network. It’s simply amazing for organizations with complex networks (we’ve done all sizes – from single data centers to continent-level networks), helps new CIOs or network managers understand their environment, closes the gap between the “common wisdom” of what your engineers think the network is doing and the “machine truth” of what the devices are actually doing, aids risk assessment or acquisition teams in their work and can empower network segmentation efforts like no other offering we have seen.

Those are the 3 key ways that TigerTrax customers are benefiting today. Many, many more are on the roadmap, and throughout 2016 we will be bringing new offerings and capability enhancements to our clients – based on the powerful analytics TigerTrax provides. Keep an eye on the blog and our website (which will be updated shortly) for news and information. Better yet, give us a call or touch base via email and schedule a time to sit down and discuss how these new capabilities can best assist you. We look forward to talking with you! 

— info (at) microsolved /dot/ com will get you to an account rep ASAP! Thanks for reading.

GRUB2 Authentication Bypass Vulnerability

A vulnerability has been discovered in the GRUB2 boot loader that affects versions dating back to 2009. GRUB2 is the default boot loader for a variety of popular Linux distributions including Ubuntu, Red Hat and Debian. The vulnerability can be exploited by pressing the backspace key 28 times when the boot loader asks for your username. This sequence of keys places the user into a “rescue shell”. An attacker could leverage this shell to access confidential data or install persistent malware.

It’s worth noting that the vulnerability requires access to the system’s console. Even if your organization has proper physical security controls in place, this issue should still be addressed as soon as possible. Ubuntu, Red Hat and Debian have already released patches for this vulnerability.

Got MS DNS Servers? Get the Patch ASAP!

If you run DNS on Microsoft Windows, pay careful attention to the MS-15-127 patch.

Microsoft rates this patch as critical for most Windows platforms running DNS services.

Remote exploits are possible, including remote code execution. Attackers exploiting this issue could obtain Local System context and privileges.

We are currently aware that researchers have begun reverse engineering the patch, and that exploit development pertaining to this issue is under way in the underground. A working exploit is likely to be made available soon, if it is not already in play as you read this.