Mapping Control Requirements to Vendor Tiers

Now that you have a proper tier structure set up for your vendors, we will discuss how to map controls to each of those tiers to create a control matrix that you can work from. This control matrix will serve as the basis for the vendor supply chain security effort – essentially providing a skeleton of the due diligence that you will perform for each vendor. Once the matrix is complete, you can use it to clearly and easily demonstrate to any auditor or regulator the work that your organization does on supply chain security. In our experience, walking them through the matrix and providing a documented process that you follow to enforce it will suffice to meet most regulatory requirements – assuming, of course, that you actually perform the work detailed in the matrix.
 
So – at a high level, how do we assign the controls? I usually start at the bottom of the stack of tiers and define the minimum controls first. Thus (referring back to the tier structure defined last time around – and see the code sketch of the resulting matrix after the list):
  • Low Risk Vendors – What are the minimum steps we should perform for each vendor in this tier?
    • Controls Required: Scoping peer review to ensure that the criteria for this tier are met; contract and, when applicable, SLA review by the security team against established guidance and regulatory requirements; approval by the financial due diligence team to avert fraud; etc.
      • Comments: Since there is only isolated potential for digital risk in this tier, we don’t need to perform cyber-security reviews and the like, or accumulate data we don’t need (which wastes time and resources). If, for example, this is a commodity or non-impactful application provider, we might review their contract for language around malware-free deliverables, code security, patch/fix turnaround times, etc., as appropriate for each vendor and the service or good they provide.
  • Routine Risk Vendors – At this level, I try to think of the controls that I would want for just about any vendor that can impact us or our operations, but that isn’t capable of doing much beyond reputational or regulatory damage.
    • Controls Required: All of the controls of the lower level apply and are required, plus any control reviews required for regulatory compliance over PII that we share (SAS 70 reports, PCI-DSS compliance statements, etc.). At this stage, I would also really like some form of cyber-security assessment – in this case, MSI’s passive assessment tool (which can be run without the vendor’s knowledge or permission) run against them on a yearly basis with NO HIGH RISK issues identified. If a HIGH RISK issue is found, the vendor would be flagged and would need a formal technical review of their security controls, or even our traditional risk assessment process. Any deviation from the accepted controls would require a signed risk acceptance variance from a management team or steering committee, as an example.
      • Comments: Here, we are defining the basics. What do we need for most vendors that could hurt us? We try to keep the process as simple as possible, so that we can focus on the vendors that have a higher risk of actually hurting us and our business. The use of passive assessments here is a powerful new approach that reduces the number of full-fledged risk assessments we need to perform, along with the overhead of the paperwork and interactions the traditional risk assessment process requires.
  • High Risk Vendors – Here we build on the controls of the lower tiers to achieve a balance between workload and information security needs. We define a level that exceeds best practices and gives us more confidence in the vendors that could hurt us at a significant level.
    • Controls Required: All of the controls of the lower levels apply and are now strictly required (no variances accepted at this level for the basic controls defined for lower risk levels). In addition, we need ongoing assessment of the vendor’s security controls, so a passive run is now required on a quarterly basis, without any HIGH RISK findings. This helps us combat control drift and control entropy in the vendor’s security posture. If at any time a HIGH RISK issue is identified, a FULL and COMPREHENSIVE risk assessment is required as soon as possible. This risk assessment should include review of the vendor’s third-party risk assessments, vulnerability assessments and penetration tests (these should be provided to us by the vendor within 3 business days of the request). Failure to pass this risk assessment, failure to respond properly, or any significant issues that are not mitigated in a timely manner should result in financial and legal consequences for the vendor and their contract with our organization.
      • Comments: Again, we are trying to reduce the incidence of full risk assessments, so that we can focus our attention and limited resources on the vendors that can hurt us significantly and are in the worst security postures. Further, we create an incentive at this level for them to comply and respond rapidly.
  • Critical Risk Vendors – These are the vendors that can REALLY hurt us, so we spend a majority of our attention and resources here. 
    • Controls Required: All of the controls of the lower levels apply and are now strictly required (no variances accepted at this level for the basic controls defined for lower risk levels). Additionally, passive assessments are now monthly in frequency (or maybe even weekly, depending on your paranoia/risk tolerance). Ongoing monitoring of targeted threat intelligence data is also required – so we have MSI monitor social media/public web/deep web/dark web for any events or indicators of compromise that might emerge and be related to our vendors in this tier. At this level, we perform the full comprehensive risk assessment process on a yearly basis, in addition to MSI’s passive work. While this is tedious, we want to ensure that we have given the utmost effort on these vendors that can truly hurt us at the most damaging of levels. We can now do this easily without taxing our resources, thanks to the tiering architecture and the focus points provided by MSI through our passive assessment and other services. Any MEDIUM or HIGH RISK issue flagged by MSI immediately triggers an update to the risk assessment process, notification of the vendor requiring a response from their security team leadership, and potentially a formal incident response process for the vendor – which we manage by requiring the delivery of an incident response report and/or attestation by a third-party security firm that the situation was mitigated and that our IIP was protected. Failure to pass this risk assessment, failure to respond properly, or any significant issues that are not mitigated in a timely manner should result in SIGNIFICANT financial and legal consequences for the vendor and their contract with our organization.
      • Comments: Here we leverage ongoing monitoring and take the lead on watching for potential compromises, for ourselves and our vendors. Given the large percentage of breaches that are discovered and reported by third parties, we no longer believe that the detection and response capabilities of any partner organization are, alone, strong enough to protect our IIP. Thus the increased due diligence and oversight for the vendors that can hurt us the worst.
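To make the mapping concrete, here is a minimal sketch of how such a control matrix might be represented in code. This is purely illustrative – the tier keys, control names and frequencies below are assumptions distilled from the tiers described above, not an MSI product or API.

```python
# Illustrative only: tier keys, control names and frequencies are assumptions
# distilled from the tier descriptions in this post.
CONTROL_MATRIX = {
    "low": {
        "controls": ["scoping_peer_review", "contract_sla_review",
                     "financial_due_diligence"],
        "passive_assessment_days": None,  # no cyber-security review needed
        "variances_allowed": True,
    },
    "routine": {
        "inherits": "low",
        "controls": ["regulatory_compliance_review"],
        "passive_assessment_days": 365,   # yearly, with no HIGH RISK findings
        "variances_allowed": True,        # with signed risk acceptance
    },
    "high": {
        "inherits": "routine",
        "controls": ["vendor_provided_assessments_on_demand"],
        "passive_assessment_days": 90,    # quarterly, to combat control drift
        "variances_allowed": False,
    },
    "critical": {
        "inherits": "high",
        "controls": ["threat_intel_monitoring", "annual_full_risk_assessment"],
        "passive_assessment_days": 30,    # or weekly, per your risk tolerance
        "variances_allowed": False,
    },
}

def required_controls(tier: str) -> list[str]:
    """Resolve the full control set for a tier, including inherited controls."""
    entry = CONTROL_MATRIX[tier]
    inherited = required_controls(entry["inherits"]) if "inherits" in entry else []
    return inherited + entry["controls"]

print(required_controls("critical"))  # low + routine + high + critical controls
```

A structure like this makes the "all controls of the lower levels apply" rule explicit, and it is trivial to render as the auditor-friendly matrix we walk regulators through.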

As you can see, building from the ground up makes leveraging the tiering process easy and logical. In the next post we will show you an example controls matrix we use to demonstrate and discuss our vendor supply chain security process. Over the years, we have found the matrix to be a powerful, auditor/regulator friendly tool to show clearly and concisely the due diligence process for vendor supply chain security. We hope you find it useful as well. Stay tuned! 

Sorting Vendors into Tiers

Previously, we reviewed some ideas around vendor discovery and laid out an example workflow and process. We also defined some tools and approaches to use for the task.
 
Once you have the vendors in your supply chain identified, and have obtained and cataloged the relevant data, the next step we suggest is to sort the vendors into tiers – essentially “object groups” that make them easier to classify. Once the vendors are sorted into tiers, we will discuss how to assign required controls to each tier in an easy-to-manage manner. This greatly simplifies the processing of future vendors that are added to the supply chain, since you need only identify the tier they fit into and then use the control requirements for that tier as your basis for evaluation and risk assessment.
 
Vendor tiering, done properly, also makes assigning vendors to a given tier trivial in the long term. Our approach, as you will see, provides very clear criteria for the levels, making it easy to add new vendors and simple to manage vendors who change status as the supply chain and product lines evolve.
 
In our suggested model, we have four tiers, as follows (using a product manufacturer as an example – obviously, other types of firms may require alternate specific criteria, but this should serve to lay out the model for you to use as a baseline; a rough code sketch of the tier-assignment logic follows the list):
 
  • Critical Risk Vendors
    • Criteria: Mission critical “information intellectual property” (IIP) assets are shared with this vendor, where the assets represent a significant portion of the market differentiator or research and development of a product line, OR the vendor’s IT operations are critical to our just-in-time manufacturing or delivery model – that is, ANY outage of the vendor’s IT operations would cause an outage for us that would impact our capability to deliver our products to our customers.
      • Examples: Compromise of the IIP data would allow duplication of our product(s) or significant replication of our research; Outages or tampering with the vendor IT operations would impact manufacturing line operations, etc.
  • High Risk Vendors
    • Criteria: Non-critical IIP assets are shared with this vendor such that, if said assets were compromised, they would represent damage to our long-term product and brand strategies or research and development. Actual product replication would not be enabled, but feature replication might be possible. Outages of the vendor’s IT operations at this level, if protracted, could impact our research and development or our ability to deliver our products to our customers.
      • Examples: Breach of this vendor’s network could expose the design specs for a specific part of the product. Compromise of the vendor could expose our future marketing plan for a product and some of the differentiating features that we plan to leverage. If the vendor’s IT operations were disabled for a protracted time (greater than 48, 72 or 96 hours, for example), our capability to deliver products could be impacted.
  • Routine Risk Vendors
    • Criteria: Non-critical IIP assets may be shared with this vendor tier, and compromise of that IIP may be damaging to our reputation. The IIP, if compromised, would not allow duplication of our product lines, research or the differentiators of our products. In addition to reputational impacts, sharing of data that could impact our sales pipeline/process and/or other secondary systems or processes may be expected if breaches occur at this level. Regulatory or legally protected IIP also resides at this level.
      • Examples: Organizations where customer data, sales & marketing data, employee identification information, etc. are shared (outsourced payment, outsourced HR, etc.) are good examples here. This is the level of risk for any vendor that you share IIP with, in any form, that does NOT immediately empower delivery of your products or impact your longer term R&D efforts or market differentiators… 
  • Low Risk Vendors
    • Criteria: This tier is for vendors that we share NO IIP with, in any form, and vendors that could not directly impact our product delivery via an IT operations outage in any way. A breach at these vendors would have little to no impact on the reputation or capabilities of our firm to operate.
      • Examples: Caterers, business supply companies, temporary employment agencies, hardware and software vendors for non-manufacturing systems, commodity product or component dealers, packaging material suppliers, transport companies, etc.
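To show how crisp these criteria can be, here is a rough sketch of the tier-assignment logic in code. The vendor attributes are hypothetical stand-ins for the criteria above; your real questions will be specific to your business.

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    # Hypothetical yes/no attributes distilled from the criteria above.
    name: str
    shares_mission_critical_iip: bool  # compromise enables product duplication
    outage_halts_delivery: bool        # ANY vendor IT outage stops our delivery
    iip_damages_strategy: bool         # compromise hurts long-term product/R&D plans
    protracted_outage_hurts: bool      # a 48/72/96-hour outage impacts delivery
    shares_any_iip: bool               # any IIP or regulated data shared at all

def assign_tier(v: Vendor) -> str:
    """Walk the tiers from most to least severe, returning the first match."""
    if v.shares_mission_critical_iip or v.outage_halts_delivery:
        return "critical"
    if v.iip_damages_strategy or v.protracted_outage_hurts:
        return "high"
    if v.shares_any_iip:
        return "routine"
    return "low"

caterer = Vendor("caterer", False, False, False, False, False)
print(assign_tier(caterer))  # -> "low"
```

Because each tier reduces to a few clear yes/no questions, re-tiering a vendor whose status changes is as simple as re-answering the questions.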
 
Building such a tiered approach for your vendors creates an easy-to-manage way to prioritize them. The tiered approach will also be very useful in mapping groups of controls to the requirements for each tier. We will cover that shortly, in a future post.

Ideas for Vendor Discovery

One of the most common issues in supply chain security is identifying vendors initially and then maintaining their status over the long term. To meet that challenge, here are some ideas that we have seen work over the years. This post will focus on identifying vendors and refreshing vendor lists. Another post will discuss suggestions for creating vendor tiers, sorting vendors based upon various criteria, and mapping the tiers to required controls.

 
Getting Started:
 
The first step in identifying your vendors and beginning the supply chain security process is to establish responsible parties. Who in the organization will be responsible for establishing the program, and who will be responsible for its oversight? Who will the program report to, and what data is expected as a part of that reporting? This is often assigned to the company’s risk or security department, where available, and flows upward through their management chain to a steering committee or chief executive. In some cases, where security or risk functions don’t formally exist, we have seen supply chain security tasked to either legal or operational teams. Rarest of all, and the least successful in our experience, is when it is assigned to members of the accounting team – mostly because they often lack sufficient technical and risk assessment skills to perform the work optimally.
 
Creating Data Boundaries:
 
Once you know who will do the work, the next step is to establish boundaries and the underlying mechanisms you will use to manage the data. In small companies, this might be as simple as a spreadsheet. Mid-size companies often build a small database or SharePoint repository to hold the data. Large firms often use modules in their enterprise data platforms to manage the data. How you manage the data, though, regardless of your chosen platform, is much less important than setting boundaries on how far back in the vendor supply chain you will go. In our experience, this is an area where organizations often damage their success early by trying to target too large a portion of the vendor population or using too much history. Our suggestion is to use only vendors that are currently serving the company, and then to pick criteria such as “criticality to just-in-time delivery”, “line operations criticality”, gross spend, or other criteria that reflect the potential for large impacts to your operations or central valued assets. For example, if you have vendors that provide raw materials to your factories, and downtime of the line is a significant threat – then focus on those critical suppliers to start. If you are a bank or credit union and you outsource item processing or marketing to your clients/members to a third party – then these vendors could impact the core value of your business, the trust of your clients, so start there. To begin, identify the top 10 or 20 vendors in this group. That becomes the working list to begin the process.
 
Gathering the Data: 
 
Now that you know what vendor data you need and what the boundaries are, how do you actually gather the data? In most cases, the process begins by working with accounts payable to obtain their ranked and sorted list of vendor payees. A quick hint here: check with your disaster recovery and/or business continuity team to see if they already have the data and have vetted it. In many cases the DR/BC folks have done the basic footwork – so you may be able to leverage their processes, data and systems. Either way, once you get the list, it is advisable to do a rationality check with the various lines of business using the vendors. In many cases, their feedback can help you make sure that what accounting says is critical agrees with their operational sense of the world.
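As a rough illustration of this step, the sketch below merges a ranked accounts payable export with the DR/BC team’s vetted list and flags any vendor that needs a line-of-business review. The file names and column names are hypothetical.

```python
import csv

def load_vendors(path: str) -> dict[str, dict]:
    """Index a CSV of vendors by normalized vendor name."""
    with open(path, newline="") as f:
        return {row["vendor_name"].strip().lower(): row
                for row in csv.DictReader(f)}

# Hypothetical inputs: AP's ranked payee list and the DR/BC vetted list.
ap = load_vendors("ap_ranked_vendors.csv")    # columns: vendor_name, annual_spend
drbc = load_vendors("drbc_vendor_list.csv")   # columns: vendor_name, criticality

# Build the initial working list from the top 20 vendors by spend.
top20 = sorted(ap.values(), key=lambda r: -float(r["annual_spend"]))[:20]
for row in top20:
    name = row["vendor_name"].strip().lower()
    # Leverage DR/BC vetting where it exists; otherwise flag the vendor for a
    # rationality check with the line of business that uses it.
    criticality = drbc.get(name, {}).get("criticality", "NEEDS LOB REVIEW")
    print(name, row["annual_spend"], criticality)
```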
 
Once you have the data and have processed it into your systems, you will next want to establish a workflow for how you will use the data, what baselines you will use, etc. We will cover that shortly.
 
Be sure to document the collection processes you used, and create a periodic refresh process for the data based upon them. Optimize that process over time to expand scope, reduce time between updates, etc. Eventually, most organizations settle on monthly or quarterly updates of vendor data, and then sort their vendor assessment efforts based upon tiers. Using and refining such a process will go a long way toward reducing your supply chain risks over time.
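One simple way to make the refresh process stick is to record when each vendor’s data was last verified and flag stale records automatically. A minimal sketch, assuming a hypothetical `last_verified` date per catalog entry:

```python
from datetime import date, timedelta

REFRESH_INTERVAL = timedelta(days=90)  # quarterly; tighten over time

# Hypothetical catalog rows; in practice these live in your spreadsheet,
# SharePoint repository or enterprise data platform.
catalog = [
    {"vendor": "acme_raw_materials", "last_verified": date(2015, 11, 1)},
    {"vendor": "item_processing_co", "last_verified": date(2016, 2, 15)},
]

today = date.today()
stale = [row["vendor"] for row in catalog
         if today - row["last_verified"] > REFRESH_INTERVAL]
print("Vendors due for a data refresh:", stale)
```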

3 Reasons Your Supply Chain Security Program Stinks

  1. Let’s face it, Supply Chain Security and Vendor Risk Management is just plain hard. There are a lot of moving pieces – companies, contacts, agreements, SLAs, metrics, reporting, etc. Suppliers also change frequently, since they have their own mergers/acquisitions, get replaced due to price changes or quality issues, new suppliers are added to support new product lines, and old vendors go away as their product lines become obsolete. Among all of that is cyber-security. MSI has a better and faster way forward – an automated way to reduce the churn – a way to get a concise, easy-to-use and manageable view of your vendors’ security posture. This month, we will show you what we have been doing in secret for some of the largest companies in the world…
  2. Vendors with good security postures often look the same as vendors with dangerous security postures – on paper, at least. You know the drill – review the contracts, maybe they send you an audit or scan report (often aged), maybe they do a questionnaire (if you’re lucky). You get all of this after you chase them down and hound them for it. You hope they were honest. You hope the data is valid. You hope they are diligent. You hope they stay in the same security posture or improve over time, and not the opposite. You hope for a lot. You just don’t often KNOW, and what most companies do know about their vendors is often quite old in Internet terms, and can be far afield from where their security posture is at the moment. MSI can help here too. This month, we will make our passive assessment tool available to the public for the first time. Leveraging it, you will be able to rapidly, efficiently and definitively get a historic and current view of the security posture of your vendors, without their permission or knowledge, with updates as frequent as you desire. You’ll be able to get a definitive audit of their posture, from the eyes of an attacker, in a variety of formats – including direct data feeds back into your GRC tools. Yes, that’s right – you can easily differentiate between good and bad security AND put an end to data entry and keyboarding sessions. We will show you how…
  3. Supply chain security via manual processes just won’t scale. That’s why we have created a set of automated tools and services to help organizations do ongoing assessments of their entire supply chain. You can even sort your supply chain vendors by criticality or impact, and assign more or less frequent testing to those groups. You can get written reports, suitable for auditors – or as we wrote above, data feeds back to your GRC tools directly. We can test tens of vendors or thousands of vendors – whatever you need to gain trust and assurance over your supply chain vendors. The point is, we built workflows, methodologies, services and tools that scale to the largest companies on the planet. This month, we will show you how to solve your supply chain security problems.
 
If you would like a private sneak-peek briefing of our research and the work we have done on this issue, please get in touch with your account executive or drop us a line via info (at) microsolved /dot/ com, call us at (614) 351-1237, or click the request a quote button at the top of our website – http://microsolved.com. We’ll be happy to sit down and walk through it with you.
 
If you prefer to learn more throughout March – stay tuned to https://stateofsecurity.com for more to come. Thanks for reading! 

March is Supply Chain Security Month at MSI

This month, March of 2016, we will be creating and publishing content around supply chain security, vendor risk and our new products and services focused on this area of your business.

For the last 2.5 years, MSI has been working with partners and companies around the world to create new solutions to aid them in the battle of identifying, profiling and auditing the security of their supply chain vendors. Our research in this area has led to the creation of a new line of products and services that we will be making public throughout the month. 

Stay tuned to StateOfSecurity.com for the details as they unfold. In the meantime, if you would like to arrange a special private briefing about our exciting and unique new approaches and tools – give your account executive a call to arrange for a private discussion, capabilities briefing and demo.

As always, thanks for reading – and here’s to helping make supply chain security manageable, efficient and effective for companies of all sizes!

Patch Your Cisco ASAs ASAP!

Many networks employ Cisco Adaptive Security Appliances (ASAs) as firewalls or to set up Virtual Private Networks, etc. Those of you who are among this group should be aware that Cisco published a critical security advisory on February 10 concerning a flaw in their ASA software. There is a vulnerability in the Internet Key Exchange (IKE) code of Cisco ASA Software that could potentially allow an unauthenticated attacker to gain full control of the system, or to cause a reload of the system.
This vulnerability is due to a buffer overflow condition in the function that processes fragmented IKE payloads. Attackers could exploit the flaw by sending crafted UDP packets to the affected system. It should be noted that this vulnerability is bad enough that it was given a maximum CVSS score of 10.
The ASA software on the following products may be affected by this vulnerability:
• Cisco ASA 5500 Series Adaptive Security Appliances
• Cisco ASA 5500-X Series Next-Generation Firewalls
• Cisco ASA Services Module for Cisco Catalyst 6500 Series Switches and Cisco 7600 Series Routers
• Cisco ASA 1000V Cloud Firewall
• Cisco Adaptive Security Virtual Appliance (ASAv)
• Cisco Firepower 9300 ASA Security Module
• Cisco ISA 3000 Industrial Security Appliance
Patches are now available for this flaw. We recommend that vulnerable users of this software apply these patches as soon as possible. For more information see:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20160210-asa-ike
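If you manage a fleet of ASAs, a quick way to triage which devices to patch first could be to check for crypto maps bound to an interface, since the flaw is exposed when the device terminates IKE-based VPNs. Here is a rough sketch that scans saved configuration backups; the directory layout is hypothetical, and this heuristic is a triage aid, not a substitute for the advisory’s own guidance:

```python
import glob
import re

# An applied crypto map (e.g., "crypto map outside_map interface outside")
# suggests the ASA terminates IKE-based VPNs - the exposed configuration.
CRYPTO_MAP_IFACE = re.compile(r"^crypto map \S+ interface \S+", re.MULTILINE)

for path in sorted(glob.glob("configs/*.cfg")):  # hypothetical backup directory
    with open(path) as f:
        config = f.read()
    if CRYPTO_MAP_IFACE.search(config):
        print(f"{path}: crypto map applied to an interface - prioritize patching")
```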

Introducing Tomce

Today I am thrilled to announce that Tomce Kuzevski has joined the MSI team as an intelligence analyst, working on TigerTrax, analytics and machine learning focused services. I took a few minutes of Tomce’s time to ask some intro questions for you to get to know him. Welcome Tomce, and thanks for helping us take TigerTrax services to the next level! 
 
Q – Tomce, you are new to MSI, so tell the readers the story of how you developed your skills and got your spot on the Intelligence Team.
 
A- Ever since I was a kid, I was always into computers and electronics. I can’t tell you how much money my parents spent on computers and electronics for me, only for them to last a week or so. I would take them apart and put them back together constantly, or wipe out the hard drive, not knowing what I had done until later.
 
Growing up, and still to this day, I was always the “go to kid” if someone needed help with computers or electronics, which I didn’t mind at all. I enjoyed trying to figure out the issues. The way I learned was from failing and trying it myself. From when I was a kid to now, I still enjoy it and will continue to enjoy it. I knew I wanted to be in the computer/IT industry.
 
I know Adam through a mutual friend of ours. He posted on FB that MSI was hiring for a spot on their team. I contacted him about the position. He informed me about what they do and what they were looking for, which was right up my alley. I am constantly on the internet searching anything and everything. I had a couple of interviews with Brent and the team, and everything went how it was supposed to. Here I am today, about 7 weeks into it and enjoying it! That’s how I landed my spot on the MSI team.
 
Q – Share with the readers the most interesting couple of things they could approach you about at events for a discussion. What kind of things really get you into a passionate conversation?
 
A- I really enjoy talking about the future of technology. It’s scary and mind blowing at the same time. Being born in the 80’s and seeing the transformation from then to now is scary. But laying on the couch holding my iPhone while Skyping my cousin in Europe, checking FB and ordering a pizza, all in the palm of my hand, is mind blowing. I can’t imagine what the world will be like in the next 25 years.
 
 
Q – I know that since joining our team, one of your big focus areas has been to leverage our passive security assessment and Intel engine – (essentially a slice of the TigerTrax™ platform) to study large scale security postures. You recently completed the holistic testing of a multi-national cellular provider. Tell our readers some of the lessons you learned from that engagement?
 
A- I absolutely could not believe my eyes when I saw what we discovered – such a huge telecom company having so many security issues. I was in the telecom business for 5 years prior to coming to MSI, and I’ve never seen anything like this before. When signing up with a new cell phone provider, I highly recommend doing some “digging” on the company. We use our phones every day, and our phones hold personal and sensitive information. For this cell phone provider, being as big as they are, it was shocking! If you’re looking for a new cell phone provider, please take some time and do some research.
 
 
Q – You also just finished running the entire critical infrastructures of a small nation through our passive assessment tool to support a larger security initiative for their government. Given how complex and large such an engagement is, tell us a bit about some of the lessons you learned there?
 
A- Coming from outside of the IT security world, I never thought I would see so many security issues at such a high level. It is a little scary finding all of this information out. I used to think every company at this level wouldn’t have any flaws. Man, was I wrong! From here on out, I will research every company that I use, currently and in the future. You can’t have a “this is a big company, they’re fine” attitude. You have to go out and do the research.
 
Q – Thanks for talking to us, Tomce. If the readers want to make contact with you or read more about your work, where can they find you?
 
You can reach me @TomceKuzevski via Twitter. I’m constantly posting information security articles about what’s going on in today’s world. Please don’t hesitate to reach out to me.

State Of Security Podcast Episode 10

Episode 10 is now available! 

This time around, we get to learn from the community, as I ask people to call in with their single biggest infosec lesson from 2015. Deeply personal, amazingly insightful and full of kindness to be shared with the rest of the world – thanks to everyone who participated! 

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface-to-interface rules, or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are clearly illustrated above, and explained in detail in the linked Wikipedia article, so I won’t repeat that here.
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the Layered model. This outright contradicts what the Wikipedia article above states:
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — Wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to apply the concept that “more firewalls would need to be compromised to lead to internal network access”, I believe that it in fact reduces the overall security posture in the real world and increases risk. Here’s why I feel that way. Two real-world issues often give designs that look great at first blush, or that “just work” in the lab environment, significant disadvantages in production: control complexity and entropy. Before we dig too deeply into those issues though, let’s talk about how the two models are similar. (Note that we are assuming that the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures both as independent systems and as a control set, in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can adequately occur in either model.
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set does not have cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, analysis at a holistic level is less complex.
 
In the Layered model, the controls are spread across two separate instances, each with different goals, roles and enforcement requirements, and the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and failure at either firewall to adequately control traffic, or to adequately design the rule sets, could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles, is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk.
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” —DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but with a data set of 79,790 incidents resulting in 2,122 breaches, it means a staggering 76 breaches were the result of misconfigurations.
 
This is exactly why control complexity matters. Since control complexity correlates with misconfiguration and human error directly, when complexity rises, so does risk – conversely, when controls are simplified, complexity falls and risk of misconfiguration and human error is reduced.
 
Not to beat on the Wikipedia article and Stuart Jacobs’ assertions, but further compounding the complexity of his suggestion is the use of multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language and configuration mechanism, each with its own managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, the complexity, resources required and oversight needs of this implementation approach would grow rapidly as new devices and alternate connections are added.
 
So, which is less complex: a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness. Many times this is referred to as “control drift” or “configuration drift”. In our case, the control drift over a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, each firewall has a distinct rule set, which will degrade – BUT the rule sets are interdependent on each other – giving an effective score of 2 for each firewall. Thus, you can easily see that as each firewall’s rule set degrades, the private network’s “view” of the risk increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster, and is likely to result in more impact to risk. Again, add the additional complexity of different types of firewalls and distinct vendors for each, and the entropy will simply eat you alive…
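To put rough numbers on that intuition (a toy model, with assumed drift rates rather than measured ones), treat each change window as compounding a small amount of drift per interdependent rule set:

```python
def drift_exposure(rule_sets: int, periods: int, drift: float = 0.02) -> float:
    """Toy model: each rule set accumulates ~2% drift per change window, and
    exposure compounds across interdependent rule sets."""
    return (1 + drift) ** (rule_sets * periods)

# Two years of monthly change windows:
print("3 Legged (1 rule set):", round(drift_exposure(1, 24), 2))   # ~1.61
print("Layered  (2 rule sets):", round(drift_exposure(2, 24), 2))  # ~2.59
```

The absolute numbers are meaningless; the point is that interdependent rule sets compound each other’s drift, so the layered deployment degrades faster for the same rate of change.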
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to the detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host that can talk to the internal network from the DMZ in a manner that you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices that are designed for security and detection. Most attackers ignore the firewalls as a target, and instead attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the implementation models are equal. Thus, to implement the Layered model over the 3 Legged model, is to increase risk, both initially, and at a more rapid pace over time for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Ask The Experts: Devaluing 0-days

Earlier this week, I heard an awesome speech at Columbus BSides about the economics of exploit kits and e-crime. As a follow-up, I thought it would be worthwhile to ask my fellow MSI co-workers if they felt there was a way to devalue 0-day vulnerabilities.

Jim Klun responded with…

I don’t think you can ever really devalue the worth of a 0-day, given how Internet/computer usage has been universally adopted for all human activity. The only thing I can imagine is making the chance of a 0-day being discovered in an area of computing that really matters as small as possible. So that means forcing – through law – all sensitive infrastructure (public or private) and comm channels to subscribe to tight controls on what can be used and how things can work, with ongoing inspection and fines/jail time for slackers. Really – don’t maintain your part of the Wall properly, let the Mongols in and get some villages sacked, and it’s your head.

I would have techs who are allowed to touch such infrastructure (or develop for it) uniformly trained and licensed at the federal level. A formal process would exist for them to do 0-day research and reporting. Outsiders can do the same… but if they announce without a chance for defensive response, jail. And for all those who do play the game properly and find 0-days within the reduced space of critical infrastructure/software – money and honor.

Brent Huston added his view…

That’s a tough question, because you are asking to both devalue something, yet make it valuable for a different party. This is called market transference.

So for example, we need to somehow change the “incentive” to a “currency” that is non-redeemable by bad guys. The problem with that is – no matter how you transfer the currency mechanism, it is likely that it simply creates a different variant of the underground market.

For example, let’s say we make 0-days redeemable by good guys for a tax credit, so they can turn them in to the IRS and get a tax credit in dollars for the work… Seems pretty sound. Bad guys can’t redeem the tax credits without giving up anonymity. However, it reinforces the underground market and turns potential good guys into buyers.

Plus, 0-days still have intrinsic value – i.e., other bad guys will still buy them for crime as long as the output of that crime has value. Thus, you actually might increase the number of people working on 0-day research. This is a great example of where market transference might well raise the value of 0-days on the underground market (more bidders) and the population of attackers looking for them (to sell or leverage for crime).

Lisa Wallace also provided her perspective…

Create financial incentives for corporations to catch them before release: you get X if your product has no discovered 0-days in Y time.

Last but not least, Adam Hostetler weighed in when asked if incentives for the good guys would help devalue 0-days…

That’s the current plan of a lot of big corporations, at least in web apps. I don’t think that really devalues them, though. I don’t see any reasonable way to control that without strict control of network traffic, eavesdropping, etc., or “setting the information free”.