High-Level FAQ on Attack Surface Mapping

Q: What is attack surface mapping?

A: Attack surface mapping is a technique used to identify and assess potential attack vectors on a system or network. It involves analyzing a system's components, data flows, and security controls to uncover potential vulnerabilities.

Q: What are the benefits of attack surface mapping?

A: Attack surface mapping helps organizations to better understand their security posture, identify weaknesses, and deploy appropriate controls. It can also help reduce risk by providing visibility into the system's attack surface, allowing organizations to better prepare for potential threats.

Q: What are the components involved in attack surface mapping?

A: Attack surface mapping involves examining the various components of a system or network, including hardware, software, infrastructure, data flows, and security controls. It also includes evaluating the system’s current security posture, identifying potential attack vectors, and deploying appropriate controls.

Q: What techniques are used in attack surface mapping?

A: Attack surface mapping typically involves using visual representations such as mind-maps, heat maps, and photos to illustrate the various components and data flows of a system. In addition, it may involve using video demonstrations to show how potential vulnerabilities can be exploited.
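To make this concrete, here is a minimal sketch (in Python, purely as an illustration) of how an attack surface map might be captured as plain data, with unmitigated entry points flagged for review. All of the component names and fields are hypothetical examples, not part of any standard or tool.

    # Minimal sketch of an attack surface inventory kept as plain data.
    # All names and fields are hypothetical examples.
    from dataclasses import dataclass, field

    @dataclass
    class EntryPoint:
        name: str             # e.g., "customer web portal"
        protocol: str         # e.g., "https"
        exposure: str         # "internet", "partner", or "internal"
        controls: list = field(default_factory=list)  # e.g., ["WAF", "MFA"]

    def unmitigated(points):
        """Return entry points with no security controls recorded."""
        return [p for p in points if not p.controls]

    surface = [
        EntryPoint("customer web portal", "https", "internet", ["WAF", "MFA"]),
        EntryPoint("legacy FTP drop", "ftp", "internet"),
        EntryPoint("admin VPN", "ipsec", "internet", ["MFA"]),
    ]

    for p in unmitigated(surface):
        print(f"Review needed: {p.name} ({p.protocol}, {p.exposure})")

Even a simple inventory like this makes it obvious which exposed components lack controls, which is the heart of the mapping exercise.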

3 Steps To Increase Cyber Security At Your Dealership

Car dealerships and automotive groups are juicy targets for cybercriminals because of their wealth of identity and financial information. Cyber security at many dealerships is lax; many don't even have full-time IT teams, and fewer still have cybersecurity risk management skills in house. This is changing for the better: as dealerships become more data-centric and more automated, many are becoming more proactive against cybersecurity threats.

In addition to organized criminals seeking to capture and sell personal information, global threats stemming from phishing, malware, ransomware and social engineering also plague dealerships. Phishing and ransomware are among the leading causes of financial losses tied to cybersecurity in the dealership space. Even as federal regulators refine their focus on dealerships as financial institutions, more and more attackers have shifted their attention toward automotive sales.

Additionally, a short walk through social media shows that dealerships are a common target for consumer anger, frustration and threats. Some of that anger has turned into physical security concerns, and it is almost certain that some of the industry's network and data breaches can be tied back to this form of "hacktivism". In fact, spend some time on Twitter or in chat rooms and you can find conversations and a variety of information about hacking dealership wireless networks and WiFi cameras. These types of cybersecurity incidents are becoming more and more common.

With all of this cybersecurity attention on dealerships, are there any quick wins to be had? We asked our MSI team and the folks we work with at the SecureDrive Alliance that very question. Here are the three best tips they could put forth:

1) Perform a yearly cybersecurity risk assessment – this should be a comprehensive view of your network architecture, security posture, defenses, detection tools, incident response plans and disaster recovery/business continuity capabilities. It should include a complete inventory of all PII and the threats that your dealership faces. Usually this is combined with penetration testing and vulnerability assessment of your information systems to measure network security and computer security, as well as to address issues with applications and social engineering.

2) Ensure that all customer wireless networks and physical security systems are logically and physically segmented from operations networks – all networks should be hardened in accordance with information security best practices and separated from the networks used for normal operations, especially finance and other PII-related processes. Network traffic from the customer wireless networks should only be allowed to traverse the firewall to the Internet, and may even warrant its own Internet connection, such as a cable modem or the like. Cameras and physical security systems should be hardened against attacks, and all common credentials and default passwords should be changed. Software updates for all systems should be applied on a regular basis. (A simple way to sanity-check this segmentation is sketched after this list.)

3) Train your staff to recognize phishing, eliminate password re-use among systems and applications, and report cybersecurity attacks to the proper team members – your staff is your single best means of detecting cyber threats. The more you train them to identify and resist dangerous behaviors, the stronger your cybersecurity maturity will be. Training staff members to recognize, handle, report and resist cyber risks is one of the strongest value propositions in information security today. The more your team members know about your dealership's security protocols, service providers and threats, the more effective they can be at protecting the company and themselves. Building a training resource center, setting up a single point of contact for reporting issues, and sending out email blasts about the latest threats are all great ways to keep your team on top of the issues.
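As referenced in tip 2, here is a toy sanity check for that segmentation, assuming a hypothetical rule table exported from a firewall. The segment names and rules are examples only, not from any particular product:

    # Toy segmentation check against a hypothetical exported rule table.
    # Segment names and rules are illustrative examples only.
    RULES = [
        {"src": "guest_wifi", "dst": "internet", "action": "allow"},
        {"src": "guest_wifi", "dst": "ops_lan", "action": "deny"},
        {"src": "cameras", "dst": "internet", "action": "deny"},
    ]

    SENSITIVE = {"ops_lan", "finance_lan"}

    def check_guest_isolation(rules):
        """Flag any rule letting the guest segment reach a sensitive segment."""
        return [r for r in rules
                if r["src"] == "guest_wifi"
                and r["dst"] in SENSITIVE
                and r["action"] == "allow"]

    for r in check_guest_isolation(RULES):
        print("Segmentation violation:", r)

A check like this, run whenever rules change, helps keep the customer network limited to Internet-only access as described above.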

There you have it: three quick and easy wins to help your dealership do its due diligence in keeping things cyber secure. These three basic steps will go a long way toward protecting the business, meeting the requirements of your regulatory authority and reducing the chances of substantial harm from cyber attacks. As always, remaining vigilant and attentive can turn the tide.

If you need any assistance with cybersecurity, risk management, penetration testing or training, MicroSolved and the SecureDrive Alliance are here to help. Whether you're a small business or a large auto group, our risk management and information security processes, based on the cybersecurity framework from the National Institute of Standards and Technology (NIST), will get you on the road to effective data security. Simply contact MSI via this web form, or the SecureDrive Alliance via our site, and we will be happy to have a no-cost, no-hassle discussion to see how we can assist you.

All About FINRA Risk Assessments

FINRA (Financial Industry Regulatory Authority) requires an enterprise risk assessment once per year for all member firms. This risk assessment should be completed using the NIST Cybersecurity Framework, if appropriate for the size of the organization. At MSI, we fully embrace the NIST framework and use it routinely in our approach to information security and risk management.

Who Performs the FINRA Risk Assessment?

The FINRA requirements for risk assessment include that it be completed by independent third-party assessors, if possible, or otherwise by internal information security experts (if qualified and available). MSI’s approach is to work WITH our client’s internal team members, including them in the process, and leveraging their deep knowledge of the firm’s operations, while still maintaining our independence. In our experience, this provides the best return on investment for the risk assessment, and allows granular analysis without draining critical internal client resources.

What Analysis Does the FINRA Risk Assessment Require?

Each FINRA risk assessment should include an inventory of all critical data, PII and other sensitive information. Then, each asset should be reviewed for its impact on the business, and the relevant controls, risks, mitigations and residual risks should be identified. This process requires deeper knowledge of cyber security than most firms are comfortable with, and the experience and attention to detail of the assessor can make or break the value of the assessment.
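For illustration only, a minimal sketch of that per-asset roll-up might look like the following. The qualitative scale and the simple subtraction for controls are assumptions made for the example, not FINRA or MSI methodology:

    # Illustrative per-asset analysis for a risk assessment.
    # The scale and control adjustment are hypothetical choices.
    LEVELS = {"low": 1, "medium": 2, "high": 3}

    def residual_risk(impact, likelihood, control_strength):
        """Crude qualitative roll-up: inherent risk reduced by controls."""
        inherent = LEVELS[impact] * LEVELS[likelihood]
        reduced = max(1, inherent - LEVELS[control_strength])
        return "low" if reduced <= 2 else "medium" if reduced <= 4 else "high"

    assets = [
        {"name": "customer PII database", "impact": "high",
         "likelihood": "medium", "controls": "high"},
        {"name": "trading front end", "impact": "high",
         "likelihood": "medium", "controls": "medium"},
    ]

    for a in assets:
        print(a["name"], "->", residual_risk(a["impact"], a["likelihood"], a["controls"]))

The real value of the assessment lies in how defensibly each of those ratings is assigned, which is where assessor experience matters.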

Is the FINRA Risk Assessment Affordable?

Since the workload of a risk assessment varies greatly based on the size and complexity of the organization being assessed, assessments of smaller firms are naturally more affordable than those of larger firms. Risk assessments are affordable for nearly every firm today, and the work plans can be easily customized to fit even the tightest of budgets. In addition, when working with experienced and knowledgeable assessors, the cost can be even lower and the results even more valuable. At MSI, our assessment team has more than 15 years of experience across a wide variety of sizes, types and operational styles of client firms. You won't find any "on the job training" here; our experts are among the best and most recognized in the world. We are excellent at what we do, and we can help your firm get the best ROI on a risk assessment in the industry.

How Do I Get Started on a FINRA Risk Assessment from MSI?

Simply drop us a line via this web form, or give us a call at (614) 351-1237 to arrange for a free, no hassle call with our team. We’ll explain how our process works, gather some basic information and provide you with a proposal. We’d love the chance to talk with you, and be of service to your firm. At MSI, we build long-term client relationships and we truly want to partner to help your firm be more successful, safer and manage the risks of the online world more easily. Give us a call today! 

A Quick Expert Conversation About Gap Assessment

Gap Assessment Interview with John Davis

What follows is a quick interview session with John Davis, who leads the risk assessment/policy/process team at MicroSolved. We completed the interview in January of 2020, and below are the relevant parts of our conversation.

Brent Huston: “Thanks for joining me today, John. Let’s start with what a gap assessment is in terms of HIPAA or other regulatory guidance.”

John Davis: "Thanks for the chance to talk about gap assessment. I have run into several HIPAA-covered entities, such as hospitals and health systems, who do HIPAA gap analysis / gap assessment in lieu of HIPAA risk assessment. Admittedly, gap assessment is the bulk of risk assessment; however, a gap assessment does not go to the point of assigning a risk rating to the gaps found. It also doesn't go to the extent of addressing other risks to PHI that aren't covered in HIPAA/HITECH guidance."

BH: “So, in some ways, the gap assessment is more of an exploratory exercise – certainly providing guidance on existing gaps, but faster and more affordable than a full risk assessment? Like the 80/20 approach to a risk assessment?”

John Davis: “I suppose so, yes. The price is likely less than a full blown risk assessment, given that there is less analysis and reporting work for the assessment team. It’s also a bit faster of an engagement, since the deep details of performing risk analysis aren’t a part of it.”

BH: “Should folks interested in a gap assessment consider adding any technical components to the work plan? Does that combination ever occur?”

JD: “I can envision a gap assessment that also includes vulnerability assessment of their networks / applications. Don’t get me wrong, I think there is immense value in this approach. I think that to be more effective, you can always add a vulnerability assessment to gauge how well the policies and processes they have in place are working in the context of the day-to-day real-world operations.”

BH: “Can you tie this back up with what a full risk assessment contains, in addition to the gap assessment portion of the work plan?”

JD: “Sure! Real risk assessment includes controls and vulnerability analysis as regular parts of the engagement. But more than that, a complete risk assessment also examines threats and possibilities of occurrence. So, in addition to the statement of the gaps and a roadmap for improvement, you also get a much more significant and accurate view of the data you need to prioritize and scope many of the changes and control improvements needed. In my mind, it also gets you a much greater view of potential issues and threats against PHI than what may be directly referenced in the guidance.” 

BH: “Thanks for clarifying that, John. As always, we appreciate your expert insights and experience.”

JD: “Anytime, always happy to help.”

If you’d like to learn more about a gap assessment, vulnerability assessment or a full blown risk assessment against HIPAA, HITECH or any other regulatory guidance or framework, please just give us a call at (614) 351-1237 or you can click here to contact us via a webform. We look forward to hearing from you. Get in touch today! 

3 Reasons Your Supply Chain Security Program Stinks

  1. Let's face it, Supply Chain Security and Vendor Risk Management is just plain hard. There are a lot of moving pieces – companies, contacts, agreements, SLAs, metrics, reporting, etc. Suppliers also change frequently: they have their own mergers and acquisitions, they get replaced due to price changes or quality issues, new suppliers are added to support new product lines, and old vendors go away as their product lines become obsolete. Amid all of that is cyber-security. MSI has a better and faster way forward – an automated way to reduce the churn – a way to get a concise, easy-to-use and manageable view of your vendors' security posture. This month, we will show you what we have been doing in secret for some of the largest companies in the world… 
  2. Vendors with good security postures often look the same on paper as vendors with dangerous security postures. You know the drill – review the contracts; maybe they send you an audit or scan report (often aged); maybe they complete a questionnaire (if you're lucky). You get all of this after you chase them down and hound them for it. You hope they were honest. You hope the data is valid. You hope they are diligent. You hope they maintain or improve their security posture over time, not the opposite. You hope for a lot. You just don't often KNOW, and what most companies do know about their vendors is often quite old in Internet terms and can be far afield from where their security posture is at the moment. MSI can help here too. This month, we will make our passive assessment tool available to the public for the first time. Leveraging it, you will be able to rapidly, efficiently and definitively get a historic and current view of the security posture of your vendors, without their permission or knowledge, with updates as frequent as you desire. You'll be able to get a definitive audit of their posture, from the eyes of an attacker, in a variety of formats – including direct data feeds back into your GRC tools. Yes, that's right – you can easily differentiate between good and bad security AND put an end to data entry and keyboarding sessions. We will show you how… 
  3. Supply chain security via manual processes just won't scale. That's why we have created a set of automated tools and services to help organizations do ongoing assessments of their entire supply chain. You can even sort your supply chain vendors by criticality or impact, and assign more or less frequent testing to those groups (a simple sketch of this tiering follows this list). You can get written reports, suitable for auditors – or, as we wrote above, data feeds back to your GRC tools directly. We can test tens of vendors or thousands – whatever you need to gain trust and assurance over your supply chain. The point is, we built workflows, methodologies, services and tools that scale to the largest companies on the planet. This month, we will show you how to solve your supply chain security problems.
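As a rough illustration of that tiering idea, the sketch below sorts a hypothetical vendor list by criticality and assigns an assessment cadence. The vendor names and cadences are made-up examples, not defaults from any MSI tool:

    # Illustrative vendor tiering: sort by criticality, assign a cadence.
    # Names and cadence values are hypothetical examples.
    CADENCE_DAYS = {"critical": 30, "high": 90, "standard": 365}

    vendors = [
        {"name": "core SaaS platform", "criticality": "critical"},
        {"name": "payment processor", "criticality": "critical"},
        {"name": "office supplies vendor", "criticality": "standard"},
    ]

    def schedule(vendors):
        """Yield (vendor, days between assessments), most critical first."""
        order = {"critical": 0, "high": 1, "standard": 2}
        for v in sorted(vendors, key=lambda v: order[v["criticality"]]):
            yield v["name"], CADENCE_DAYS[v["criticality"]]

    for name, days in schedule(vendors):
        print(f"{name}: reassess every {days} days")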
 
If you would like a private sneak peek briefing of our research and the work we have done on this issue, please get in touch with your account executive or drop us a line via info (at) microsolved /dot/ com, call us at (614) 351-1237 or click the request a quote button at the top of our website – http://microsolved.com. We'll be happy to sit down and walk through it with you.
 
If you prefer to learn more throughout March – stay tuned to https://stateofsecurity.com for more to come. Thanks for reading! 

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface to interface rules or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The "Layered Model" or "dual firewall" – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are explained in detail in the linked wikipedia article, so I won't repeat that here.
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the layered model. This outright contradicts what the wikipedia article above states: 
 
     "The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ." — wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to embody the idea that "more firewalls would need to be compromised to lead to internal network access", I believe that, in fact, it reduces the overall security posture in the real world and increases risk. Here's why I feel that way. Two real-world issues often give things that look great at first blush, or that "just work" in the lab, significant disadvantages in production: control complexity and entropy. Before we dig too deeply into those issues, though, let's talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures, both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both of the models, traffic from the DMZ segment(s) pass through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since in both cases, traffic is appropriately filtered, authorization, logging and alerting can adequately occur in both models. 
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set does not have cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, analysis at a holistic level is less complex.
 
In the Layered model, the controls are contained across two separate instances, each with different goals, roles and enforcement requirements. However, the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and failures at either firewall to adequately control traffic or adequately design the rule sets could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk. 
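To make that complexity argument concrete, here is a toy back-of-the-envelope model (my own illustration, not figures from any study). With a single rule set, a reviewer audits each rule once; with two interdependent rule sets, each cross-device interaction must also be checked, so the review burden grows with the product of the rule counts rather than their sum:

    # Toy model of review burden (illustrative only): a single firewall's
    # rules are audited individually, while interdependent rule sets must
    # also be checked pairwise for consistent end-to-end behavior.
    def single_firewall_checks(n_rules):
        return n_rules

    def layered_checks(n_outer, n_inner):
        # Each outer rule's effect can depend on inner rules it interacts with.
        return n_outer + n_inner + (n_outer * n_inner)

    print(single_firewall_checks(100))  # 100 checks for one unified rule set
    print(layered_checks(50, 50))       # 2600 checks for the same policy split in two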
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
"As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an 'error' is." — DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 confirmed breaches, it still works out to a staggering 76 breaches that were the result of misconfigurations.
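A quick sanity check of that arithmetic, using only the figures quoted above:

    # Quick check of the figure quoted above.
    breaches = 2122
    misconfig_rate = 0.036
    print(round(breaches * misconfig_rate))  # ~76 breaches tied to misconfiguration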
 
This is exactly why control complexity matters. Since control complexity correlates directly with misconfiguration and human error, when complexity rises, so does risk – conversely, when controls are simplified, complexity falls and the risk of misconfiguration and human error is reduced.
 
Not to beat on the wikipedia article and Stuart Jacobs' assertions, but his suggestion is further compounded by multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type, with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At an enterprise scale, the complexity, resources required and oversight needs of this implementation approach grow rapidly as new devices and alternate connections are added.
 
So, which is less complex: a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency of all things to break down? You know what I am about to point out, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both of our models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness; many times this is referred to as "control drift" or "configuration drift". With a single unified rule set, changes apply directly to behavior and effectiveness, and drift can be observed and corrected in one place. In the Layered model, however, each firewall has a distinct rule set that will degrade, and the two are interdependent, so the effect of drift compounds across them. As each firewall's rule set degrades, the risk to the private network increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster and is likely to have more impact on risk. Again, add the additional complexity of different types of firewalls and distinct vendors for each, and the entropy will simply eat you alive…
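Here is a toy simulation of that compounding effect (an illustration of the argument, not data from any study). Assume each rule-set change carries a small chance of introducing an overly permissive rule; with two interdependent rule sets, the policy is exposed if either side drifts:

    # Toy drift model (illustrative only): probability that at least one
    # overly-permissive rule slips in after n changes, for one rule set
    # versus two interdependent rule sets changed in tandem.
    def p_drift(changes, p_error=0.02):
        """Chance of at least one bad rule after `changes` modifications."""
        return 1 - (1 - p_error) ** changes

    n = 50  # changes per rule set per year (hypothetical)
    single = p_drift(n)
    layered = 1 - (1 - p_drift(n)) ** 2  # either of two rule sets can drift

    print(f"single rule set: {single:.0%}")      # ~64%
    print(f"layered (two sets): {layered:.0%}")  # ~87%

The exact numbers depend entirely on the assumed error rate, but the direction is the point: the more interdependent rule sets there are, the faster drift accumulates.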
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause "friction" against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker's presence and engagement of the incident response process.
 
However, let's say that you are the attacker, trying to find a host that can talk to the internal network from the DMZ in a manner that you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices designed for security and detection. Most attackers ignore the firewalls as a target and continue to attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren't usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and "strength" (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control "strength". This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

State Of Security Podcast Episode 4

We are proud to announce the release of State Of Security, the podcast, Episode 4. This time around I am hosting John Davis, who riffs on policy development for modern users, crowdsourcing policy and process management, rational risk assessment and a bit of history.

Give it a listen and let us know what you think!

Thanks for supporting the podcast!

How to Use Risk Assessment to Secure Your Own Home

Risk assessment and treatment is something we all do, consciously or unconsciously, every day. For example, when you look out the window in the morning before you leave for work, see the sky is gray and decide to take your umbrella with you, you have just assessed and treated the risk of getting wet in the rain. In effect, you have identified a threat (rain) and a vulnerability (you are subject to getting wet), you have analyzed the possibility of occurrence (likely) and the impact of threat realization (having to sit soggy at your desk), and you have decided to treat that risk (taking your umbrella). That is risk assessment.

However, this kind of risk assessment is what is called ad hoc. All of the analysis and decision making you just did was informal and done on the fly. Pertinent information wasn't gathered and factored in, other consequences such as the bother of carrying the umbrella around weren't properly considered, other treatment options weren't considered, etc. What businesses and government agencies have learned from long experience is that if you investigate, write down and consider such factors rationally and holistically, you end up with a more realistic idea of what you are really letting yourself in for, and therefore you make better risk decisions. That is formal risk assessment.

So why not apply this more formal risk assessment technique to important matters in your own life, such as securing your home? It's not really difficult, but you do have to know how to go about it. Here are the steps:

  1. System characterization: For home security, the system you are considering is your house, its contents, the people who live there, the activities that take place there, etc. Although you know these things intimately, it never hurts to write them down. Something about viewing information on the written page helps clarify it in our minds.

  2. Threat identification: In this step you imagine all the things that could threaten the security of your home and family. These would be such things as fire, bad weather, intruders, broken pipes, etc. For this (and other steps in the process), you can go beyond your own experience and see what threats other people have identified (e.g., Google searches, insurance publications).

  3. Vulnerability identification: This is where you pair up the threats you have just identified with weaknesses in your home and its use. For example, perhaps your house is located on low ground that is subject to flooding, or you live in a neighborhood where burglaries may occur, or you have old ungrounded electrical wiring that may short and cause a fire. These are all vulnerabilities.

  4. Controls analysis: Controls analysis is simply listing the security mechanisms you already have in place. For example, security controls used around your home would be such things as locks on the doors and windows, alarm systems, motion-detecting lighting, etc.

  5. Likelihood determination: In this step you decide how likely it is that the threat/vulnerability will actually occur. There are really two ways you can make this determination. One is to make your best guess based on knowledge and experience (qualitative judgement). The second is to do some research and calculation and try to come up with actual percentage numbers (quantitative judgement). For home purposes I definitely recommend qualitative judgement. You can simply rate the likelihood of occurrence as high, medium or low.

  6. Impact analysis: In this step you decide what the consequences of threat/vulnerability realization will be. As with likelihood determination, this can be judged quantitatively or qualitatively, but for home purposes I recommend looking at worst-case scenarios. For example, if someone broke into your home, it could result in something as low impact as minor theft or vandalism, or it could result in very high impact such as serious injury or death. You should keep these more dire extremes in mind when you decide how you are going to treat the risks you find.

  7. Risk determination: Risk is determined by factoring how likely threat/vulnerability realization is together with the magnitude of the impact that could occur and the effectiveness of the controls you already have in place. For example, you could rate the possibility of home invasion occurring as low, and the impact of the occurrence as high. This would make your initial risk rating a medium. Then you factor in the fact that you have an alarm system and pick-resistant door locks in place, which would lower your final risk rating to low. That final rating is known as residual risk. (A simple sketch of this roll-up appears after this list.)

  8. Risk treatment: That's it! Once you have determined the level of residual risk, it is time to decide how to proceed from there. Is the risk of home invasion low enough that you think you don't need to apply any other controls? That is called accepting risk. Is the risk high enough that you feel you need to add more security controls to bring it down? That is called risk limitation or remediation. Do you think that the overall risk of home invasion is just so great that you have to move away? That is called risk avoidance. Do you not want to treat the risk yourself at all, and so you get extra insurance and hire a security company? That is called risk transference.
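To show how steps 5 through 7 fit together, here is a minimal sketch of the qualitative roll-up. The rating scale and the one-level reduction for strong controls are illustrative assumptions, not a formal method:

    # Minimal sketch of qualitative risk roll-up from the steps above.
    # The scale and one-level control reduction are illustrative choices.
    LEVELS = ["low", "medium", "high"]

    def initial_risk(likelihood, impact):
        """Average qualitative likelihood and impact, rounding up."""
        score = (LEVELS.index(likelihood) + LEVELS.index(impact) + 1) // 2
        return LEVELS[score]

    def residual_risk(initial, strong_controls):
        """Strong existing controls drop the rating one level."""
        i = LEVELS.index(initial)
        return LEVELS[max(0, i - 1)] if strong_controls else initial

    # Home invasion example from the text: low likelihood, high impact.
    r0 = initial_risk("low", "high")               # -> "medium"
    r1 = residual_risk(r0, strong_controls=True)   # alarm + good locks -> "low"
    print(r0, "->", r1)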

So, next time you have to make a serious decision in your life such as changing jobs or buying a new house, why not apply the risk assessment process? It will allow you to make a more rational and informed decision, and you will have the comfort of knowing you did your best in making the decision. 

Thanks to John Davis for this post.

Three Danger Signs I Look for when Scoping Risk Assessments

Scoping an enterprise-level risk assessment can be a real guessing game. One of the main problems is that it's much more difficult and time consuming to do competent risk assessments of organizations with shoddy, disorganized information security programs than of organizations with complete, well organized ones. There are many reasons why this is true, but generally it is because attaining accurate information is more difficult and because one must dig more deeply to ascertain the truth. So when I want to quickly judge the state of an organization's information security program, I look for "danger" signs in three areas.

First, I’ll find out what kinds of network security assessments the organization undertakes. Is external network security assessment limited to vulnerability studies, or are penetration testing and social engineering exercises also performed on occasion? Does the organization also perform regular vulnerability assessments of the internal network? Is internal penetration testing also done? How about software application security testing? Are configuration and network architecture security reviews ever done?

Second, I look to see how complete and well maintained their written information security program is. Does the organization have a complete set of written information security policies that cover all of the business processes, IT processes and equipment used by the organization? Are there detailed network diagrams, inventories and data flow maps in place? Does the organization have written vendor management, incident response and business continuity plans? Are there written procedures in place for all of the above? Are all of these documents updated and refined on a regular basis? 

Third, I’ll look at the organization’s security awareness and training program. Does the organization provide security training to all personnel on a recurring basis? Is this training “real world”? Are security awareness reminders generously provided throughout the year? If asked, will general employees be able to tell you what their information security responsibilities are? Do they know how to keep their work areas, laptops and passwords safe? Do they know how to recognize and resist social engineering tricks like phishing emails? Do they know how to recognize and report a security incident, and do they know their responsibilities in case a disaster of some kind occurs?

I've found that if the answer to all of these questions is "yes", you will have a pretty easy time conducting a thorough risk assessment of the organization in question. All of the information you need will be readily available and employees will be knowledgeable and cooperative. Conversely, I've found that if the answer to most (or even some) of these questions is "no", you are going to have more problems and delays to deal with. And if the answer to all of these questions is "no", you should really build in plenty of extra time for the assessment. You will need it! (A toy version of this heuristic is sketched below.)
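As promised above, here is a toy version of that scoping heuristic: count the "yes" answers to the danger-sign questions and pad the assessment schedule accordingly. The padding percentages and day counts are hypothetical, not MSI figures:

    # Toy scoping heuristic: more "no" answers -> more schedule padding.
    # Thresholds and percentages are hypothetical examples.
    def schedule_padding(yes_answers, total_questions):
        coverage = yes_answers / total_questions
        if coverage >= 0.9:
            return 0.0   # mature program: little extra time needed
        if coverage >= 0.5:
            return 0.25  # some gaps: pad the schedule by a quarter
        return 0.5       # mostly "no": build in plenty of extra time

    base_days = 20
    pad = schedule_padding(yes_answers=6, total_questions=18)
    print(f"Plan for about {base_days * (1 + pad):.0f} days")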

Thanks to John Davis for this post.

MSI Launches New Threat Modeling Offering & Process

Yesterday, we were proud to announce a new service offering and process from MSI. This is a new approach to threat modeling that allows organizations to proactively model their threat exposures and the changes in their risk posture, before an infrastructure change is made, a new business operation is launched, a new application is deployed or other IT risk impacts occur.

Using our HoneyPoint technology, organizations can effectively model new business processes, applications or infrastructure changes and then deploy the emulated services in their real world risk environments. Now, for the first time ever, organizations can establish real-world threat models and risk conditions BEFORE they invest in application development, new products or make changes to their firewalls and other security tools.

Even more impressive is that the process generates real-world risk metrics, including frequency of interaction with services, frequency of interaction with various controls, frequency of interaction with emulated vulnerabilities, human attackers versus automated tools, and insight into attacker capabilities, focus and intent! No longer will organizations be forced to guess at their threat models; now they can establish them with defendable, real-world values!

Much of the data created by this process can be plugged directly into existing risk management systems, risk assessment tools and methodologies. Real-world values can be established for many of the variables and other metrics that, in the past, have been decided by "estimation".

Truly, if RISK = THREAT X VULNERABILITY, then this new process can establish that THREAT variable for you, even before typical security tools like scanners, code reviews and penetration testing have a rough implementation to work against to measure VULNERABILITY. Our new process can be used to model threats, even before a single line of real code has been written – while the project is still in the decision or concept phases!
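As a sketch of what that could look like, the toy calculation below plugs measured interaction metrics into the THREAT side of that equation. The weights and counts are hypothetical stand-ins for the kind of data the emulated services could produce, not HoneyPoint's actual output format:

    # Sketch of plugging measured threat data into RISK = THREAT x VULNERABILITY.
    # Event counts and weights are hypothetical stand-ins.
    def threat_score(interactions_per_week, human_fraction):
        """More frequent and more human-driven probing -> higher threat."""
        return interactions_per_week * (1 + human_fraction)

    def risk(threat, vulnerability):
        # vulnerability on a 0..1 scale, from scans/code review once available
        return threat * vulnerability

    t = threat_score(interactions_per_week=120, human_fraction=0.3)  # measured
    print(risk(t, vulnerability=0.4))  # 62.4, a relative value for comparing scenarios

The absolute number means little on its own; the value is in comparing scenarios measured the same way, which is what replacing "estimation" with observed data buys you.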

We presented this material at the local ISSA chapter meeting yesterday. The slides are available here:

Threat Modeling Slides

Give us a call and schedule a time to discuss this new capability with an engineer. If your organization is ready to add some maturity and true insight into its risk management and risk assessment processes, then this just might be what you have been waiting for.