What Is MSI Passive Assessment & How Does It Empower Supply Chain Security?

MSI’s passive assessment represents a new approach to understanding the security risks associated with an organization, whether it is your own or that of a vendor, prospect or business partner. MSI’s passive assessment leverages the unique power of the MSI TigerTrax™ analytics platform to perform automated research, intelligence gathering and correlation from hundreds of sources, both public and private, that describe the effective security posture of an organization.
 
The engine combines the power of hundreds of existing tools and data sources to build a definitive profile of an organization’s security posture. These include:
  • open source intelligence
  • corporate data analytics
  • honeypot sources
  • deep & dark net search engines
  • other data mining tools 
 
MSI’s passive assessment gives you current and historical information about the security posture of the target, such as:
  • Current IOCs associated with them or their hosted applications/systems (perfect for cloud environments!)
  • Historic campaigns, breaches or outbreaks that have been identified or reported in public and in our proprietary intelligence sources
  • Leaked credentials, account information or intellectual property associated with the target
  • Underground and dark net data associated with the target
  • Misconfigurations or risky exposures of systems and services that could empower attackers
  • Public vulnerabilities
  • Other relevant intelligence about their risks, threats and vulnerabilities – new sources added weekly…
 
Best of all, it gathers and correlates that data without touching the target’s network or systems directly in any way. That means you do not need the organization’s permission, and they need not even know about your research, so you can keep your interest private!
 
In the supply chain security use case, the tool can be run against organizations either as a replacement for full risk assessment processes or as an initial triage layer to identify and focus on vendors with identified security issues. You can find more information about how it is used in our posts on creating a process for supply chain security initiatives.
 
Clients are currently using this service for M&A, vendor supply chain security management and risk assessment, and to get an attacker’s eye view of their own networks or cloud deployments/hosted solutions.
 
To learn more about MSI’s passive assessment, please talk with your MSI account executive today!
 
 
 

An Example Control Matrix for Supply Chain Security

Per the examples in the last post, here is what the Control Matrix for Vendor Supply Chain Security might look like.
 
At the beginning of the document, you can define the audience, the authors, the update process and the process for handling exceptions. I usually also add a footer with relevant reference links to products/services/vendors and key terms used in the document.
 
The main content, of course, is the matrix itself, which usually looks something like this:
 
 
  • Critical Risk Vendors
    • Tier Criteria: Shared IIP that allows duplication of products, differentiator features or R&D; ANY outage of the vendor’s IT operations would harm JIT delivery or line manufacturing
    • Required Diligence: Any required regulatory document gathering (SAS70, PCI DSS, HIPAA, etc.); Monthly MSI passive assessment – MEDIUM or HIGH risk issues trigger FULL risk assessment & review of their security audits; MSI monitors the vendor list for Targeted Threat Intelligence and, if triggered, a formal incident response process is required from the vendor
    • Required Controls: As determined by your firm… All controls required – NO VARIANCE ALLOWED
  • High Risk Vendors
    • Tier Criteria: Shared non-critical IIP that allows feature replication or long-term damage to product/brand strategy or R&D; Protracted outage of the vendor’s IT operations could impact production
    • Required Diligence: Any required regulatory document gathering (SAS70, PCI DSS, HIPAA, etc.); Quarterly MSI passive assessment – HIGH risk issues trigger FULL risk assessment & review of their security audits
    • Required Controls: As determined by your firm… All controls required – NO VARIANCE ALLOWED
  • Routine Risk Vendors
    • Tier Criteria: IIP shared at this level represents a potential for reputational or regulatory impacts; Normal vendor level where data sharing occurs
    • Required Diligence: Any required regulatory document gathering (SAS70, PCI DSS, HIPAA, etc.); Yearly MSI passive assessment – HIGH risk issues trigger deeper risk assessment
    • Required Controls: As determined by your firm… Variance allowed by signed acceptance from steering committee or executive team
  • Low Risk Vendors
    • Tier Criteria: Data is not shared with this vendor and compromise of the vendor’s IT operations is unlikely to have any impact
    • Required Diligence: Peer review to validate tier eligibility; Contract language review; Financial fraud team validation
    • Required Controls: Only contractual controls and/or SLA required
 
As you can see, the matrix makes the entire program easy to discuss and demonstrate. The more clearly you can define the tiers, their required due diligence, their required controls and other data elements – the easier the process gets. 
 
We hope this helps you put together your own vendor tiering program and easily demonstrate it. If you would like more information about our passive assessment platform or Targeted Threat Intelligence (passive monitoring of vendor-related IOCs and security issues), please touch base with your account executive. Many of our clients are actively using and recommending these offerings for their supply chain security initiatives. We’d love to tell you more about it, so just let us know! 
 

Mapping Control Requirements to Vendor Tiers

Now that you have a proper tier structure set up for your vendors, we will discuss how to map controls to each of those tiers to create a control matrix that you can work from. This control matrix will serve as the basis for the vendor supply chain security effort – essentially providing a skeleton of the due diligence that you will perform for each vendor. Once this matrix is complete, you can use it to clearly and easily demonstrate the work that your organization does on supply chain security to any auditor or regulator who may ask to review it. In our experience, walking them through the matrix, along with providing a documented process that you follow to enforce it, will suffice to meet most regulatory requirements – assuming, of course, that you actually perform the work detailed in the matrix.
 
So – at a high level, how do we assign the controls? I usually start at the bottom of the stack of tiers and define the minimum controls first. Thus (referring back to the tier structure defined last time around):
  • Low Risk Vendors – What are the minimum steps we should perform for each vendor in this tier?
    • Controls Required: Scoping peer review to ensure that the criteria for this tier are met; contract and, when applicable, SLA review by the security team against established guidance & regulatory requirements; approval of the financial due diligence team to avert fraud, etc.
      • Comments: Since there are only isolated potentials for digital risk in this tier, we don’t need to perform cyber-security reviews and the like, or accumulate data we don’t need (which wastes time & resources, etc.). If, for example, this is a commodity or non-impactful application provider, we might review their contract for language around malware free deliverables, code security, patch/fix turnaround times, etc., as appropriate for each vendor and the service or good they provide.
  • Routine Risk Vendors – At this level, I try to think of the controls that I would want for just about any vendor that can impact us or our operations, but that isn’t capable of doing much beyond reputational or regulatory damage.
    • Controls Required: All of the controls of the lower level apply and are required, plus any control reviews required for regulatory compliance over PII that we share (SAS70, PCI-DSS compliance statements, etc.). At this stage, I would also really like some form of cyber-security assessment – in this case, MSI’s passive assessment (which can be run without the vendor’s knowledge or permission), run against them on a yearly basis with NO HIGH RISK issues identified. If a HIGH RISK issue is found, the vendor would be flagged and would need a formal technical review of their security controls, or even our traditional risk assessment process. Any deviation from the accepted controls would require a signed risk acceptance variance from a management team or steering committee, as an example.
      • Comments: Here, we are defining the basics. What do we need for most vendors that could hurt us? We try to keep the process as simple as possible, so that we can focus on the vendors that have a higher risk of actually hurting us and our business. The use of passive assessments here is a powerful new approach that reduces the number of full-fledged risk assessments we need to perform, along with the overhead of the paperwork and interactions required to complete the traditional risk assessment process.
  • High Risk Vendors – Here we build on the controls for the lower tiers to achieve a balance between workload and information security needs. We define a level that exceeds best practices and gives us more confidence in the vendors that could hurt us at a significant level.
    • Controls Required: All of the controls of the lower levels apply and are now definitely required (no variances accepted at this level for the basic controls defined for lower risk levels). In addition, we need ongoing assessment of the vendor’s security controls, so a passive assessment run with no HIGH RISK findings is now required on a quarterly basis. This helps us combat control drift and control entropy in the vendor’s security posture. If at any time a HIGH RISK issue is identified, a FULL and COMPREHENSIVE risk assessment is required as soon as possible. This risk assessment should include review of the vendor’s third-party risk assessments, vulnerability assessments & penetration tests (these should be provided to us by the vendor within 3 business days of the request). Failure to pass this risk assessment, failure to respond properly, or any significant issues identified that are not mitigated in a timely manner should result in financial and legal consequences for the vendor and their contract with our organization.
      • Comments: Again, we are trying to reduce the incidence of full risk assessments, so that we can focus our attention and limited resources on the vendors that can hurt us significantly and are in the worst security postures. Further, we create an incentive at this level for them to comply and respond rapidly.
  • Critical Risk Vendors – These are the vendors that can REALLY hurt us, so we spend a majority of our attention and resources here. 
    • Controls Required: All of the controls of the lower levels apply and are now definitely required (no variances accepted at this level for the basic controls defined for lower risk levels). Additionally, passive assessments are now monthly in frequency (or maybe even weekly, depending on your paranoia/risk tolerance). Ongoing monitoring of targeted threat intelligence data is also required – so we have MSI monitor social media, the public web, the deep web and the dark web for any events or indicators of compromise that might emerge and be related to our vendors in this tier. At this level, we also perform the full comprehensive risk assessment process on a yearly basis, in addition to the passive work of MSI. While this is tedious, we want to ensure that we have provided the utmost effort on these vendors that can truly hurt us at the most damaging of levels. We can now do this easily without taxing our resources, thanks to the tiering architecture and the focus points provided by MSI through our passive assessment and other services. Any MEDIUM or HIGH RISK issue flagged by MSI results in the immediate triggering of an update to the risk assessment process, notification of the vendor with a required response from their security team leadership, and the potential requirement for a formal incident response process from the vendor – which we manage by requiring the delivery of an incident response report and/or attestation by a third-party security firm that the situation was mitigated and that our IIP was protected. Failure to pass this risk assessment, failure to respond properly, or any significant issues identified that are not mitigated in a timely manner should result in SIGNIFICANT financial and legal consequences for the vendor and their contract with our organization.
      • Comments: Here we leverage ongoing monitoring and take the lead on watching for potential compromises, both for ourselves and for our vendors. Given the large percentage of breaches that are first reported by third parties, we no longer believe that the detection and response capabilities of any partner organization are strong enough, alone, to protect our IIP. Thus the increased due diligence and oversight for the vendors that can hurt us the worst. (A small sketch of how these escalation triggers might be encoded follows this list.)
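To make the escalation logic above easier to discuss and demonstrate, here is a minimal sketch (in Python) of how the tier frequencies and trigger thresholds described in this post might be encoded. The tier names, frequencies and thresholds mirror the examples above; the data structure and function names are hypothetical illustrations, not part of MSI’s tooling or any formal standard.

```python
# Hypothetical sketch of the tier-to-controls mapping described above.
# Tier names, assessment frequencies and trigger thresholds follow the
# examples in this post; nothing here is an actual MSI product or API.

SEVERITY = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

TIER_RULES = {
    "critical": {"passive_frequency": "monthly", "trigger_at": "MEDIUM", "variance_allowed": False},
    "high": {"passive_frequency": "quarterly", "trigger_at": "HIGH", "variance_allowed": False},
    "routine": {"passive_frequency": "yearly", "trigger_at": "HIGH", "variance_allowed": True},
    "low": {"passive_frequency": None, "trigger_at": None, "variance_allowed": True},
}

def required_action(tier: str, finding_severity: str) -> str:
    """Return the follow-up required when a passive-assessment finding is reported."""
    rules = TIER_RULES[tier]
    threshold = rules["trigger_at"]
    if threshold is None:
        return "contractual controls/SLA only; no passive assessment scheduled"
    if SEVERITY[finding_severity] >= SEVERITY[threshold]:
        return "full risk assessment and review of the vendor's security audits"
    return "continue " + rules["passive_frequency"] + " passive assessments"

# A MEDIUM finding at a critical-tier vendor triggers a full review,
# while the same finding at a routine-tier vendor does not.
print(required_action("critical", "MEDIUM"))
print(required_action("routine", "MEDIUM"))
```

Encoded this way, the same table can also drive reminders for the monthly, quarterly and yearly passive assessment schedule for each tier.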

As you can see, building from the ground up makes leveraging the tiering process easy and logical. In the next post we will show you an example controls matrix we use to demonstrate and discuss our vendor supply chain security process. Over the years, we have found the matrix to be a powerful, auditor/regulator friendly tool to show clearly and concisely the due diligence process for vendor supply chain security. We hope you find it useful as well. Stay tuned! 

Sorting Vendors into Tiers

Previously, we reviewed some ideas around vendor discovery and laid out an example workflow and process. We also defined some tools and approaches to use for the task.
 
Once you have the vendors in your supply chain identified, and have obtained and cataloged the relevant data, the next step we suggest is to tier the vendors into levels to make it easier to classify vendors into “object groups”. Once we have the vendors sorted into tiers, we will discuss how to assign required controls to each tier in an easy to manage manner. This greatly simplifies the processing of future vendors that are added to the supply chain, since you need only identify the tier they fit into and then use the control requirements for that tier as your basis for evaluation and risk assessment. 
 
Vendor tiering, done properly, also makes assigning vendors to a given tier trivial in the long term. Our approach, as you will see, provides very clear criteria for the levels, making it easy to add new vendors and simple to manage vendors who change status as the supply chain and product lines evolve.
 
In our suggested model, we have four tiers, as follows (using a product manufacturer as an example; other types of firms may obviously require alternate specific criteria, but this should serve to lay out the model for you to use as a baseline; a short classification sketch also follows the list):
 
  • Critical Risk Vendors
    • Criteria: Mission critical “information intellectual property” (IIP) assets are shared with this vendor, where the assets represent a significant portion of the market differentiator or research and development of a product line OR the vendor’s IT operations are critical to our just in time manufacturing or delivery model – that is – ANY outage of the vendor’s IT operations would cause an outage for us that would impact our capability to deliver our products to our customers
      • Examples: Compromise of the IIP data would allow duplication of our product(s) or significant replication of our research; Outages or tampering with the vendor IT operations would impact manufacturing line operations, etc.
  • High Risk Vendors
    • Criteria: Non-critical IIP assets are shared with this vendor such that, if those assets were compromised, the damage would be to our long-term product & brand strategies or research and development. Actual product replication would not be enabled, but feature replication might be possible. Outages of the vendor’s IT operations at this level, if protracted, could impact our research and development or our ability to deliver products to our customers.
      • Examples: A breach of this vendor’s network could expose the design specs for a specific part of the product. Compromise of the vendor could expose our future marketing plan for a product and some of the differentiating features that we plan to leverage. If the vendor’s IT operations were disabled for a protracted time (greater than 48, 72 or 96 hours, for example), our capability to deliver products could be impacted.
  • Routine Risk Vendors
    • Criteria: Non-critical IIP assets may be shared with this vendor tier, and compromise of that IIP may be damaging to our reputation. The IIP, if compromised, would not allow duplication of our product lines, research or product differentiators. In addition to reputational impacts, exposure of data that could affect our sales pipeline/process and/or other secondary systems or processes may be expected if breaches occur at this level. Regulatory or legally protected IIP also resides at this level.
      • Examples: Organizations where customer data, sales & marketing data, employee identification information, etc. are shared (outsourced payment, outsourced HR, etc.) are good examples here. This is the level of risk for any vendor that you share IIP with, in any form, that does NOT immediately empower delivery of your products or impact your longer term R&D efforts or market differentiators… 
  • Low Risk Vendors
    • Criteria: This tier is for vendors that we share NO IIP with, in any form, and that could not directly impact our product delivery via an IT operations outage in any way. Should these vendors experience a breach, it would result in little to no impact on the reputation or operating capabilities of our firm.
      • Examples: Caterers, business supply companies, temporary employment agencies, hardware and software vendors for non-manufacturing systems, commodity product or component dealers, packaging material suppliers, transport companies, etc.
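As a quick illustration of how clear-cut these criteria can become, here is a minimal classification sketch in Python. The attribute names are hypothetical labels for the criteria above (shared critical IIP, outage impact and so on), not part of any formal methodology, and real tier assignments should still get a peer review.

```python
# Hypothetical sketch: sorting a vendor into one of the four tiers described
# above, based on yes/no answers about shared IIP and IT-outage impact.

def classify_vendor(shares_critical_iip: bool,
                    any_outage_halts_delivery: bool,
                    shares_noncritical_iip: bool,
                    protracted_outage_impacts_delivery: bool,
                    shares_regulated_or_reputation_data: bool) -> str:
    if shares_critical_iip or any_outage_halts_delivery:
        return "Critical Risk Vendor"
    if shares_noncritical_iip or protracted_outage_impacts_delivery:
        return "High Risk Vendor"
    if shares_regulated_or_reputation_data:
        return "Routine Risk Vendor"
    return "Low Risk Vendor"

# Example: an outsourced HR provider holds employee PII but no product IIP
# and cannot halt manufacturing, so it lands in the Routine tier.
print(classify_vendor(False, False, False, False, True))
```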
 
Building such a tiered approach for your vendors creates an easy to manage way to prioritize them. The tiered approach will also be greatly useful in mapping groups of controls to the requirements for each tier. We will cover that in a future post, shortly. 

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface to interface rules or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are explained in detail in the linked Wikipedia article, so I won’t repeat that here. 
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the Layered model. This outright contradicts what the Wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — Wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to apply the concept that “more firewalls would need to be compromised to lead to internal network access”, I believe that it in fact reduces the overall security posture in the real world and increases risk. Here’s why I feel that way. Two real-world issues often make things that look great at first blush, or that “just work” in the lab environment, fall short in production: control complexity and entropy. Before we dig too deeply into those issues, though, let’s talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures, both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both of the models, traffic from the DMZ segment(s) passes through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can adequately occur in both models. 
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set does not have cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, analysis at a holistic level is less complex.
 
In the Layered model, the controls are contained across two separate instances, each with different goals, roles and enforcement requirements. However, the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and failures at either firewall to adequately control traffic or adequately design the rule sets could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk. 
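To make the interdependency point concrete, here is a small Python sketch that models the same intended flow – a DMZ application server reaching an internal database – under each design. The host names, port and rules are invented for illustration; the point is simply that, in the Layered model, one intended flow has to be expressed correctly in two separate rule sets, so a mistake on either device changes the end-to-end result.

```python
# Illustrative sketch: the same intended flow (DMZ app server -> internal DB)
# expressed as one unified rule set (3-legged model) versus two interdependent
# rule sets (Layered model). Hosts, port and rules are invented examples.

def allowed(flow, ruleset):
    """A flow is permitted only if some rule in the rule set matches it exactly."""
    return any(rule == flow for rule in ruleset)

flow = ("dmz-app-01", "internal-db-01", 5432)

# 3 Legged model: one rule set, one place to get the flow right (or wrong).
single_fw = [("dmz-app-01", "internal-db-01", 5432)]
print(allowed(flow, single_fw))  # True

# Layered model: the outer and inner firewalls must BOTH express the flow
# correctly; an error in either rule set silently changes the end-to-end result.
outer_fw = [("dmz-app-01", "internal-db-01", 5432)]
inner_fw = [("dmz-app-01", "internal-db-01", 5433)]  # mistyped port on one device
print(allowed(flow, outer_fw) and allowed(flow, inner_fw))  # False
```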
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” —DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 confirmed breaches, it means that a staggering 76 breaches were the result of misconfigurations.
 
This is exactly why control complexity matters. Since control complexity correlates with misconfiguration and human error directly, when complexity rises, so does risk – conversely, when controls are simplified, complexity falls and risk of misconfiguration and human error is reduced.
 
Not to beat on the Wikipedia article and Stuart Jacobs’ assertions, but further compounding the complexity of his suggestion is the use of multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, the complexity, resources required and oversight needs of this implementation approach grow rapidly as new devices and alternate connections are added. 
 
So, which is less complex, a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both of our models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness. Many times this is referred to as “control drift” or “configuration drift”. In our case, the control drift over a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, each firewall has a distinct rule set that will degrade – BUT the two rule sets are interdependent – giving an effective score of 2 for each firewall. Thus, you can easily see that as each firewall’s rule set degrades, the risk to the private network increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster and is likely to have more impact on risk. Again, add the additional complexity of different types of firewalls and distinct vendors for each, and the entropy will simply eat you alive…
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host in the DMZ that can talk to the internal network in a manner you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices designed for security and detection. Most attackers ignore the firewalls as a target and instead attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Old School Google Hacking Still Works…

Did some old school Google hacking last night.

“Filetype:xls & terms” still finds too much bad stuff.

Checked for it lately for your organization?

Try other file types too. (doc/ppt/pdf/rtf, etc.)
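If you want to repeat the exercise, here is a minimal sketch of the kinds of queries involved. The domain and search terms below are placeholders; substitute your own organization’s domain and the terms that would matter to you.

```python
# Minimal sketch: building "old school" search queries for an information
# leakage check. The domain and terms are placeholders; substitute your own.

domain = "example.com"
filetypes = ["xls", "xlsx", "doc", "docx", "ppt", "pdf", "rtf"]
terms = ['"internal use only"', '"confidential"', 'password']

for filetype in filetypes:
    for term in terms:
        print("site:{} filetype:{} {}".format(domain, filetype, term))
```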

Information leakage happens today, as it always has. Keeping an eye on it should be a part of your security program.

How to Use Risk Assessment to Secure Your Own Home

Risk assessment and treatment is something we all do, consciously or unconsciously, every day. For example, when you look out the window in the morning before you leave for work, see that the sky is gray and decide to take your umbrella with you, you have just assessed and treated the risk of getting wet in the rain. In effect, you have identified a threat (rain) and a vulnerability (you are subject to getting wet), you have analyzed the possibility of occurrence (likely) and the impact of threat realization (having to sit soggy at your desk), and you have decided to treat that risk (taking your umbrella). That is risk assessment.

However, this kind of risk assessment is what is called ad hoc. All of the analysis and decision making you just made was informal and done on the fly. Pertinent information wasn’t gathered and factored in, other consequences such as the bother of carrying the umbrella around weren’t properly considered, other treatment options weren’t considered, etc. What business concerns and government agencies have learned from long experience is that if you investigate, write down and consider such factors rationally and holistically, you end up with a more realistic idea of what you are really letting yourself in for, and therefore you make better risk decisions. That is formal risk assessment.

So why not apply this more formal risk assessment technique to important matters in your own life, such as securing your home? It’s not really difficult, but you do have to know how to go about it. Here are the steps:

1. System characterization: For home security, the system you are considering is your house, its contents, the people who live there, the activities that take place there, etc. Although you know these things intimately, it never hurts to write them down. Something about viewing information on the written page helps clarify it in our minds.

2. Threat identification: In this step you imagine all the things that could threaten the security of your home and family. These would be such things as fire, bad weather, intruders, broken pipes, etc. For this (and other steps in the process), you can go beyond your own experience and see what threats other people have identified (e.g. Google searches, insurance publications).

3. Vulnerability identification: This is where you pair up the threats you have just identified with weaknesses in your home and its use. For example, perhaps your house is located on low ground that is subject to flooding, or you live in a neighborhood where burglaries may occur, or you have old ungrounded electrical wiring that may short and cause a fire. These are all vulnerabilities.

4. Controls analysis: Controls analysis is simply listing the security mechanisms you already have in place. For example, security controls used around your home would be such things as locks on the doors and windows, alarm systems, motion-detecting lighting, etc.

5. Likelihood determination: In this step you decide how likely it is that the threat/vulnerability pair will actually occur. There are really two ways you can make this determination. One is to make your best guess based on knowledge and experience (qualitative judgement). The second is to do some research and calculation and try to come up with actual percentage numbers (quantitative judgement). For home purposes I definitely recommend qualitative judgement. You can simply rate the likelihood of occurrence as high, medium or low.

6. Impact analysis: In this step you decide what the consequences of threat/vulnerability realization will be. As with likelihood determination, this can be judged quantitatively or qualitatively, but for home purposes I recommend looking at worst-case scenarios. For example, if someone broke into your home, it could result in something as low impact as minor theft or vandalism, or it could result in very high impact such as serious injury or death. You should keep these more dire extremes in mind when you decide how you are going to treat the risks you find.

7. Risk determination: Risk is determined by factoring in how likely threat/vulnerability realization is with the magnitude of the impact that could occur and the effectiveness of the controls you already have in place. For example, you could rate the possibility of home invasion occurring as low and the impact of the occurrence as high. This would make your initial risk rating a medium. Then you factor in the fact that you have an alarm system and un-pickable door locks in place, which would lower your final risk rating to low. That final rating is known as residual risk. (A small sketch of this arithmetic appears after the last step below.)

8. Risk treatment: That’s it! Once you have determined the level of residual risk, it is time to decide how to proceed from there. Is the risk of home invasion low enough that you think you don’t need to apply any other controls? That is called accepting risk. Is the risk high enough that you feel you need to add more security controls to bring it down? That is called risk limitation or remediation. Do you think that the overall risk of home invasion is just so great that you have to move away? That is called risk avoidance. Do you not want to treat the risk yourself at all, and so you get extra insurance and hire a security company? That is called risk transference.
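If you would like to see the qualitative arithmetic from steps 5 through 8 in one place, here is a tiny sketch. The ratings and the one-step reduction for effective controls are illustrative simplifications for home use, not a formal risk methodology.

```python
# Tiny sketch of the qualitative rating arithmetic described above.
# Likelihood and impact are rated low/medium/high; effective existing
# controls knock the combined rating down one step (an illustrative rule).

LEVELS = ["low", "medium", "high"]

def combine(likelihood, impact):
    """Average the two ratings, rounding halves up."""
    score = (LEVELS.index(likelihood) + LEVELS.index(impact) + 1) // 2
    return LEVELS[score]

def residual_risk(likelihood, impact, controls_effective):
    initial = combine(likelihood, impact)
    if controls_effective:
        return LEVELS[max(0, LEVELS.index(initial) - 1)]
    return initial

# Home invasion example from the post: low likelihood + high impact -> medium
# initial risk; an alarm system and good locks bring the residual risk to low.
print(residual_risk("low", "high", controls_effective=True))
```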

So, next time you have to make a serious decision in your life such as changing jobs or buying a new house, why not apply the risk assessment process? It will allow you to make a more rational and informed decision, and you will have the comfort of knowing you did your best in making the decision. 

Thanks to John Davis for this post.

How to Avoid Getting Phished

It’s much easier for an attacker to “hack a human” than “hack a machine”.  This is why complicated attacks against organizations often begin with the end user.  Although e-mails with malicious links or attachments are often dismissed and referred to as “spam”, these messages are often the beginning of a sophisticated hack against a company.  Unfortunately there is no “silver bullet” that can prevent these attacks from taking place.
 
I recently had the opportunity to give a presentation during one of our client’s all-staff meetings.  Despite the fact that our client’s company resides in a relatively niche market, I was able to discuss several data breaches that took place in their industry within the last year.  Not only did the hacks all take place recently, they were all the direct result of actions taken by an end-user.  A majority of these attacks were caused by an employee opening a malicious e-mail.  I gave our customer the following advice to help them avoid becoming a victim of phishing e-mails and felt that it was worth sharing on StateOfSecurity.com.
 
Verify link URL:  If the e-mail you received contains a link, does the website URL match up with the content of the message?  For example, if the e-mail indicates you are about to visit a website for FedEx, is the address actually FedEx.com?  A common tactic used by attackers is to direct a user to a similar URL or IP address.  An example of this would be to direct the user to FedEx111.com or FedEx.SE as opposed to the organization’s actual URL.
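For the technically inclined, here is a small Python sketch of the kind of check described above: pull the hostname out of the link and compare it to the domain you expect. It is a simplified illustration – a real check would also account for lookalike characters, URL shorteners and redirects – and the URLs shown are made-up examples.

```python
# Simplified sketch of the "verify the link URL" advice above. A real check
# would also handle lookalike characters, shorteners and redirects; this only
# shows the basic hostname-vs-expected-domain comparison. URLs are made up.

from urllib.parse import urlparse

def looks_like(url, expected_domain):
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    # Accept the expected domain itself or a true subdomain of it.
    return host == expected or host.endswith("." + expected)

print(looks_like("https://www.fedex.com/track?id=123", "fedex.com"))   # True
print(looks_like("http://fedex111.com/track", "fedex.com"))            # False
print(looks_like("http://fedex.com.evil.example/track", "fedex.com"))  # False
```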
 
Verify e-mail address of sender: If the e-mail message you received came from a friend, colleague or vendor, did it actually come from their e-mail address?  It’s worthwhile to take a few extra seconds to ensure that the e-mail actually came from the aforementioned colleague, friend or vendor.  Also, avoid opening e-mails from generic senders such as “Systems Administrator” or “IT Department”.
 
Exercise caution from messages sent by unknown senders: Be cautious if a message comes from an unknown sender.  Would you provide your checking account number or password to a random person that you saw on the street?  If not, then don’t provide confidential information to unknown senders.
 
Follow up with a phone call: In the event you receive a message requesting that you validate information or need to reset your password, take some time to follow up with the sender with a phone call.  Trust me, your IT department will be happy to spend a few seconds confirming or denying your request as opposed to dealing with a malware infection.  Also, if your “bank” sends any type of e-mail correspondence requesting that you perform some sort of action, it’s worthwhile to give them a call to confirm their intentions.  Always be sure to use a number that you found from another source outside of the e-mail.
 
Spot check for spelling/grammar errors: It is extremely common for malicious e-mails to contain spelling mistakes or grammatical errors.  Treat these errors as strong indicators that you may have received a malicious e-mail.
 
Do not open random attachments: If an e-mail message meets any of the above criteria, DO NOT open the attachment to investigate further.  Typically these attachments or links are the actual mechanism for delivering malware to your machine.
 
This blog post by Adam Luck.

Mergers and Acquisitions: Look Before You Leap!

Mergers and acquisitions are taking place constantly. Companies combine with other companies (either amicably or forcibly) to fill some perceived strategic business need or to gain a foothold in a new market. M&As are most often driven by individual high-ranking company executives, not by the company as a whole. If successful, such deals can be the high point in a CEO’s career. If unsuccessful, they can lead to ignominy and professional doom.

Of course, this level of risk/reward is irresistible to many at the top, and executives are constantly on the lookout for companies to take over or merge with. And the competition is fierce! So when they do spot a likely candidate, these individuals are naturally loath to hesitate or over-question. They want to pull the trigger right away, before conditions change or someone else beats them to the draw. Because of this, deal-drivers often limit their research of the target company to surface information that lacks depth and scope, but that can be gathered relatively quickly.

However, it is an unfortunate fact that just over half of all M&As fail. And one of the reasons this is true is that companies fail to gain adequate information about their acquisitions, the people that are really responsible for their successes and the current state of the marketplace they operate in before they negotiate terms and complete deals. Today more than ever, knowledge truly is power; power that can spell the difference between success and failure.

Fortunately, technology and innovation continue to march forward. MSI’s TigerTrax™ intelligence engine can provide the information and analysis you need to make informed decisions, and it can get them to you fast. TigerTrax™ can quickly sift through and analyze multiple sources and billions of records to provide insights into the security posture and intellectual property integrity of the company in question. It can also be used to provide restricted individual tracing, supply chain analysis, key stakeholder profiling, history-of-compromise research and a myriad of other services. So why not take advantage of this boon and look before you leap into your next M&A? 

This post courtesy of John Davis.

Tips for Writing Good Security Policies

Almost all organizations dread writing security policies. When I ask people why this process is so intimidating, the answer I get most often is that the task just seems overwhelming and they don’t know where to start. But this chore does not have to be as onerous or difficult as most people think. The key is pre-planning and taking one step at a time.

First you should outline all the policies you are going to need for your particular organization. Now, this step itself is what I think intimidates people most. How are they supposed to ensure that they have all the policies they should have, without going overboard and burdening the organization with too many or overly restrictive policies? There are a few steps you can take to answer these questions:

  • Examine existing information security policies used by other, similar organizations and open source information security policy templates such as those available at SANS. You can find these easily online. However, you should resist simply copying such policies and adopting them as your own. Just use them for ideas. Every organization is unique and security policies should always reflect the culture of the organization and be pertinent, usable and enforceable across the board.
  • In reality, you should have information security policies for all of the business processes, facilities and equipment used by the organization. A good way to find out what these are is to look at the organization’s business impact analysis (BIA). This most valuable of risk management studies will include all essential business processes and equipment needed to maintain business continuity. If the organization does not have a current BIA, you may have to interview personnel from all of the different business departments to get this information. 
  • If the organization is subject to information security or privacy regulation, such as financial institutions or health care concerns, you can easily download all of the information security policies mandated by these regulations and ensure that you include them in the organization’s security policy. 
  • You should also familiarize yourself with the available information security guidance such as ISO 27002, NIST 800-35, the Critical Security Controls for Effective Cyber Defense, etc. This guidance will give you a pool of available security controls that you can apply to fit your particular security needs and organizational culture.

Once you have the outline of your security needs in front of you, it is time to start writing. You should begin with broad-brush-stroke, high-level policies first and then add detail as you go along. Remember, information security “policy” really includes policies, standards, guidelines and procedures. I’ve found it a very good idea to write “policy” in just that order.

Remember to constantly refer back to your outline and to consult with the business departments and users as you go along. It will take some adjustments and rewrites to make your policy complete and useable. Once you reach that stage, however, it is just a matter of keeping your policy current. Review and amend your security policy regularly to ensure it remains useable and enforceable. That way you won’t have to go through the whole process again!

Thanks to John Davis for this post.