Supply Chain Security Insights

Supply chain attacks are one of the most common cyber threats faced by organizations. They are costly and disruptive, often resulting in lost revenue and customer trust.

In this article, we’ll discuss five insights about supply chain attacks that all supply chain management and information security teams should be aware of.

#1. Supply Chains Can Be Vulnerable

Supply chains are complex networks of companies, suppliers, customers, and partners that provide goods and services to each other.

They include manufacturers, distributors, retailers, service providers, logistics providers, and others.

These entities may interact directly or indirectly via intermediaries such as banks, insurance companies, payment processors, freight forwarders, customs brokers, etc.

Supply chains are vulnerable to attack precisely because they involve so many parties and interactions between them. Each organization in the chain has its own risk profile, security posture, and business model, which creates a complex environment for security risk. Attackers can target any part of the supply chain and often focus on the weakest link, whether that is a manufacturing facility, distribution center, warehouse, transportation hub, or retail store.

Attackers can disrupt operations, steal intellectual property, damage reputation, and cause losses in revenue and profits.

#2. Supply Chain Security Must Include All Stakeholders

Supply chain security involves protecting against threats across the entire value stream. This means securing data, processes, systems, physical assets, personnel, and technology.

It also requires integrating security practices and technologies across the entire organization.

This includes ensuring that information sharing occurs among stakeholders, that employees understand their roles and responsibilities, and that policies and procedures are followed.

Security professionals should collaborate closely with executives, managers, and staff members to ensure that everyone understands the importance of security and has ownership over its implementation.

#3. Supply Chain Security Requires Ongoing Monitoring and Maintenance

Supply chain security is not a one-time project; it requires ongoing monitoring and maintenance.

An effective approach is to continuously monitor the status of key indicators, assess risks, identify vulnerabilities, and implement countermeasures.

For example, an attacker could attempt to compromise sensitive data stored in databases, websites, mobile apps, and other locations.

To prevent these incidents, security teams should regularly review logs, audit reports, and other intelligence sources to detect suspicious activity.

They should also perform penetration tests, vulnerability scans, and other assessments to uncover potential weaknesses.
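
To make the log-review piece concrete, here is a minimal sketch (my own illustration, not from the original post) of automated log review in Python. The log path, message format, and alert threshold are all assumptions; adapt them to your environment.

```python
import re
from collections import Counter

# Hypothetical syslog-style auth log format and alert threshold.
# Adjust both to match your actual environment.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
THRESHOLD = 10  # alert when a single source IP exceeds this many failures

def review_auth_log(path):
    """Count failed logins per source IP and flag noisy sources."""
    failures = Counter()
    with open(path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                user, src_ip = match.groups()
                failures[src_ip] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

if __name__ == "__main__":
    suspicious = review_auth_log("/var/log/auth.log")  # assumed path
    for ip, count in sorted(suspicious.items(), key=lambda kv: -kv[1]):
        print(f"ALERT: {count} failed logins from {ip}")
```

A real deployment would feed results like these into a SIEM or ticketing workflow rather than printing them, but the loop of collect, correlate, and alert is the same.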

#4. Supply Chain Security Requires Collaboration Across Organizations

A single department cannot manage supply chain security within an organization.

Instead, it requires collaboration across departments and functional areas, including IT, finance, procurement, human resources, legal, marketing, sales, and others.

Each stakeholder must be responsible for maintaining security, understanding what constitutes acceptable behavior, and implementing appropriate controls.

Collaborating across organizational boundaries helps avoid silos of knowledge and expertise that can lead to gaps in security awareness and training.

#5. Supply Chain Security Is Critical to Organizational Success

Organizations that fail to protect their supply chains face significant financial consequences.

One frequently cited study estimated that supply chain breaches cost United States businesses $6 trillion annually.

That is equivalent to roughly 6 to 7% of annual global GDP.

Supply chain attacks can result in lost revenues, damaged reputations, and increased costs.

Companies that invest in supply chain security can significantly improve operational efficiency, productivity, profitability, and brand image.

Three Things I’ve Learned About Credit Union Risk Management

I have been working with credit unions for more than 20 years, doing a wide variety of information security and risk management work with their technical teams, management, and boards. Here are three things I’ve learned about how CUs manage risk during that time.

1) Most credit unions I’ve worked with care as much about information security as, if not more than, the regional-size banks they often compete with.

I’ve heard more than one CU leader tell me that credit unions have to be better than the banks: when a bank gets hacked, that bank makes the news and feels the impact, but when a credit union gets hacked, all credit unions suffer from the bad press. I am not sure the data supports that claim, but it’s an example of how CUs often focus on working together to solve big problems, and of the attention to detail they put into doing so.

2) Many of the credit unions I have worked with treat information security and threat awareness as something they can offer to their members (“customers,” in bank-speak).

More than a few CUs have engaged so deeply with their members on phishing and identity theft that they include them in discussions about what products and services the CU buys. They run trials, include members in beta tests, and I’ve even seen them hold onsite training on new multi-factor authentication tools, including ones that weren’t in use at the CU, just to help make members more secure and reduce the threat of password re-use across personal sites.

3) The board is often more involved in the risk management process at my CU clients than at my banking clients.

The NCUA has taken a lot of steps to increase board member awareness about information security, and it often shows at credit unions. Several times a year, I am asked to present threat updates or review a CU’s information security program specifically with a board presentation in mind. I am often engaged as a third party to spend a couple of days looking at a security program and reporting to the board on its maturity and areas of potential improvement.

During these board sessions, it is not uncommon for board questions to continue for more than an hour after the presentation has ended. The point is, most CU boards I have worked with are deeply engaged in thinking about risk management at the credit union.

For those of you interested in more about risk management at credit unions, here are some of the best sources, which I refer to often in my presentations:

  • Credit unions also face such internal risks as internal fraud, legal and regulatory noncompliance, data breaches, and injuries to staff and visitors. (boardeffect.com)
  • The bottom line: figuring out the risk appetite will help guide credit unions to create realistic and measurable risk guidelines. (visibleequity.com)
  • We have helped credit unions develop risk appetite statements and risk frameworks and can work with your credit union to develop the documentation you require. (creditunionupdate.com)

If you’d like to learn more about MSI and our work with credit unions, just drop us a line (info@microsolved.com) or give us a call (614-351-1237) and we’d be happy to talk about how we might be able to help your credit union excel in IT risk management.

A Quick Expert Conversation About Gap Assessment

Gap Assessment Interview with John Davis

What follows is a quick interview session with John Davis, who leads the risk assessment/policy/process team at MicroSolved. We completed the interview in January of 2020, and below are the relevant parts of our conversation.

Brent Huston: “Thanks for joining me today, John. Let’s start with what a gap assessment is in terms of HIPAA or other regulatory guidance.”

John Davis: “Thanks for the chance to talk about gap assessment. I have run into several HIPAA-covered organizations, such as hospitals and health systems, that do a HIPAA gap analysis / gap assessment in lieu of a HIPAA risk assessment. Admittedly, gap assessment makes up the bulk of a risk assessment; however, a gap assessment does not go to the point of assigning a risk rating to the gaps found. It also doesn’t go to the extent of addressing other risks to PHI that aren’t covered in HIPAA/HITECH guidance.”

BH: “So, in some ways, the gap assessment is more of an exploratory exercise – certainly providing guidance on existing gaps, but faster and more affordable than a full risk assessment? Like the 80/20 approach to a risk assessment?”

John Davis: “I suppose so, yes. The price is likely less than that of a full-blown risk assessment, given that there is less analysis and reporting work for the assessment team. It’s also a somewhat faster engagement, since the deep details of performing risk analysis aren’t a part of it.”

BH: “Should folks interested in a gap assessment consider adding any technical components to the work plan? Does that combination ever occur?”

JD: “I can envision a gap assessment that also includes vulnerability assessment of networks and applications, and don’t get me wrong, I think there is immense value in that combination. Adding a vulnerability assessment lets you gauge how well the policies and processes in place are actually working in day-to-day, real-world operations.”

BH: “Can you tie this back up with what a full risk assessment contains, in addition to the gap assessment portion of the work plan?”

JD: “Sure! Real risk assessment includes controls and vulnerability analysis as regular parts of the engagement. But more than that, a complete risk assessment also examines threats and possibilities of occurrence. So, in addition to the statement of the gaps and a roadmap for improvement, you also get a much more significant and accurate view of the data you need to prioritize and scope many of the changes and control improvements needed. In my mind, it also gets you a much greater view of potential issues and threats against PHI than what may be directly referenced in the guidance.” 

BH: “Thanks for clarifying that, John. As always, we appreciate your expert insights and experience.”

JD: “Anytime, always happy to help.”

If you’d like to learn more about a gap assessment, a vulnerability assessment or a full-blown risk assessment against HIPAA, HITECH or any other regulatory guidance or framework, please give us a call at (614) 351-1237, or you can click here to contact us via a webform. We look forward to hearing from you. Get in touch today!

Tips on Reducing Human Error Risks

One of the largest risks organizations face is human error. The outcomes of human error show up in security, architecture, business operations, IT and non-IT projects, and beyond; the list goes on and on. You can read more about the impacts of human error on infosec here and here.

It’s important to understand some of the reasons why these errors occur, especially when critical projects or changes are being considered.

Some of the high level things to think about:

  • Physical fatigue – this is likely the leading cause of human error. Workers may not be getting enough sleep or downtime, especially during critical projects when stress and demands are high, to say nothing of the demands of their personal lives. Organizations should allow key resources adequate downtime to reduce errors during critical projects.
  • Decision fatigue – the more decisions someone has to make, the worse their decisions get over time. Just as with physical fatigue, preserving the decision-making capability of key resources should be a consideration during critical projects.
  • Lack of time on task – in many organizations, key personnel on critical projects are called to meeting after meeting to discuss, plan or execute parts of the project. When this minimizes their time on task for the research, work or development the project requires, quality suffers; at the very least, it aggravates the fatigue problems above. Organizations should protect key resources’ time on task to raise the quality of their work during critical projects.
  • Lack of peer review – peer review is an essential control for human error, since it can catch such usual conditions as typos, missing words, simple mistakes in logic, etc. Critical projects should always include several layers of peer review to ensure a higher-quality process and outcome.
  • Lack of preparation for failure – many critical projects suffer from this form of error because people assume their plans will succeed. In reality, failure occurs often, and the more complex the systems or plans, the more likely it is. Have a contingency plan in place to prevent emotional decisions, which can deeply impact quality and successful outcomes.

There are many other issues around human error in critical projects and even more in day to day operations. But, these seem to be the most prevalent and immediate issues we see around critical projects with clients in the last few years. 

How does your team manage human errors? What controls have you implemented? Share with us on Twitter (@microsolved, @lbhuston) and we may write about it in future posts. As always, thanks for reading! 

80/20 Rule of Information Security

After my earlier post about the SDIM project, several people on Twitter also asked me to do the same for the 80/20 Rule of Information Security project we completed several years ago.

It is a list of key security projects, their regulatory mappings, maturity models and the like. It is great for building a program or checking yours against an easy-to-use baseline.

Thanks for reading, and if you’d like to learn more about the 80/20 project, click here.

3 Reasons Your Supply Chain Security Program Stinks

  1. Let’s face it, supply chain security and vendor risk management is just plain hard. There are a lot of moving pieces: companies, contacts, agreements, SLAs, metrics, reporting, etc. Suppliers also change frequently; they have their own mergers and acquisitions, get replaced over price changes or quality issues, new suppliers are added to support new product lines, and old vendors go away as their product lines become obsolete. Among all of that sits cyber security. MSI has a better and faster way forward: an automated way to reduce the churn, a way to get a concise, easy-to-use and manageable view of your vendors’ security posture. This month, we will show you what we have been doing in secret for some of the largest companies in the world… 
  2. Vendors with good security postures often look the same as vendors with dangerous security postures, on paper at least. You know the drill – review the contracts, maybe they send you an audit or scan report (often aged), maybe they do a questionnaire (if you’re lucky). You get all of this – after you chase them down and hound them for it. You hope they were honest. You hope the data is valid. You hope they are diligent. You hope they stay in the same security posture or improve over time, and not the opposite. You hope for a lot. You just don’t often KNOW, and what most companies do know about their vendors is often quite old in Internet terms, and can be far afield from where their security posture is at the moment. MSI can help here too. This month, we will make our passive assessment tool available to the public for the first time. Leveraging it, you will be able to rapidly, efficiently and definitively get a historic and current view of the security posture of your vendors, without their permission or knowledge, with as frequent updates as you desire. You’ll be able to get the definitive audit of their posture, from the eyes of an attacker, in a variety of formats – including direct data feeds back into your GRC tools. Yes, that’s right – you can easily differentiate between good and bad security AND put an end to data entry and keyboarding sessions. We will show you how… 
  3. Supply chain security via manual processes just won’t scale. That’s why we have created a set of automated tools and services to help organizations do ongoing assessments of their entire supply chain. You can even sort your supply chain vendors by criticality or impact, and assign more or less frequent testing to those groups (a small sketch of that tiering logic follows this list). You can get written reports suitable for auditors, or, as we wrote above, data feeds back to your GRC tools directly. We can test tens of vendors or thousands of vendors – whatever you need to gain trust and assurance over your supply chain vendors. The point is, we built workflows, methodologies, services and tools that scale to the largest companies on the planet. This month, we will show you how to solve your supply chain security problems.
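
As a minimal, hypothetical sketch of that tiering idea (the tier names, intervals and vendor names below are illustrative assumptions, not MSI’s actual tooling), sorting vendors by criticality and assigning an assessment cadence might look like this:

```python
from dataclasses import dataclass

# Hypothetical criticality tiers mapped to an assessment cadence
# (days between passive assessments). Names and intervals are
# illustrative only; tune them to your own risk appetite.
CADENCE_DAYS = {"critical": 30, "high": 90, "medium": 180, "low": 365}

@dataclass
class Vendor:
    name: str
    criticality: str  # one of the CADENCE_DAYS keys

def build_schedule(vendors):
    """Group vendors by tier and attach an assessment interval to each."""
    schedule = {}
    for v in vendors:
        interval = CADENCE_DAYS.get(v.criticality, 365)
        schedule.setdefault(v.criticality, []).append((v.name, interval))
    return schedule

# Example vendors (made up for illustration).
vendors = [
    Vendor("Acme Logistics", "critical"),
    Vendor("Widget Parts Co", "medium"),
    Vendor("Paper Supplier", "low"),
]
for tier, entries in build_schedule(vendors).items():
    for name, days in entries:
        print(f"{name}: reassess every {days} days ({tier})")
```

The point of the sketch is the shape of the workflow: criticality drives frequency, and the schedule becomes data you can feed to reporting or GRC tooling rather than a spreadsheet you maintain by hand.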
 
If you would like a private sneak-peek briefing on our research and the work we have done on this issue, please get in touch with your account executive, drop us a line via info (at) microsolved /dot/ com, call us at (614) 351-1237 or click the request a quote button at the top of our website – http://microsolved.com. We’ll be happy to sit down and walk through it with you. 
 
If you prefer to learn more throughout March – stay tuned to https://stateofsecurity.com for more to come. Thanks for reading! 

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface-to-interface rules, or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are explained in detail in the linked Wikipedia article, so I won’t repeat that here. 
 
I fully believe that the “3 Legged Model” is a lower-risk implementation than the Layered model. This outright contradicts what the Wikipedia article states: 
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — Wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to apply the principle that more firewalls would need to be compromised to reach the internal network, I believe that in the real world it actually reduces the overall security posture and increases risk. Here’s why I feel that way. Two real-world issues often give designs that look great at first blush, or that “just work” in the lab, significant disadvantages in production: control complexity and entropy. Before we dig too deeply into those issues, though, let’s talk about how the two models are similar. (Note that we are assuming the firewalls themselves are equally hardened and monitored – i.e., that they have adequate and equal security postures both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s), where traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can occur adequately in either model. 
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high-availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set has no cascading dependencies on other firewall implementations, and if it is well designed and implemented, holistic analysis is less complex.
 
In the Layered model, the controls are spread across two separate instances, each with different goals, roles and enforcement requirements, and the rule sets are interdependent. Traffic must be controlled through a holistic approach spread across the devices, and a failure at either firewall to adequately control traffic, or an inadequately designed rule set at either, can cause cascading unintended results. The complexity of managing these rules across devices with different rule sets, capabilities, goals and roles is significantly greater than in a single control instance. Many studies have shown that increased control complexity results in more human error, which in turn contributes to higher levels of risk. 
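 
To make the interdependence concrete, here is a toy model (my own sketch, not vendor rule syntax or anything from the original article) in which a flow is only permitted end-to-end when every firewall in the path permits it. Notice how auditing the layered case requires reasoning over both rule sets at once:

```python
# Toy model: a rule permits a (src_zone, dst_zone, port) flow.
# In the 3-legged model a single rule set decides; in the layered
# model the effective policy is the intersection of two sets that
# are designed and managed separately.

def allowed(rule_set, flow):
    return flow in rule_set

# Single firewall: one place to look, one place to audit.
single = {("dmz", "internal", 1433)}

# Layered: outer and inner firewalls each hold part of the policy.
outer = {("dmz", "internal", 1433), ("dmz", "internal", 445)}  # drifted: extra rule
inner = {("dmz", "internal", 1433)}

flow = ("dmz", "internal", 445)
print(allowed(single, flow))                          # False: one decision point
print(allowed(outer, flow) and allowed(inner, flow))  # False, but only because
# the inner firewall still blocks it. The outer rule set has already
# drifted, and an auditor must now reason over both sets together to
# recover the intended policy.
```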
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” — DBIR
 
Specifically, device misconfiguration was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 breaches, it means a staggering 76 breaches were the result of misconfigurations.
 
This is exactly why control complexity matters. Since control complexity correlates directly with misconfiguration and human error, risk rises as complexity rises; conversely, when controls are simplified, complexity falls and the risk of misconfiguration and human error is reduced.
 
Not to beat on the Wikipedia article and Stuart Jacobs’ assertions, but his suggestion is further compounded by the use of multiple types of firewalls managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread it across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed-service vendor, and try to keep both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, this implementation approach would grow in complexity, required resources and oversight needs geometrically as new devices and alternate connections are added. 
 
So, which is less complex: a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – or a control implemented across multiple devices, with multiple rule sets, that requires monitoring, management and enforcement across interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency of all things to break down? You know what I am about to point out, though, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both of our models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness; this is often referred to as “control drift” or “configuration drift”. In our case, drift over a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, each firewall has a distinct rule set that will degrade, BUT the rule sets are interdependent, giving an effective score of 2 for each firewall. Thus, you can easily see that as each firewall’s rule set degrades, the risk to the private network increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster and is likely to have more impact on risk. Again, add the complexity of different firewall types with distinct vendors for each, and the entropy will simply eat you alive…
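 
Here is a toy simulation of that drift argument (my own illustrative sketch; the monthly error probability is an arbitrary assumption, not measured data). With the same per-rule-set drift rate, two interdependent rule sets simply accumulate misconfigurations faster than one:

```python
import random

def simulate_drift(rule_sets, months, p_error_per_set=0.1, seed=42):
    """Count accumulated misconfigurations across interdependent rule sets.

    Toy assumption: each rule set independently picks up an erroneous or
    overly broad rule with probability p_error_per_set each month, and no
    one notices or cleans them up.
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(months):
        errors += sum(rng.random() < p_error_per_set for _ in range(rule_sets))
    return errors

# Three years of drift under each model.
print("3-legged (1 rule set): ", simulate_drift(1, 36))
print("layered  (2 rule sets):", simulate_drift(2, 36))
```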
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host in the DMZ that can talk to the internal network in a manner you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices designed for security and detection. Most attackers ignore the firewalls as a target and instead attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage, and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the two implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a more rapid pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Podcast Episode 8 is Out

This time around we riff on Ashley Madison (minus the morals of the site), online privacy, OPSec and the younger generation with @AdamJLuck. Following that is a short segment with John Davis. Check it out and let us know your thoughts via Twitter – @lbhuston. Thanks for listening! 


State Of Security Podcast Episode 4

We are proud to announce the release of State Of Security, the podcast, Episode 4. This time around I am hosting John Davis, who riffs on policy development for modern users, crowdsourcing policy and process management, rational risk assessment and a bit of history.

Give it a listen and let us know what you think!

Thanks for supporting the podcast!

How to Use Risk Assessment to Secure Your Own Home

Risk assessment and treatment is something we all do, consciously or unconsciously, every day. For example, when you look out the window in the morning before you leave for work, see the sky is gray and decide to take your umbrella with you, you have just assessed and treated the risk of getting wet in the rain. In effect, you have identified a threat (rain) and a vulnerability (you are subject to getting wet), you have analyzed the possibility of occurrence (likely) and the impact of threat realization (having to sit soggy at your desk), and you have decided to treat that risk (taking your umbrella). That is risk assessment.

However, this kind of risk assessment is what is called ad hoc. All of the analysis and decision making you just made was informal and done on the fly. Pertinent information wasn’t gathered and factored in, other consequences such as the bother of carrying the umbrella around weren’t properly considered, other treatment options weren’t considered, etc. What businesses and government agencies have learned from long experience is that if you investigate, write down and consider such factors rationally and holistically, you end up with a more realistic idea of what you are really letting yourself in for, and therefore you make better risk decisions. That is formal risk assessment.

So why not apply this more formal risk assessment technique to important matters in your own life, such as securing your home? It’s not really difficult, but you do have to know how to go about it. Here are the steps:

  1. System characterization: For home security, the system you are considering is your house, its contents, the people who live there, the activities that take place there, etc. Although you know these things intimately, it never hurts to write them down. Something about viewing information on the written page helps clarify it in our minds.

  2. Threat identification: In this step you imagine all the things that could threaten the security of your home and family. These would be such things as fire, bad weather, intruders, broken pipes, etc. For this (and other steps in the process), you can go beyond your own experience and see what threats other people have identified (e.g., Google searches, insurance publications).

  3. Vulnerability identification: This is where you pair up the threats you have just identified with weaknesses in your home and its use. For example, perhaps your house is located on low ground that is subject to flooding, or you live in a neighborhood where burglaries may occur, or you have old ungrounded electrical wiring that may short and cause a fire. These are all vulnerabilities.

  4. Controls analysis: Controls analysis is simply listing the security mechanisms you already have in place. For example, security controls used around your home would be such things as locks on the doors and windows, alarm systems, motion-detecting lighting, etc.

  5. Likelihood determination: In this step you decide how likely it is that the threat/vulnerability will actually occur. There are really two ways you can make this determination. One is to make your best guess based on knowledge and experience (qualitative judgement). The second is to do some research and calculation and try to come up with actual percentage numbers (quantitative judgement). For home purposes I definitely recommend qualitative judgement. You can simply rate the likelihood of occurrence as high, medium or low.

  6. Impact analysis: In this step you decide what the consequences of threat/vulnerability realization will be. As with likelihood determination, this can be judged quantitatively or qualitatively, but for home purposes I recommend looking at worst-case scenarios. For example, if someone broke into your home, it could result in something as low impact as minor theft or vandalism, or it could result in very high impact such as serious injury or death. You should keep these more dire extremes in mind when you decide how you are going to treat the risks you find.

  7. Risk determination: Risk is determined by factoring how likely threat/vulnerability realization is together with the magnitude of the impact that could occur and the effectiveness of the controls you already have in place (a small worked sketch of this calculation follows this list). For example, you could rate the possibility of home invasion occurring as low and the impact of the occurrence as high. This would make your initial risk rating a medium. Then you factor in the fact that you have an alarm system and un-pickable door locks in place, which would lower your final risk rating to low. That final rating is known as residual risk.

  8. Risk treatment: That’s it! Once you have determined the level of residual risk, it is time to decide how to proceed from there. Is the risk of home invasion low enough that you think you don’t need to apply any other controls? That is called accepting risk. Is the risk high enough that you feel you need to add more security controls to bring it down? That is called risk limitation or remediation. Do you think that the overall risk of home invasion is just so great that you have to move away? That is called risk avoidance. Do you not want to treat the risk yourself at all, and so you get extra insurance and hire a security company? That is called risk transference.
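
For those who like to see the arithmetic, here is a tiny sketch of the qualitative residual-risk calculation from steps 5 through 7. The three-level rating scale and the rule of reducing the rating by one level per effective control are my own illustrative assumptions, not a formal standard:

```python
# Qualitative risk matrix (illustrative values, not an official standard):
# combine likelihood and impact into an initial rating, then reduce the
# rating by one level for each effective control in place, floored at "low".
LEVELS = ["low", "medium", "high"]

def combine(likelihood, impact):
    """Initial risk is the average of the two qualitative ratings."""
    idx = round((LEVELS.index(likelihood) + LEVELS.index(impact)) / 2)
    return LEVELS[idx]

def residual_risk(likelihood, impact, effective_controls):
    idx = LEVELS.index(combine(likelihood, impact))
    return LEVELS[max(0, idx - effective_controls)]

# Home invasion example from the text: low likelihood, high impact
# (initial rating: medium), with two effective controls in place
# (alarm system and high-quality door locks).
print(residual_risk("low", "high", effective_controls=2))  # -> "low"
```

Writing the logic down this way, even informally, forces you to be explicit about the same judgments the formal process asks for: how likely, how bad, and how much your existing controls actually help.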

So, next time you have to make a serious decision in your life such as changing jobs or buying a new house, why not apply the risk assessment process? It will allow you to make a more rational and informed decision, and you will have the comfort of knowing you did your best in making the decision. 

Thanks to John Davis for this post.