About the Cyber Incident Reporting for Critical Infrastructure Act of 2022

The Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) was adopted in March of 2022 and is an outgrowth of the National Infrastructure Protection Plan (NIPP) that has been in place since 2013. What this means for organizations that are covered critical infrastructure entities is that they will be required to report cyber incidents and ransomware payments to the Cybersecurity and Infrastructure Security Agency (CISA) within very short time frames. Specifically, these organizations must:

  • Report any “covered cyber incident” to CISA within 72 hours of determining that the incident has occurred
  • Report any ransom payment to CISA within 24 hours of making it
  • Provide CISA with supplemental information when substantial or new information regarding the incident becomes available to the entity
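The reporting deadlines above are simple arithmetic, and are easy to track in an incident response workflow. The sketch below is purely illustrative (the function name and structure are my own, not part of the statute or of any CISA tooling), assuming the clock starts when the entity determines a covered incident occurred or makes a ransom payment:

```python
from datetime import datetime, timedelta, timezone

# Illustrative helper: compute the CIRCIA reporting deadlines described
# above from the time the entity determined a covered incident occurred
# and, if applicable, the time a ransom payment was made.
def circia_deadlines(determined_at, ransom_paid_at=None):
    deadlines = {"covered_incident_report": determined_at + timedelta(hours=72)}
    if ransom_paid_at is not None:
        deadlines["ransom_payment_report"] = ransom_paid_at + timedelta(hours=24)
    return deadlines

determined = datetime(2022, 6, 1, 9, 0, tzinfo=timezone.utc)
paid = datetime(2022, 6, 2, 9, 0, tzinfo=timezone.utc)
print(circia_deadlines(determined, paid))
```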

A question that immediately occurs upon reading these requirements is: what is a “covered cyber incident” under CIRCIA? A covered cyber incident under this law must meet at least one of the following criteria. A covered cyber incident causes or creates:

  • “Substantial loss of confidentiality, integrity, or availability” in information systems or “serious impact on the safety and resiliency” of operations
  • “Disruption of business or industrial operations,” including service denials, ransomware attacks, or exploitation of “zero-day vulnerabilities”
  • “Unauthorized access or disruption of business or industrial operations” from the loss of services facilitated through or caused by a third-party data hosting provider or supplier

What business sectors are considered critical infrastructure in the U.S.? Critical infrastructure includes the following 16 sectors:

  1. The Chemical sector
  2. The Commercial Facilities sector
  3. The Communications sector
  4. The Critical Manufacturing sector
  5. The Dams sector
  6. The Defense Industrial Base sector
  7. The Emergency Services sector
  8. The Energy sector
  9. The Financial Services sector
  10. The Food and Agriculture sector
  11. The Government Facilities sector
  12. The Healthcare and Public Health sector
  13. The Information Technology sector
  14. The Nuclear Reactors, Materials and Waste sector
  15. The Transportation Systems sector
  16. The Water and Wastewater Systems sector

So, how are you to know if your organization is included under this new law? That is being determined now by CISA. To define a covered entity under the law, the agency is considering three factors:

  1. The consequences that a particular cyber incident might have on national or economic security, public health and safety
  2. The likelihood that the entity could be targeted for attack
  3. The extent to which an incident is likely to disrupt the reliable operation of critical infrastructure

These criteria cover not only critical infrastructure organizations but also organizations that support the security and resiliency of critical infrastructure.

Luckily, organizations in this sector will have some time to get ready for these new requirements. The deadline for the publication of the Notice of Proposed Rulemaking is not until March 15, 2024, and the deadline for issuance of the Final Rule is slated for September 15, 2025. My advice is to take advantage of this time and prepare!

Use the CISA Known Exploited Vulnerabilities Catalogue to Improve Your Patching Program

Cybercriminals are finding and exploiting vulnerabilities in programs and equipment faster than ever. For example, just this week the Cybersecurity and Infrastructure Security Agency (CISA) warned of two vulnerabilities with CVSS ratings of 9.8 that are being actively exploited in the wild to attack unpatched versions of multiple product lines from VMware and of BIG-IP software from F5. According to an advisory published Wednesday, the vulnerabilities (tracked as CVE-2022-22954 and CVE-2022-1388) were reverse engineered by attackers, an exploit was developed, and unpatched devices were being attacked within 48 hours of the patches’ release. Currently, this kind of rapid exploitation is not at all unusual. This means that to keep in step, organizations must not only monitor all of their IT assets for vulnerabilities, they must also patch them quickly and intelligently.

This is where the CISA Known Exploited Vulnerabilities Catalogue (also known as the “must patch list”) can be a real help. It is free to all, regularly updated, and can be accessed at https://www.cisa.gov/known-exploited-vulnerabilities-catalog. What is nice about this tool is that it only includes vulnerabilities that are known to be currently exploited and dangerous. This helps you avoid wasting time and effort patching vulnerabilities that can wait. The catalogue also helps prevent organizations from concentrating too much on Microsoft systems. When you view the current catalogue, you will see exploited vulnerabilities in products from Apple, Cisco, VMware, F5, Fortinet, Google and IBM, just to name a few.

As we have emphasized before, it is very important to track all of your IT assets. That is why maintaining current inventories of all hardware devices, software applications, operating systems and firmware applications on your networks is listed as Job #1 in cutting-edge information security guidance. Once you have a process in place to ensure that your inventories are complete and regularly updated, why not leverage all of that work to inform your patching and security maintenance program? You can simply compare the must patch list with your IT asset inventories and see if any of the currently exploited vulnerabilities pertain to your systems. If they do, that gives you a quick guide on which systems should be immediately patched. Remember that in the current threat environment, speed is indeed of the essence!
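That comparison is easy to automate. CISA publishes the catalogue as a machine-readable JSON feed; the sketch below shows one way to cross-reference it against a simple product inventory. The feed URL and field names (`cveID`, `vendorProject`, `product`) reflect the catalogue’s published schema, but verify them against the current feed before relying on this:

```python
import json
from urllib.request import urlopen

# CISA's machine-readable KEV feed (see the catalogue page for the
# current location and schema).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def fetch_catalog():
    """Download the current KEV catalogue (requires network access)."""
    with urlopen(KEV_URL) as resp:
        return json.load(resp)

def match_kev_to_inventory(catalog, inventory_products):
    """Return KEV entries whose product name appears in our asset inventory."""
    wanted = {p.lower() for p in inventory_products}
    return [v for v in catalog["vulnerabilities"]
            if v["product"].lower() in wanted]

# Offline example using a tiny catalogue in the same shape as the feed:
sample = {"vulnerabilities": [
    {"cveID": "CVE-2022-1388", "vendorProject": "F5", "product": "BIG-IP"},
    {"cveID": "CVE-2021-44228", "vendorProject": "Apache", "product": "Log4j2"},
]}
hits = match_kev_to_inventory(sample, ["BIG-IP", "Windows Server"])
print([v["cveID"] for v in hits])  # ['CVE-2022-1388']
```

In practice, the matching logic would key on whatever product and version fields your inventory system maintains; exact string comparison is only the simplest possible approach.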

Patching Perfection Now a Must for All Organizations

Look at the state of cybersecurity now. What a mess! Things have been getting steadily worse for years, and there seems to be no end in sight. Every time we seem to be getting a handle on one new malware campaign, another one comes online to bedevil us. The latest iteration is the Log4j debacle. In its wake, the government has demanded that its departments increase their efficiency and timeliness in patching their systems. Non-government organizations should take a cue from this and also increase their efforts to patch their systems in a timely manner. It is certain that cybercriminals are not wasting any time in exploiting unpatched vulnerabilities on the computer networks of all kinds of organizations.

One thing to keep in mind in the present environment is that the most serious and far-ranging exploits against computer networks in the last several years are coming from nation states and government-sponsored hackers. These groups are developing very clever attacks and then striking selected targets all at once. Once they have taken their pound of flesh, they ensure that their exploits are shared with cybercriminals around the world so that they too may get on board the gravy train. That means that organizations that were not a part of the original attack list have some amount of time to make their systems secure. But this lag time may be of rather short duration. It would be unwise to simply wait for the next patching cycle to address these virulent new exploits. This means that organizations need to institute programs of continuous vulnerability monitoring and patching, despite the headaches such programs bring with them.

Another thing to keep in mind is that organizations need to ensure that all network entities are included in the patching program, not just Windows machines. All operating systems, software applications, hardware devices and firmware applications present on the network should be addressed. To ensure that all these network entities are included, we advocate combining vulnerability management programs with hardware and software inventories. That way you can ensure that no systems on the network are “falling through the cracks” when it comes to monitoring and patching.

Although perfect patching is not a panacea, and is reactive rather than proactive in nature, it goes a long way in preventing successful attacks against the average organization. This is especially true if your reaction time is short!

How to Calculate Cyber Security Risk Value and Cyber Security Risk

There has been a lot of interest lately in formulas for calculating cyber security risk value. That is not at all surprising given the crisis in cyber security that has intensified so greatly in the last few years. Everyone from large government organizations and corporations to small businesses and even individuals is struggling to get a handle on data breaches, ransomware, supply chain attacks, malware incursions and all the other cyber-ills that beset us from every angle. And to gain that handle, organizations must be able to assign relative value to their information assets and systems. It only makes sense to provide the highest level of protection to those information assets that are the most critical to the organization, or those that contain the most sensitive information. Hence the need for the ability to calculate risk value.

The formula for risk value, as it pertains to cyber security, is simply stated as risk value = probability of occurrence x impact. This should not be confused with the formula for calculating overall cyber security risk, which is risk = (threat x vulnerability x probability of occurrence x impact) / controls in place. As can be seen, cyber security risk value is a subset of the larger cyber security risk calculation. It is useful because it allows the organization to assign a value to the risk, either as a level (e.g. high, medium or low) or as an actual cost (e.g. dollars, time or reputation). The more realistically risk value can be calculated, the better an organization can rate the actual value of an information asset. In other words, it is the meat of risk assessment.
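As a rough illustration, the two formulas can be expressed numerically. The 1-to-3 scoring scale here is my own illustrative choice; use whatever scale your assessment methodology defines:

```python
# Risk value: probability of occurrence x impact.
# Scores here use an illustrative 1-3 scale (1 = low, 2 = medium, 3 = high).
def risk_value(probability, impact):
    return probability * impact

# Overall risk: (threat x vulnerability x probability x impact) / controls.
# Controls act as a divisor: the stronger they are, the lower the residual risk.
def risk(threat, vulnerability, probability, impact, controls):
    return (threat * vulnerability * probability * impact) / controls

print(risk_value(3, 3))     # worst case on a 1-3 scale: 9
print(risk(3, 2, 3, 3, 2))  # 27.0
```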

So, let’s take a look at the two factors in risk value and see how we can calculate them. First is probability of occurrence (or likelihood) determination. According to NIST, to derive the overall likelihood of a vulnerability being realized in a particular threat environment, three governing factors must be considered:

  1. Threat source motivation and capability: Is the threat source likely to be interested in the information asset? Can they make money or gain advantage from it? Do they have the ability to get at the asset? Are there known malware or social engineering techniques that may be able to get at the asset?
  2. Nature of the vulnerability: Is the vulnerability due to human nature? Is it a weakness in coding? Is it easily exercised or is it difficult to exercise? Is it presently being exploited in the wild?
  3. Existence and effectiveness of current controls: What security mechanisms are in place that could possibly prevent or detect exercise of the vulnerability? Have these controls been useful in stopping similar exploits in the past? Have other organizations demonstrated controls that have been effective in countering exercise of the vulnerability?

There is also a handy table for rating the likelihood of occurrence as high, medium or low:


  • High: The threat-source is highly motivated and sufficiently capable, and controls to prevent the vulnerability from being exercised are ineffective.
  • Medium: The threat-source is motivated and capable, but controls are in place that may impede successful exercise of the vulnerability.
  • Low: The threat-source lacks motivation or capability, or controls are in place to prevent, or at least significantly impede, the vulnerability from being exercised.


Now let’s look at the other factor: impact. When judging the impact of the compromise of an information asset, we need to carefully consider a couple of factors:

  1. System and/or data criticality: What would happen if the information asset was illicitly modified? (Loss of integrity) What would happen if the information asset or system was not accessible or working? (Loss of availability) What would happen if the privacy of the information asset was compromised? (Loss of confidentiality) How much money per time period would the organization lose if the information asset was compromised?
  2. System and/or data sensitivity: Is the information asset proprietary to the organization? Is the information asset protected by government or industry regulation? Could compromise of the information asset lead to lawsuits? Could compromise of the information asset lead to loss of reputation or business share?

It should be noted that impact levels can be gauged in two ways: Quantitatively or qualitatively. Judging impact quantitatively means putting an actual dollar value on the successful compromise of an information asset. This type of impact analysis is very useful to business management, but is very difficult to accurately calculate in many cases. In my opinion, quantitative impact analysis works best when the complexity of the system is small. As complexity grows, so does the inaccuracy of the calculation.

Qualitative impact is easier to calculate, and is liable to be more useful when judging impact of complex systems or the enterprise as a whole. Qualitative impact ratings result in levels of impact such as high, medium or low, although I have seen impact level granularity of five or more levels. NIST has a handy table for judging the magnitude of a business impact:


  • High: Exercise of the vulnerability (1) may result in the highly costly loss of major tangible assets or resources; (2) may significantly violate, harm, or impede an organization’s mission, reputation, or interest; or (3) may result in human death or serious injury.
  • Medium: Exercise of the vulnerability (1) may result in the costly loss of tangible assets or resources; (2) may violate, harm, or impede an organization’s mission, reputation, or interest; or (3) may result in human injury.
  • Low: Exercise of the vulnerability (1) may result in the loss of some tangible assets or resources or (2) may noticeably affect an organization’s mission, reputation, or interest.

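The likelihood and impact ratings can then be combined into an overall risk level, in the style of the classic NIST risk matrix. The numeric scores and thresholds below are one common choice, not the only defensible mapping:

```python
# Combine a likelihood level and an impact level into an overall risk
# level, risk-matrix style. Scores and cutoffs are illustrative.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood, impact):
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_level("high", "high"))   # high
print(risk_level("medium", "low"))  # low
```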

I personally have employed these paradigms and definitions in performing risk assessments for organizations of many types over the last two decades and have found them very useful in assigning both risk value and overall risk. They help me to be inclusive and clear in my judgments while operating in a world of complexity and uncertainty.

New Federal Banking Rule Requires Notifying Regulators of Cyber Incident Within 36 Hours

Here is a new reason to get your cybersecurity incident response program in order: federal banking regulators have issued a new rule requiring banks to notify regulators of “qualifying” cybersecurity incidents within 36 hours of recognition. The rule is a collaboration of the FDIC, the Federal Reserve and the Office of the Comptroller of the Currency, and takes effect on April 1, 2022.

It’s not as bad as it seems, though. According to the rule, a computer security incident is defined as an occurrence that “results in actual harm to the confidentiality, integrity or availability of an information system or the information that that system processes, stores or transmits.” However, a computer security incident that must be reported under the new timeline is one that has disrupted or degraded a bank’s operations and its ability to deliver services to a material portion of its customer base and business lines. Since this is somewhat nebulous, the regulators also listed a number of examples of incidents requiring 36-hour notification. These include (but are not limited to):

  • A failed system upgrade resulting in widespread user outage.
  • A large-scale DDoS attack disrupting account access for more than four hours.
  • A ransomware attack that encrypts core banking systems or backup data.
  • A bank service provider experiencing a widespread system outage.
  • A computer hacking incident disabling banking operations for an extended period of time.
  • An unrecoverable system failure resulting in activation of business continuity / disaster recovery plan.
  • Malware on a bank’s network that poses an imminent threat to core business lines or critical operations.

This same rule also requires banking service providers to notify at least one bank-designated point of contact at each affected customer banking organization “as soon as possible” when the service provider has experienced a computer security incident that disrupts services for 4 hours or more.

Although 36 hours seems like an adequate amount of time for banks to notify their regulators, in reality this time is very short indeed. From having worked with financial institutions that have suffered various compromises in the past, we know that determining whether an incident is real, and exactly what happened, when, how, and by whom, are thorny problems that can take days to sort out. There is also the reality that modern cyberattacks often have multiple stages, in which one attack is used to obfuscate other, more insidious attacks launched during the confusion. The regulators have been working with the banking industry to try to craft requirements that do not overly burden affected financial institutions during times of crisis, but who knows how well that will work? Guess we’ll see next spring!

Is it Possible to Identify all Risks Associated with a Project, Program or System?

How good is risk assessment? Can a risk assessment actually identify all the risks that might plague a particular project, program or system? The short answer is no, not entirely.

Since humans became sentient and gained the ability to reason, we have been using our logical ability to attempt to see into the future and determine what may be coming next. We see what is going on around us, we remember what has happened in the past, we learn what others have experienced, and we use that information as our guide to calculating the future. And that paradigm has served us well, generally speaking. We have the logical ability to avoid previously made mistakes and predict future trends pretty well. However, we never get it 100% right. It is a truism that every system ever designed to protect ourselves and our assets has been defeated sooner or later. That is why a risk engineer will never tell you that their security measures will provide you with a zero-risk outcome. All you can do is lessen risk as much as possible.

One reason for this is an imperfect understanding of all the factors that contribute to risk for any given system or situation. These factors include understanding exactly what we are attempting to protect, understanding threats that menace the asset, understanding mechanisms that we have in place to protect the asset and understanding weaknesses that those threats may be able to exploit to defeat our protection mechanisms. If any one of these factors is imperfectly understood and integrated with the other factors involved, risk cannot be wholly eliminated.

Understanding what we are trying to protect is usually the easiest factor to get right, especially if it is something simple like money or our home. However, even this task can become daunting when you are trying to completely understand something as complex as a software application or a computer network. These sorts of things are often composed of parts that we ourselves have not constructed, such as standard bits of code or networking devices that we simply employ in our bigger design but do not completely understand.

Understanding threats that menace our assets is more difficult. We are pretty good at protecting ourselves against threats that have been employed by attackers before. But the problem lies in innovative threats that are entirely new or that are novel uses and combinations of previously identified threats. These are the big reasons why we are always playing catchup with attackers.

Understanding the mechanisms we have in place to protect our assets is another area we can accomplish fairly well, but even this factor is often imperfectly understood. For example, how many of you have purchased a security software package to protect your network, but then have trouble getting it to work to its greatest effect because your team doesn’t have a handle on all of its complexities? We have seen this often in our work.

Finally, understanding weaknesses in our protection mechanisms may be the hardest factor of all to deal with. Often, security vulnerabilities go unrecognized until some clever attacker comes up with a zero-day exploit to take advantage of them. Or sometimes simple vulnerabilities seem easy to protect against until someone figures out that you can string a few of them together to effect a big compromise.

So, to get the most out of risk assessment, you need to gain the greatest possible understanding of all the factors that make up risk. In addition, you need to guard against complacency and ensure not only that you are protecting your assets to the greatest extent your ability and budget will allow, but also that you are prepared for those times when your efforts fail and a security compromise does occur.

Why Penetration Testing Should Accompany Vulnerability Assessment

Twenty years ago, the world of network security was a whole different ballgame. At that time, the big threat was external attackers making their way onto your network and wreaking havoc. As hard as it is to believe now, many businesses and organizations did not even employ firewalls on their networks at that time! The big push among network security professionals then was to ensure that everyone had good firewalls, or “network perimeter” security, in place. This is the time when vulnerability assessment of distributed computer networks became big.

Vulnerability assessment entails examining networks for weaknesses such as exposed services and misconfigurations that could be exploited by attackers to gain access to private information and systems. This type of testing was encouraged by professionals to give businesses and organizations information about the weaknesses that were actually present at the time of testing. At first, vulnerability assessment was usually only conducted against the external network (that part of the network that is visible from outside the business, usually over the Internet).
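As a drastically simplified illustration, the most basic thing a vulnerability scanner does is discover which services a host exposes. Real tools such as Nessus go far beyond this (version detection, known-vulnerability checks, configuration audits); this sketch shows only the service-discovery step, and should only ever be pointed at hosts you are authorized to test:

```python
import socket

# Check which of a list of TCP ports are accepting connections on a host.
# This is only the exposed-service discovery step of vulnerability
# assessment, not a vulnerability scan in any real sense.
def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Only scan hosts you are authorized to test.
print(open_ports("127.0.0.1", [22, 80, 443]))
```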

Most businesses and organizations embraced the need for firewalls and external vulnerability assessments as time progressed. This was not only because doing so made good sense, but because of regulatory requirements penned to meet the requirements of modern laws such as HIPAA, GLBA and SOX. However, many did not see the need for other security studies such as internal vulnerability assessment (VA). Internal VA is like external VA, but looks for weaknesses on the internal network used by employees, partners and service providers that have been granted access and privileges to internal systems and services. The need for internal VA became increasingly important as cybercriminals found ways to worm their way into internal networks or the networks of service providers or partners. As more time passed, and network attacks increased in volume and competency, internal VA became more commonly performed among businesses and organizations.

Unfortunately, despite the increase in vulnerability studies, networks continued to be compromised. One of the reasons for this is the limited nature of vulnerability assessment. When a VA is performed, the assessors usually employ network scanning tools such as Nessus. The outputs of these tools show where vulnerabilities exist on the network, and even provide the consumer with recommendations for closing the security holes that were found. But a VA does not go so far as to determine whether these vulnerabilities can actually be exploited by attackers. These tools are also limited in that they do not show how the network may be vulnerable to combination attacks, in which cybercriminals chain together various weaknesses (technical, procedural and configuration weaknesses) on the network to effect big compromises. That is where penetration testing comes into play.

Penetration testing is not automated. It requires expert network security personnel to undertake properly. In penetration testing, the assessor employs the results of vulnerability studies and their own expertise to try to actually penetrate network security mechanisms just as a real-world cybercriminal would do. Obviously, the smarter and more knowledgeable the penetration tester is, the more valid the results they obtain. And for the consumer this can be a great boon.

It is true that penetration testing costs more money than performing vulnerability studies alone. What is little appreciated is the money it can save an organization in the long run. Not only can penetration testing uncover those tricky combined attacks mentioned above, it can also reveal which vulnerabilities found during VA are not presently exploitable by attackers to any great effect. This can save organizations from spending inordinate amounts of time and money fixing vulnerabilities that pose little real danger, and allows them to concentrate their resources on the network flaws that present the most actual danger to the organization.

What is the Difference Between a Risk Assessment and an Audit?

Many different types of organizations and businesses are required to undertake risk assessments and audits, either to satisfy some regulatory body or to satisfy internal policy requirements. But there often are questions about why both must be undertaken each year and what the differences between them are. These processes are very different, are done for different reasons and produce very different results.

A risk assessment in reality is a way to estimate, or make “an informed guess” about the kinds and levels of risk facing just about anything. From a business perspective, you can perform a risk assessment on an individual business process, an information system, a third-party supplier, a software application or the enterprise as a whole. Risk assessments may be performed internally by company personnel, or by specialist, third-party security organizations. They can also be small-scale assessments conducted among a group of interested parties, or they can be large-scale, formal assessments that are comprehensive and fully documented. But whatever type and scale of risk assessment you are undertaking, they all share certain common characteristics.

To perform risk assessment, you first must characterize the system you wish to assess. For example, you may wish to assess the risk to the organization of implementing a new software application. “Characterizing,” in this case, means learning everything you can about the system and what is going to be entailed with installing it, maintaining it, training personnel to use it, how it connects to other systems, etc.

Once you have this information in hand, the next step is to find out what threats and vulnerabilities to the application exist or may appear in the near future. To do this, most organizations look to government and private organizations that keep track of threats and vulnerabilities and rate them for severity, such as DHS, CERT, Cisco or SAP. In addition, organizations look to similar organizations and user groups to learn what threats they have experienced and what vulnerabilities they have found when implementing the software application in question.

The next steps in risk calculation are ascertaining the probability that the threats and vulnerabilities found in the previous steps may actually occur, and the impacts on the organization if they do. The final step is then to take into account the security controls that the organization has in place and the effect these countermeasures might have in preventing attackers from actually compromising the system. Thus, the formula for calculating risk is (threats x vulnerabilities x probability of occurrence x impact)/countermeasures in place = risk.

Looking at the above, it is obvious that there is much room for error in a risk calculation. You might not be able to find all the threats against the application, nor may you be able to determine all the vulnerabilities that exist. Probability of occurrence is also just an estimate, and even impact on the organization may not be fully understood. That is why I said that risk assessment is really just an estimate or educated guess. Audit, on the other hand, is something entirely different.

The goal of an audit is to ascertain if an organization is effectively implementing and adhering to a documented quality system. In other words, an audit examines written policies and processes, and records of how they are actually being implemented, to see if the organization is following the rules and to see if the processes they are following are effective. Auditors should be disinterested third-party professionals and in the case of IT audits are usually CPAs.

Most often, such as in the case of an audit by a regulatory body, a group of auditors will come on-site to the organization and start the process of records examination and interviews with personnel. This is an exhaustive process and contains little or no guesswork. Audits can be limited, such as an audit of an accounting system, or can look at all the business practices of an organization. You can even have an audit done to test the quality and effectiveness of your risk assessment and risk management processes. This is probably where some of the confusion between the two arises. Although both may be mandated for a single organization, they remain very different processes.

IT Security and OT Security Converging

The term “information technology” (also known as “IT”) has been with us for more than 60 years now. It was first coined by Harold Leavitt and Thomas Whisler and published in an article in the Harvard Business Review in 1958 (long before the Internet was conceived of). It refers to all those pieces/parts that make up electronic information systems. The term “operational technology” (also known as “OT”) was first coined nearly half a century later in a research paper from Gartner in 2006. It refers to industrial control systems that are controllable from remote locations, especially those that are controllable over an Internet connection. It has spawned another new acronym: “IIoT” (“industrial internet of things”). For the security industry, these terms highlight one of the biggest security problems facing us today: securing industrial control systems from remote attacks by cybercriminals and hostile nation states.

For most of the Information Age, such terms and considerations were not necessary. Industrial control systems were largely analog and not subject to remote attack. Even after the Internet had been well established, the security of industrial control systems was not seen as a big problem, since the average hacker had little to gain by disrupting such systems. In recent years that has all changed. Industries from infrastructure (e.g. electric grids, pipelines, water systems) to the private sector (e.g. manufacturing, mining, cargo transport) have embraced, and continue to embrace, the Internet as a medium for controlling and communicating with their industrial control systems. It increases efficiency and cuts cost for these concerns. It also allows them to decrease the number of personnel needed and to centralize control and monitoring of these systems. A great boon! Unfortunately, security was not well considered or implemented as these processes were put in place. As a result, industrial control systems are now among the easiest to compromise by Internet attack. On top of that, there is now an attack vector that is attracting average cybercriminals motivated by greed to target industrial control systems: ransomware.

Ransomware allows attackers to make money from almost any business or institution, including industry and infrastructure. Modern ransomware attackers not only threaten to encrypt information and make it unavailable to legitimate users; they threaten to disrupt industrial control systems or reveal private information publicly. One example is the recent Colonial Pipeline debacle. Because of this, it is increasingly important for industrial concerns to solve their Internet security problems. The problem is finally being recognized by the U.S. Government at the highest level: President Biden has recently threatened reprisals for attacks against vital American infrastructure and manufacturing concerns.

In addition, CISA has recently published a fact sheet detailing its recommendations for protecting these systems against ransomware attacks. These recommendations include:

  • Determining how much your critical OT systems rely on key IT infrastructure.
  • Planning for when you lose access to IT and/or OT environments.
  • Exercising your incident response plans, and testing manual controls if OT networks need to be taken offline.
  • Implementing regular data backup procedures for both OT and IT networks.
  • Requiring multi-factor authentication for both OT and IT networks.
  • Segmenting IT and OT networks.
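
To make the recommendations above actionable, an IR team might track them as a simple readiness checklist. The sketch below is a hypothetical Python example: the recommendation names are paraphrased from the list above, and the pass/fail statuses are invented purely for illustration.

```python
# Hypothetical readiness checklist based on the CISA recommendations above.
# Statuses are illustrative examples, not real audit results.

CISA_RECOMMENDATIONS = [
    "Map critical OT dependencies on IT infrastructure",
    "Plan for loss of access to IT and/or OT environments",
    "Exercise incident response plans and manual controls",
    "Implement regular backups for OT and IT networks",
    "Require multi-factor authentication for OT and IT",
    "Segment IT and OT networks",
]

def readiness_gaps(status: dict) -> list:
    """Return the recommendations not yet implemented."""
    return [r for r in CISA_RECOMMENDATIONS if not status.get(r, False)]

# Example: an organization with backups and MFA in place, but no
# segmentation or dependency mapping yet.
status = {r: False for r in CISA_RECOMMENDATIONS}
status["Implement regular backups for OT and IT networks"] = True
status["Require multi-factor authentication for OT and IT"] = True

gaps = readiness_gaps(status)
```

A report like `gaps` gives management a concrete, prioritized to-do list rather than a vague sense of exposure.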

These are good suggestions and should be implemented as soon as possible. However, they are not a panacea. Nobody has yet come up with a complete answer to the problem of cyberattacks against industrial control systems. Because of this, it is important to remain flexible and to devote adequate resources to fighting this very thorny problem.

Time to Revise and Update Your Incident Response Program

The last couple of years have seen a truly disturbing increase in the sophistication and effectiveness of cyberattacks. It seems that private cybercriminal organizations and those of nation states are feeding off of, and even actively supporting, each other, sharing techniques and malware. Attacks are coming fast and furious from angles that are difficult to predict. If it isn’t attacks against vulnerabilities in the DNS system, it’s exploits of weaknesses in cloud containers, input-output systems, or some other technical component. Added to that are the ever-present threats of phishing attacks, application compromises, zero-days, and ransomware. What’s coming next is anyone’s guess, but I doubt very much the situation is going to get better or easier to cope with. Despite these difficulties, though, this is not the time to throw up our hands in despair. This is the time to prepare as well as we can.

One factor that makes all of these cyber-woes worse for any organization is panic. When people are surprised and unprepared, they often either freeze up and do nothing, or they do the first thing that comes to mind, no matter how inappropriate. In other words, they panic. And the more important the attacked resource, the greater the panic that ensues. The military has had to deal with this situation since time immemorial and has come up with some effective methods of handling it. We would be well advised to take advantage of this hard-won knowledge and apply it to our own incident response plans.

The first step is to construct a program adapted to dealing with both the expected and the unexpected. To deal with the expected, we need to constantly update our incident response procedures to cover the new attack vectors being used by the “enemy.” An example would be supply chain attacks. Does your current IR plan have specific information about, and processes for, responding to a supply chain attack? Does the plan describe, step by step, how to recognize the characteristics of a supply chain attack and how to deal with it? How about ransomware? DNS poisoning attacks? I recommend that someone on the incident response team keep informed about the latest attack vectors and methods and ensure that the whole team is made aware of emerging attacks. Any that pose credible threats to your organization should be researched, and specific methods for reacting to them should be developed and practiced. The best way to document these processes is with checklists and/or decision trees. The military has found that clearly documented processes, accompanied by repeated training, are the surest way of avoiding panic and making the right decisions under stressful conditions.
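
A decision tree of the kind described above can live on paper, but it can also be captured in a few lines of code so it is easy to review and exercise. The following is a minimal Python sketch of a yes/no decision tree for initial ransomware triage; the questions and recommended actions are hypothetical examples for illustration, not a prescribed procedure.

```python
# Minimal decision-tree sketch for incident triage.
# Questions and actions are hypothetical, for illustration only.

class Node:
    def __init__(self, question=None, yes=None, no=None, action=None):
        self.question = question   # question asked at an internal node
        self.yes, self.no = yes, no  # branches taken for yes/no answers
        self.action = action       # recommended action at a leaf node

# Example tree for a suspected ransomware incident.
tree = Node(
    question="Are files being encrypted right now?",
    yes=Node(
        question="Can the affected hosts be isolated from the network?",
        yes=Node(action="Isolate hosts, then notify the IR lead"),
        no=Node(action="Escalate immediately to take the segment offline"),
    ),
    no=Node(action="Preserve evidence and begin scoping the intrusion"),
)

def walk(node, answers):
    """Follow yes/no answers down the tree to a recommended action."""
    while node.action is None:
        node = node.yes if answers[node.question] else node.no
    return node.action

# A responder answers "yes" to both questions and gets one clear action.
action = walk(tree, {
    "Are files being encrypted right now?": True,
    "Can the affected hosts be isolated from the network?": True,
})
```

The point of the structure is the same one the military discovered: under stress, a responder follows one unambiguous path to one clear action instead of improvising.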

This leads me to methods for preparing your IR team to deal with the unexpected. Again, I’ll take a cue from the military. Dealing effectively and calmly with the unexpected in incident response is largely a matter of mindset. As they teach recruits in the Marines, you need to learn to adapt and overcome. The problem is that at panic-level stress it is exceedingly difficult to think calmly, rationally, and logically. Training is the answer to this problem.

Personnel should understand the signs that they are heading towards panic and practice using their logical minds to help control their emotional responses. This is admittedly a difficult thing to do, and the only way I know of to go about it is to practice. IR training sessions should be conducted often, and part of that training should be aimed at preparing the team for handling stressful and unexpected situations. To accomplish this, I recommend unannounced incident response training sessions that the team has no idea are not real. If the team does not believe that the incident is really occurring, they will never become inured to the stress of the situation. They must learn on a visceral level that the worst thing one can do under stress is to surrender to unreason and panic. After all, a calm and rational human mind is the most effective tool and problem solver in the known universe.