Risk Assessment Techniques for Everyone

Risk assessment and treatment is something we all do, consciously or unconsciously, every day. For example, when you look out the window in the morning before you leave for work, see the sky is gray and decide to take your umbrella with you, you have just assessed and treated the risk of getting wet in the rain. In effect, you have identified a threat (rain) and a vulnerability (you are subject to getting wet), you have analyzed the likelihood of occurrence (likely) and the impact of threat realization (having to sit soggy at your desk), and you have decided to treat that risk (taking your umbrella). That is risk assessment and treatment.

However, this kind of risk assessment is what is called “ad hoc.” All of the analysis and decision making you just did was informal and done on the fly. Pertinent information wasn’t gathered and factored in, other consequences such as the bother of carrying the umbrella around weren’t properly considered, other treatment options weren’t weighed, and so on. What businesses and government agencies have learned from long experience is that if you investigate, write down and consider such factors rationally and holistically, you end up with a more realistic idea of what you are really letting yourself in for, and therefore you make better risk decisions. That is formal risk assessment.

So why not apply this more formal risk assessment technique to important matters in your own life such as securing your home? It’s not really difficult, but you do have to know how to go about it. Here are the steps:

  1. System characterization: For home security, the system you are considering is your house, its contents, the people who live there, the activities that take place there, etc. Although you know these things intimately, it never hurts to write them down. Something about viewing information on the written page helps clarify it in our minds.
  2. Threat identification: In this step you imagine all the things that could threaten the security of your home and family. These would be such things as fire, bad weather, intruders, broken pipes, etc. For this (and other steps in the process), you can go beyond your own experience and see what threats other people have identified (e.g., Google searches, insurance publications).
  3. Vulnerability identification: This is where you pair up the threats you have just identified with weaknesses in your home and its use. For example, perhaps your house is located on low ground that is subject to flooding, or you live in a neighborhood where burglaries may occur, or you have old ungrounded electrical wiring that may short and cause a fire. These are all vulnerabilities.
  4. Controls analysis: Controls analysis is simply listing the security mechanisms you already have in place. For example, security controls used around your home would be such things as locks on the doors and windows, alarm systems, motion-detecting lighting, etc.
  5. Likelihood determination: In this step you decide how likely it is that the threat/vulnerability pairing will actually occur. There are really two ways you can make this determination. One is to make your best guess based on knowledge and experience (qualitative judgement). The second is to do some research and calculation and try to come up with actual percentage numbers (quantitative judgement). For home purposes I definitely recommend qualitative judgement. You can simply rate the likelihood of occurrence as high, medium or low.
  6. Impact analysis: In this step you decide what the consequences of threat/vulnerability realization will be. As with likelihood determination, this can be judged quantitatively or qualitatively, but for home purposes I recommend looking at worst-case scenarios. For example, if someone broke into your home, it could result in something as low impact as minor theft or vandalism, or it could result in very high impact such as serious injury or death. You should keep these more dire extremes in mind when you decide how you are going to treat the risks you find.
  7. Risk determination: Risk is determined by factoring in how likely threat/vulnerability realization is with the magnitude of the impact that could occur and the effectiveness of the controls you already have in place. For example, you could rate the possibility of home invasion occurring as low, and the impact of the occurrence as high. This would make your initial risk rating a medium. Then you factor in the fact that you have an alarm system and un-pickable door locks in place, which would lower your final risk rating to low. That final rating is known as residual risk (a short scripted sketch of this rating logic follows the list).
  8. Risk treatment: That’s it! Once you have determined the level of residual risk, it is time to decide how to proceed from there. Is the risk of home invasion low enough that you think you don’t need to apply any other controls? That is called accepting risk. Is the risk high enough that you feel you need to add more security controls to bring it down? That is called risk limitation or remediation. Do you think that the overall risk of home invasion is just so great that you have to move away? That is called risk avoidance. Do you not want to treat the risk yourself at all, and so you get extra insurance and hire a security company? That is called risk transference.
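
To make steps 5 through 7 concrete, here is a minimal sketch in Python of how such a qualitative rating worksheet might be tabulated. The threats, the ratings and the simple "drop one level per effective control" rule are purely illustrative assumptions, not a prescribed scale.

```python
# Illustrative qualitative risk rating: combine likelihood and impact, then adjust
# for existing controls. The scale and the adjustment rule are examples only.

LEVELS = {"low": 1, "medium": 2, "high": 3}
NAMES = {1: "low", 2: "medium", 3: "high"}

def residual_risk(likelihood, impact, effective_controls):
    """Average likelihood and impact, then drop one level per effective control (floor at low)."""
    initial = round((LEVELS[likelihood] + LEVELS[impact]) / 2)   # e.g. low + high -> medium
    adjusted = max(1, initial - effective_controls)
    return NAMES[initial], NAMES[adjusted]

# Hypothetical home-security worksheet: (threat, likelihood, impact, controls already in place)
worksheet = [
    ("home invasion", "low", "high", 1),          # alarm system counted as one effective control
    ("basement flooding", "medium", "medium", 0),
    ("electrical fire", "low", "high", 0),
]

for threat, likelihood, impact, controls in worksheet:
    initial, residual = residual_risk(likelihood, impact, controls)
    print(f"{threat}: initial risk {initial}, residual risk {residual}")
```

The arithmetic itself is trivial; the value is that writing the ratings down forces you to be explicit about every judgement you make.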

So, next time you have to make a serious decision in your life such as changing jobs or buying a new house, why not apply the risk assessment process? It will allow you to make a more rational and informed decision, and you will have the comfort of knowing you did your best in making the decision.

Human-Based Information Security Theory: Part 2

In my last blog, I wrote about the idea that information security is not a technological problem, but a human one. I also posited that the security controls and methodologies we have followed for the last half century have not worked; in fact, they have proven less and less effective as time goes by.

My idea is this: if you want to counter modern information security threats, the most effective tool to throw at them is humans; technological devices should exist purely to:

  1. Prevent attackers from accessing network resources.
  2. Aid humans in collating and parsing monitoring information.
  3. In the future (perhaps), aid humans in retaliating against the attacks perpetrated against our sovereign and private information resources.

For the bulk of information security, I cite humans as the culpable parties. We should realize and plan on our known failings:

  1. Humans are basically lazy, self-interested and unreliable. Despite the fact that most of us do well most of the time, once in a while we all exhibit these characteristics.
  2. Humans can be larcenous, vindictive and contentious, especially when their egos have been bruised or their aspirations have been thwarted.
  3. Humans are changeable: incidents in their private lives can greatly affect their business performance on a day-to-day basis.

We should also plan on the known strengths of humans:

  1. Humans can be noble and above reproach. Appealing to and depending on the rectitude of a true mensch can inspire people to act above their normal inclinations.
  2. Humans are MUCH more perceptive than any machine ever built. Our minds work holistically and are at least an order of magnitude more complex than any two-dimensional system ever built or conceived.
  3. Humans can be inspired to levels of effort and caring that are entirely beyond any machine.

So this is what I propose:

  1. Information security efforts should rely on human monitoring and risk assessment.
  2. Human ego and hubris should be guarded against; when cults of personality and ego arise, it is time for a change to the more rational side of life.
  3. Human frailty and licentiousness should be expected, monitored for and countered effectively in a human manner.
  4. Make your enemies your friends. Find the clever ones that are usurping your defenses and bring them onto your side.
  5. Spend less on machines and applications and use the human resources you do have to their best effect.
  6. (This is the most controversial): Test your people to get a handle on who is trustworthy at the moment and who is not. Reward the loyal, but never lose sight of the fact that humans are changeable.
  7. Make sure that dual controls and separation of duties are employed to their greatest functional effect. No one person should hold the keys to the kingdom.
  8. Distrust centralization. Despite “efficiency and economies of scale,” putting all of your eggs in one basket is NEVER a good idea.

I think if we try this more human approach to information security, perhaps we will be more successful than we have been in the past. After all, what have we got to lose? Nobody can accuse us of doing overly well to this point!

InfoSec: A Human Problem, NOT A Technological Problem

I have been involved in (and thinking about) computer network information security since the early 80s. I’ve seen computer information security develop from BS7799 and the Rainbow Series standards to the standards we are still using today such as ISO 27002, the NIST 800 series, PCI, etc. All of this guidance at its core is basically the same: employ strong access controls, monitor the system, employ configuration controls and follow all the other basics. These all seem like sound security controls, and I’m sure that we still need to use them. The conundrum is that all of this guidance has consistently failed to solve the problem; not only is it unsolved, the information security problem is getting worse!

I have been trying to understand why this is. Perhaps it’s too much new tech too fast, perhaps the problem itself is simply insoluble… or perhaps we have just been approaching the problem from the wrong angle all this time. After all, futility has often been defined as doing the same thing again and again and expecting a different outcome. So I decided to give the whole subject a fresh look. I took a cue from Marcus Aurelius and started with the basics: what is information security and why do we need it?

One of the first principles that occurred to me is just this: information security is a human problem, not a technological problem. Computers don’t have desires, they are not duplicitous, they are not evil and they are not aware. They are just tools. It is the humans that develop and use these tools that are exclusively responsible for information theft and corruption.

With that in mind, I suggest we embrace an information security standard based on human foibles and weakness of character. I know that in the information security world we always pay lip service to the idea that we are paying attention to the human factor. But from what I see, that is all smoke and mirrors. What I really see is that we continue to throw machines and applications at the information security problem. “Oh, yes,” we say, “this new adaptive security device or SIEM system or whatever new tool will protect our private information! I believe! Hallelujah!”

Good luck with that.

Nothing can replace the flexibility and intuition of a human mind. There are, at best, some schooled and semi-autonomous tools (and I emphasize the word TOOLS) that ape true intelligence. But in reality, they are only effective when combined with human input and oversight.

With all of this in mind, I suggest we look at the computer network security world from a purely human perspective. Expect people to do the worst, and then be elated when they rise above expectations. Plan your tactics with laziness, envy, spite, stupidity, inattention and all of our other bad characteristics in mind. Don’t spend a million dollars on the best current global information security device or service; spend FIVE million dollars on knowledgeable, canny and intelligent human employees.

I intend to write further about this subject and continue to explore the ways we can adjust our security controls and processes to better address the human factor and make inroads into better infosec. It will be interesting to see if it works!

Want Better Infosec? Limit Functionality and Visibility

We humans are great at exploiting and expanding new technologies, but we often jump in with both feet before we fully understand the ramifications of what we are doing. I cite the Internet itself. The ARPANET and the TCP/IP suite were designed to enable and enhance communication between people, not to restrict it. Security was given little consideration from the beginning and was never part of the design. Unfortunately, by the time we realized this fact, the Internet was already going great guns and it was too late to change it.

The same thing happened with personal computers. Many businesses found it was cheaper and easier to exploit this new technology than to stay with the mainframe. So they jumped right in, bought off-the-shelf devices and operating systems, networked them together and voila! Business heaven!

Unfortunately, there was a snake in the garden. These computers and operating systems were not designed with businesses, and their attendant need for security, in mind. Such commercial systems have all kinds of functionality and “features” that are not only useless for business purposes but are pure gold for hackers.

As with the Internet, once people understood the security dangers of using these products, their use was ingrained and change was practically impossible. All we can do now, at least until these basic flaws are corrected, is try to work around them. One way to make a good start is to limit what these systems can do as much as possible: if it doesn’t have a business function, it should be turned off or removed.

For example, why should most employees have the ability to browse the Internet or check their social networking sites on their business systems? Few employees actually need this functionality, and those who do should be strictly limited and monitored. Almost all job descriptions could get by with a handful of approved websites (whitelisting), and those that truly do need full Internet access should have their own subnet. How many employees in these times don’t have a smartphone in their pocket? Can’t they go to Facebook or check their bank account on that?
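
As a rough illustration of the whitelisting idea above, the sketch below (Python, with invented roles and site lists) shows the core logic an egress filter or proxy applies on your behalf: deny by default, and permit only the sites explicitly approved for a given job role.

```python
# Illustrative allow-list (whitelist) check for outbound web access by job role.
# The roles, sites and decision function are hypothetical examples.

ALLOWED_SITES = {
    "accounting": {"portal.example-bank.com", "payroll.example.com"},
    "marketing": {"cms.example.com", "analytics.example.com"},
}

def is_permitted(role: str, host: str) -> bool:
    """Deny by default; permit only hosts explicitly approved for the user's role."""
    return host in ALLOWED_SITES.get(role, set())

print(is_permitted("accounting", "portal.example-bank.com"))  # True: on the list
print(is_permitted("accounting", "www.facebook.com"))         # False: not on the list
```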

There are also many other examples of limiting the functionality of business devices and applications. USB ports, card readers and optical drives are not necessary for most job descriptions. How about all those lovely services and features found in many commercial software applications and operating systems? Why not turn off as many of those as possible? There are lots of things that can be disabled centrally using Active Directory Group Policy.

In addition to limiting what systems and people can do, it is also a very good security idea to limit what they can see. Access to information, applications and devices should be strictly based on need to know. Beyond information itself, users should not be able to see across the network. Why should a user in workstation space have the ability to see into server space? Why should marketing personnel have access to accounting information? This means good network segmentation with firewalls, logging and monitoring between the segments. Do whatever you can to limit what systems can see and do, and I guarantee you will immediately see the security benefits.
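
To illustrate the "limit what systems can see" point, here is a small sketch (Python, with invented segment names and rules) of the kind of deny-by-default reachability policy you would enforce with firewalls between segments and then audit against observed traffic. Any flow not explicitly justified by a business function simply does not appear in the policy.

```python
# Illustrative segment-to-segment reachability policy: deny by default,
# allow only explicitly approved flows. Segment names and rules are invented examples.

ALLOWED_FLOWS = {
    ("workstations", "app-servers"): {443},   # users reach the application over HTTPS only
    ("app-servers", "database"): {5432},      # app tier reaches the database tier
    # note: no rule lets "workstations" reach "database" directly
}

def flow_permitted(src_segment, dst_segment, port):
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

# A quick audit: flag observed flows that the policy does not allow.
observed = [("workstations", "app-servers", 443), ("workstations", "database", 5432)]
for src, dst, port in observed:
    if not flow_permitted(src, dst, port):
        print(f"ALERT: unapproved flow {src} -> {dst}:{port}")
```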

Nuance Detection: Not Always an Electronic Problem

This month’s theme is nuance detection. As Brent stated in his blog earlier this month, “the core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure.” When IT-oriented people think about this, their minds naturally gravitate to heuristics: how can we establish a reliable baseline of “normal” user behavior and thereby more easily catch anomalies? And that is as it should be.

But it should also be noted that these “situations that should not exist” are not limited only to cyber events that can be detected and monitored electronically. There are also programmatic and procedural situations that can lead to system compromise and data breach. These need to be detected and corrected too.

One such programmatic snafu that could lead to a significant security failure is the lack of proper access account monitoring and oversight procedures. Attackers often create new user accounts, or even better for them, take over outdated or unused access accounts that already exist. These accounts are preferable because there are no active users to notice anomalous activity, and to intrusion detection systems everything seems normal.

I can’t stress enough the importance of overseeing the entire access account lifecycle: creation, monitoring and retirement. The account initiation and approval process needs to be strong, the identification process needs to be strong, the monitoring and retirement processes need to be strong, and the often-ignored oversight process needs to be strong. A failure of any one of these processes can lead to illicit access, and when all is said and done, access is the biggest part of the game for the attacker.
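
Part of that oversight can even be scripted. The sketch below (Python; the record format and the 90-day threshold are assumptions for the example) flags accounts that have gone unused or that belong to departed employees, exactly the kind of accounts attackers prefer to take over.

```python
# Illustrative dormant-account check: flag accounts with no recent logins or no active owner.
# The record format and the 90-day threshold are assumptions for the example.

from datetime import date, timedelta

DORMANT_AFTER = timedelta(days=90)
TODAY = date(2016, 9, 1)  # fixed date so the example is reproducible

accounts = [
    {"user": "jsmith",  "last_login": date(2016, 8, 30), "employee_active": True},
    {"user": "old_svc", "last_login": date(2016, 1, 15), "employee_active": True},
    {"user": "bgone",   "last_login": date(2016, 8, 20), "employee_active": False},
]

for acct in accounts:
    if not acct["employee_active"]:
        print(f"RETIRE: {acct['user']} belongs to a departed employee")
    elif TODAY - acct["last_login"] > DORMANT_AFTER:
        print(f"REVIEW: {acct['user']} has not logged in for over {DORMANT_AFTER.days} days")
```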

Another dangerous procedural security problem is the system user who makes lots of errors with security repercussions, or who just can’t seem to follow the security rules. Maybe they are harried and stressed, maybe just forgetful. Or perhaps they just think the whole “security thing” is a waste of their time. But whatever the reasons, these sources of security incidents need to be detected and corrected just like any other security problem.

And once again, there should be regular processes in place for dealing with these individuals. Records of security and compliance errors should be kept in order to facilitate detection of transgressors. Specific, hierarchical procedures should be put in place for addressing the problem, including levels of discipline and how they should be imposed. And once again, there should be an oversight component to the process to ensure it is being carried out properly.

These are just a couple of the programmatic and procedural security situations that demand detection and correction. I’m sure there are many more. So my advice is to look at your security situation holistically and not just from the high tech point of view.

Detection: Humans in the Loop a Must

Detecting incidents is probably the most difficult network security task to perform well and consistently. Did you know that fewer than one in five security incidents is detected by the affected organization? Most organizations only find out they’ve experienced an information security incident when law enforcement comes knocking on their door, if they find out about it at all, that is. And that can be very bad for business in the present environment. Customers are increasingly demanding stronger information security measures from their service providers and partners.

In order to have the best chance of detecting network security incidents, you need to record and monitor system activities. However, there is no easier way to shut down the interest of a network security or IT administrator than to say the word “monitoring.” You can just mention the word and their faces fall as if a rancid odor had suddenly entered the room! And I can’t say that I blame them. Most organizations still do not recognize the true necessity of monitoring, and so do not provide proper budgeting and staffing for the function. As a result, already fully loaded (and oftentimes inadequately prepared) IT or security personnel are handed the job. This not only leads to resentment, but also virtually guarantees that the job will not be performed effectively.

But all is not gloom and doom. Many companies are reacting to the current business environment and are devoting more resources to protecting their private information. In addition, the security industry is constantly developing new tools that help streamline and remove much of the drudge work from the monitoring and detection tasks. And I surely recommend that businesses employ these tools to their full effect. Use log aggregation tools, parsers, artificial intelligence and whatever else is made available for these jobs.
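
As a minimal illustration of what log aggregation and parsing boil down to, the sketch below (Python; the log line format and the threshold of three are invented for the example) counts failed logins per source address and raises an alert for a human to investigate. Real SIEM tools do far more, but a person still has to decide what the alert means.

```python
# Illustrative log parsing: count failed logins per source address and alert over a threshold.
# The log line format and the threshold are assumptions for the example.

from collections import Counter

raw_logs = [
    "2016-06-03 09:14:02 FAILED_LOGIN user=admin src=203.0.113.7",
    "2016-06-03 09:14:05 FAILED_LOGIN user=admin src=203.0.113.7",
    "2016-06-03 09:15:11 LOGIN_OK user=jsmith src=10.1.2.30",
    "2016-06-03 09:14:09 FAILED_LOGIN user=root src=203.0.113.7",
]

THRESHOLD = 3
failures = Counter()

for line in raw_logs:
    if "FAILED_LOGIN" in line:
        src = line.split("src=")[-1]
        failures[src] += 1

for src, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {src}; hand this to an analyst")
```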

However, it behooves us not to rely on these new magic bullets too much. As the history of security in general easily demonstrates, there has never been a defense strategy that could not be overcome by human cleverness and persistence. This continues to be demonstrably true in the world of information security.

My advice is to use the new tools to their maximum effectiveness, but to use them wisely. Only spend enough on the technology to accomplish the jobs at hand; don’t waste your money on redundant tools and capabilities. Instead, spend those savings on information security personnel and training. It will pay you well in the long run.

Network Segmentation versus Network Enclaving

As we have discussed in earlier blogs, network segmentation is the practice of splitting computer networks into subnets using combinations of firewalls, VLANs, access controls and policies & procedures. We have seen that the primary reason for segmenting networks is to prevent a simple perimeter breach from exposing the totality of an organization’s information assets. So what is the difference between network segmentation and network enclaving?

One of the differences is just the degree of segmentation you impose upon the network. Enclaves are more thoroughly segmented from the general network environment than usual. In fact, enclaving is sometimes just described as “enhanced network segmentation.”

Another difference between segmentation and enclaving is the primary threat enclaving strives to thwart: the internal threat. Although the preponderance of cyber-attacks come from external threat sources such as hackers, cyber-criminals and nation states, many of the most devastating breaches originate from internal sources such as employees and trusted service providers. These internal information security breaches may be either purposeful attacks or may simply be caused by employee error. Either way, they are just as devastating to an organization’s reputation and business share.

A rarely considered difference between enclaving and network segmentation is physical security. When fully controlling access to information assets based on the principle of need to know, it is not enough to just control logical access. It is necessary to restrict physical access to work areas and computer devices as well. These areas should be locked, and access by authorized personnel should be recorded and monitored. Visitors and service providers should be pre-approved and escorted when in protected areas.

An obvious problem with enclaving is that it is more difficult to implement and maintain than the usual information security measures. It requires more planning, more devices and more employee hours. So why should businesses trying to control expenditures put their resources into enclaving?

As an information security professional I would say that it should be done because it is the best way we know to protect information assets. But for many business concerns, the greatest benefit of true enclaving is in securing protected and regulated information such as payment card information, patient health records and personal financial information. If you employ enclaving to protect such assets, you are showing clients and regulators alike that your business is serious about securing the information in its charge. And in today’s business climate, that can be a very important differentiator indeed!

Network Knowledge and Segmentation

If you look at most cutting-edge network information security guidance, job #1 can be paraphrased as “Know Thy Network.” It is firmly recommended (and in much regulatory guidance demanded) that organizations keep up-to-date inventories of hardware and software assets present on their computer networks. In fact, the most current recommendation is that organizations utilize software suites that not only keep track of inventories, but monitor all critical network entities with the aim of detecting any hardware or software applications that should not be there.

Another part of network knowledge is mapping data flows and trust relationships on networks, and mapping which business entities use which IT resources and information. For this knowledge, I like to go to my favorite risk management tool: the Business Impact Analysis (BIA). In this process, input comes from personnel across the enterprise detailing what they do, how they do it, what resources they need, what input they need, what output they produce and more (see MSI blog archives for more information about BIA and what it can do for your information security program).

About now, you are probably asking what all this has to do with network segmentation. The answer is that you simply must know where all your network assets are, who needs access to them and how data flows among them before you can segment the network intelligently and effectively. It can all be summed up in one phrase: need to know. Need to know is the very basis of access control, and access control is what network segmentation is all about. You do not want anyone on your network to “see” information assets that they do not need to see in order to properly perform their business functions. And by the same token, you do not want network personnel to be cut off from information assets that they do need to perform their jobs. These are the reasons network knowledge and network segmentation go hand-in-hand.
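
To show how BIA output can drive segmentation decisions, here is a small sketch (Python; the departments, assets and mappings are invented examples) that compares the access people actually have against the access the BIA says they need, and flags the need-to-know violations that segmentation and access controls should eliminate.

```python
# Illustrative need-to-know check: compare granted access against BIA-documented need.
# Departments, assets and mappings are invented examples.

needs_access = {           # from the Business Impact Analysis: who needs what to do their job
    "accounting": {"erp", "payroll"},
    "marketing": {"crm"},
}

has_access = {             # from current access-control lists / network visibility
    "accounting": {"erp", "payroll"},
    "marketing": {"crm", "erp"},   # marketing can also reach the ERP system
}

for dept, granted in has_access.items():
    excess = granted - needs_access.get(dept, set())
    if excess:
        print(f"VIOLATION: {dept} can reach {sorted(excess)} without a documented need")
```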

Proper network knowledge becomes even more important when you take the next step in network segmentation: enclaving. I will discuss segmentation versus enclaving in my next blog later this month.

Why Segment Your Network?

Network segmentation is the practice of splitting your computer network into subnetworks or network segments (also known as zoning). This is typically done using combinations of firewalls, VLANs, access controls and policies & procedures. Implementing network segmentation requires planning and effort, and it can entail some teething problems along the way as well. So why should it be done?

The number one reason is to protect the security of your network resources and information. When people first started to defend their homes and enterprises from attack, they built perimeter walls and made sure everything important was inside of those walls. They figured doing this would keep their enemies outside where they couldn’t possibly do any damage. This was a great idea, but unfortunately it had problems in the real world.

People found that the enemy only had to make one small hole in their perimeter defenses to be able to get at all of their valuables. They also realized that their perimeter defense didn’t stop evil insiders from wreaking havoc on their valuables. To deal with these problems, people started to add additional layers of protection inside of their outer walls. They walled off enclaves inside the outer defenses and added locks and guards to their individual buildings to thwart attacks.

This same situation exists now in the world of network protection. As network security assessors and advisors, we see that most networks we deal with are still “flat;” they are not really segmented and access controls alone are left to prevent successful attacks from occurring. But in the real world, hacking into a computer network is all about gaining a tiny foothold on the network, then leveraging that access to navigate around the network. The harder it is for these attackers to see the resources they want and navigate to them, the safer those resources are. In addition, the more protections that hackers need to circumvent during their attacks, the more likely they are to be detected. It should also be noted that network segmentation works just as well against the internal threat; it is just as difficult for an employee to gain access to a forbidden network segment as it is for an Internet-based attacker.

Increased security is not the only advantage of network segmentation. Instead of making network congestion worse, well-implemented segmentation can actually reduce it, because there are fewer hosts, and thus less local traffic, per segment. In addition, segmentation can help you contain network problems by keeping local failures from affecting other parts of the network.

The business reasons for implementing network segmentation are becoming more apparent every day. Increasingly, customers are demanding better information security from the businesses they employ. If the customer has a choice between two very similar companies, they will almost assuredly pick the company with better security. Simply being able to say to your customers that your network is fully segmented and controlled can improve your chances of success radically.

Election Hacking

There has been a lot of talk in the news lately about election hacking, especially about the Russian government possibly attempting to subvert the upcoming presidential election. And I think that in a lot of ways it is good that this has come up. After all, voting systems are based on networked computer systems. Private election and campaign information is stored and transmitted on networked computer systems. That means that hacking can indeed be a factor in elections, and the public should be made well aware of it. We are always being told by ‘authorities’ and ‘pundits’ what is and is not possible. And generally we are gullible enough to swallow it. But history has a lot of lessons to teach us, and one of the most important is that the ‘impossible’ has a nasty way of just happening.

Authorities are saying now that, because of the distributed nature of voting systems and the redundancies in voting record-keeping, it would be virtually impossible for an outside party to rig the numbers in the election. But that is just a direct method of affecting an election. What about the indirect methods? What would happen, for instance, if hackers could just cause delays and confusion on Election Day? If they could cause long lines in certain voting districts and smooth sailing in others, couldn’t they affect the number of Democratic votes versus Republican votes? We all know that if there is a hassle at the polls, a lot of people will just give up and go back home. And this is just one way that elections could be affected by hacking. There are bound to be plenty of others.

With this in mind, isn’t it wise to err on the side of caution? Shouldn’t we as a people insist that our voting systems are secured as well as is possible? Don’t we want to consider these systems to be ‘vital infrastructure’? These are the reasons I advocate instituting best practices as the guidance to be used when securing electronic voting systems. Systems should be configured as securely as possible, associated communications systems should be robust and highly encrypted, risk should be assessed and addressed before the election, monitoring efforts should be strictly followed and incident response plans should be practiced and ready to go. These efforts would be one good way to help ensure a fair and ‘hacker free’ election.