InfoSec: A Human Problem, NOT A Technological Problem

I have been involved in (and thinking about) computer network information security since the early 80s. I’ve seen the field develop from the Rainbow Series and BS7799 to the standards we are still using today, such as ISO 27002, the NIST 800 series and PCI DSS. All of this guidance is basically the same at its core: employ strong access controls, monitor the system, employ configuration controls and cover all the other basics. These all seem like sound security controls, and I’m sure we still need to use them. The conundrum is that all of this guidance has consistently failed to solve the problem. Not only hasn’t it solved it, the information security problem is getting worse!

I have been trying to understand why this is. Perhaps it’s too much new tech too fast; perhaps the problem itself is simply insoluble; or perhaps we have just been approaching the problem from the wrong angle all this time. After all, the height of futility has often been described as doing the same thing again and again and expecting a different outcome. So I decided to give the whole subject a fresh look. I took a cue from Marcus Aurelius and started with the basics: what is information security, and why do we need it?

One of the first principles that occurred to me is just this: information security is a human problem, not a technological problem. Computers don’t have desires, they are not duplicitous, they are not evil and they are not aware. They are just tools. It is the humans who develop and use these tools that are exclusively responsible for information theft and corruption.

With that in mind, I suggest we embrace an information security standard based on human foibles and weaknesses of character. I know that in the information security world we always pay lip service to the idea that we are paying attention to the human factor. But from what I see, that is all smoke and mirrors. What I really see is that we continue to throw machines and applications at the information security problem. “Oh, yes,” we say, “this new adaptive security device or SIEM system or whatever new tool will protect our private information! I believe! Hallelujah!”

Good luck with that.

Nothing can replace the flexibility and intuition of a human mind. There are, at best, some schooled and semi-autonomous tools (and I emphasize the word TOOLS) that ape true intelligence. But in reality, they are only effective when combined with human input and oversight.

With all of this in mind, I suggest we look at the computer network security world from a purely human perspective. Expect people to do the worst, and then be elated when they rise above expectations. Plan your tactics with laziness, envy, spite, stupidity, inattention and all of our other bad characteristics in mind. Don’t spend a million dollars on the best current global information security device or service; spend FIVE million dollars on knowledgeable, canny and intelligent human employees.

I intend to write further about this subject and continue to explore the ways we can adjust our security controls and processes to better address the human factor and make inroads into better infosec. It will be interesting to see if it works!

Petya/PetyaWrap Threat Info

As we speak, there is a global ransomware outbreak spreading. The infosec community is working together, in the open, on Twitter and mailing lists sharing information with each other and the world about the threat. 

The infector is called “Petya”/“PetyaWrap” and it appears to spread using PsExec in combination with the EternalBlue exploit from the NSA leak.

The infector targets the following list of file extensions in the current (as of an hour ago) release: https://twitter.com/bry_campbell/status/879702644394270720/photo/1

Those with robust networks will likely find containment a routine activity, while those who haven’t implemented defense in depth and a holistic enclaving strategy are likely in trouble.

Here are the vulnerabilities it is exploiting: CVE-2017-0199 and MS17-010, so make sure you have these patched on all systems. Make sure you find anything that is outside the usual patch cycle, like HVAC, elevators, network cameras, ATMs, IoT devices, printers and copiers, ICS components, etc. Note that this is a combination of a client-side attack and a network attack, so it is likely very capable of spreading to internal systems. The client-side vector is likely to yield access to internals pretty easily.
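For a quick start on that inventory, you can sweep for hosts answering on TCP 445, the SMB port the EternalBlue exploit abuses. Here is a rough Python sketch (the subnet is an example placeholder); note that it only shows exposure, not patch status:

```python
import socket

# Sweep an example /24 for hosts listening on TCP 445 (SMB). Any hit is a
# system reachable over SMB and therefore worth checking for the MS17-010
# patch. This is not a vulnerability scanner; it shows exposure only.
SUBNET = "192.168.1."   # placeholder, adjust for your environment

for host in range(1, 255):
    ip = f"{SUBNET}{host}"
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.3)
    try:
        if s.connect_ex((ip, 445)) == 0:
            print(f"{ip} has TCP 445 open, verify MS17-010 patch level")
    finally:
        s.close()
```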

May only be affecting the MBR, so check that to see if it is true for you. Some chatter about multiple variants. If you can open a command prompt, bootrec may help. Booting from a CD/USB or using a drive rescue tool may be of use. Restoring/rebuilding the MBR seems to have been successful for some victims (untested):

bootrec /RebuildBcd
bootrec /FixMbr
bootrec /FixBoot

The new PetrWrap/Petya ransomware has a fake Microsoft digital signature appended, copied from the Sysinternals utilities. – https://t.co/JooBu8lb9e

Lastline indicated this hash as an IOC: 027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745 – They also found these activities: https://pbs.twimg.com/media/DDVj-llVYAAHqk4.jpg
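If you want to hunt for that IOC on suspect machines, a minimal sketch like the following can sweep a directory tree for files matching the published SHA-256 (the scan path is a placeholder):

```python
import hashlib
import os

# Hash every file under an example directory tree and flag matches against
# the published IOC SHA-256. Locked or unreadable files are skipped.
IOC_SHA256 = {"027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745"}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

for root, _dirs, files in os.walk(r"C:\suspect"):   # placeholder path
    for name in files:
        full = os.path.join(root, name)
        try:
            if sha256_of(full) in IOC_SHA256:
                print(f"IOC MATCH: {full}")
        except OSError:
            pass  # skip files we cannot read
```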

EternalBlue detection rules are firing in several detection products, and ET rules are firing on the Petya sample 71b6a493388e7d0b40c83ce903bc6b04 (which drops 7e37ab34ecdcc3e77e24522ddfd4852d) – https://twitter.com/kafeine/status/879711519038210048

Make sure Office updates are applied in addition to OS updates for Windows; the Office updates are needed to be immune to CVE-2017-0199.

Now is a great time to ensure you have backups that work for critical systems and that your restore processes are functional.

Chatter about wide-scale spread to POS systems across Europe. Many industries impacted so far.

Bitdefender initial analysis – https://labs.bitdefender.com/2017/06/massive-goldeneye-ransomware-campaign-slams-worldwide-users/?utm_source=SMGlobal&utm_medium=Twitter&utm_campaign=labs

Stay safe out there! 


3:48pm Eastern

Update: Lots of great info on detection, response, spread and prevention can be found here: https://securelist.com/schroedingers-petya/78870/

Also, this is the last update to this post unless something significant changes. Follow me on Twitter for more info: @lbhuston 

State Of Security Podcast Episode 13 Is Out

Hey there! I hope your week is off to a great start.

Here is Episode 13 of the State of Security Podcast. This new “tidbit” format comes in under 35 minutes and features some pointers on unusual security questions you should be asking cloud service providers. 

I also provide a spring update about my research, where it is going and what I have been up to over the winter.

Check it out and let me know what you think via Twitter.

Want Better Infosec? Limit Functionality and Visibility

We humans are great at exploiting and expanding new technologies, but we often jump in with both feet before we fully understand the ramifications of what we are doing. I cite the Internet itself. The ARPANET and the TCP/IP suite were designed to enable and enhance communications between people, not to restrict them. Security was barely considered at the beginning and was never a part of the design. Unfortunately, by the time we realized this fact, the Internet was already going great guns and it was too late to change it.

The same thing happened with personal computers. Many businesses found it was cheaper and easier to exploit this new technology than to stay with the mainframe. So they jumped right in, bought off-the-shelf devices and operating systems, networked them together and voila! Business heaven!

Unfortunately, there was a snake in the garden. These computers and operating systems were not designed with businesses, and their attendant need for security, in mind. Such commercial systems have all kinds of functionality and “features” that are not only useless for business purposes but pure gold for hackers.

As with the Internet, once people understood the security dangers of using these products, their use was ingrained and change was practically impossible. All we can do now, at least until these basic flaws are corrected, is try to work around them. One way to make a good start is to limit what these systems can do as much as possible: if it doesn’t have a business function, it should be turned off or removed.

For example, why should most employees have the ability to browse the Internet or check their social networking sites on their business systems? Few employees actually need this functionality, and those who do should be strictly limited and monitored. Almost all job descriptions could get by with a handful of whitelisted websites, and those that truly do need full Internet access should have their own subnet. How many employees these days don’t have a smartphone in their pocket? Can’t they go to Facebook or check their bank account on that?
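To make the whitelisting idea concrete, here is a toy Python sketch of a per-role allow list. The role names and hostnames are made-up examples, and a real deployment would enforce this at a proxy or firewall rather than in a script:

```python
from urllib.parse import urlparse

# Per-role allow lists: the handful of sites each job actually needs.
# Roles and hostnames are invented for illustration.
ALLOWED_SITES = {
    "accounting": {"portal.example-bank.com", "erp.example.com"},
    "support":    {"helpdesk.example.com", "kb.example.com"},
}

def browsing_allowed(role, url):
    host = urlparse(url).hostname or ""
    return host in ALLOWED_SITES.get(role, set())   # default deny

print(browsing_allowed("accounting", "https://erp.example.com/login"))  # True
print(browsing_allowed("accounting", "https://www.facebook.com/"))      # False
```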

There are also many other examples of limiting the functionality of business devices and applications. USB ports, card readers and disc drives are not necessary for most job descriptions. How about all those lovely services and features found in many commercial software applications and operating systems? Why not turn off as many of those as possible? Lots of these things can be disabled centrally using Active Directory Group Policy.
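As one concrete illustration, USB mass storage can be disabled on a Windows host by setting the USBSTOR service’s Start value to 4. Here is a minimal Python sketch (run as Administrator; in practice you would push this setting via Group Policy rather than a script):

```python
import winreg

# Disable the USB mass-storage driver by setting its Start value to 4
# (disabled); a value of 3 re-enables it. Requires Administrator rights.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)
```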

In addition to limiting what systems and people can do, it is also a very good security idea to limit what they can see. Access to information, applications and devices should be strictly based on need to know. And in addition to information, users should not be able to see across the network. Why should a user in workstation space be able to see into server space? Why should marketing personnel have access to accounting information? This means good network segmentation with firewalls, logging and monitoring between the segments. Do whatever you can to limit what systems can see and do, and I guarantee you will immediately see the security benefits.
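To illustrate the deny-by-default mindset behind segmentation, here is a toy Python sketch of a policy table describing which segments may reach which. The subnets are invented examples, and real enforcement belongs in your firewalls, not in application code:

```python
import ipaddress

# Deny-by-default table: which source segments may reach which destination
# segments. Subnets are invented for illustration.
ALLOWED = [
    ("10.10.0.0/24", "10.20.0.0/24"),   # workstations -> application servers
]

def flow_permitted(src_ip, dst_ip):
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for src_net, dst_net in ALLOWED:
        if (src in ipaddress.ip_network(src_net)
                and dst in ipaddress.ip_network(dst_net)):
            return True
    return False   # default deny: segments cannot see each other

print(flow_permitted("10.10.0.5", "10.20.0.9"))   # True
print(flow_permitted("10.10.0.5", "10.30.0.7"))   # False
```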

Nuance Detection: Not Always an Electronic Problem

This month’s theme is nuance detection. As Brent stated in his blog earlier this month, “the core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure.” When IT-oriented people think about this, their minds naturally gravitate to heuristics: how can we establish reliable “normal” user behavior and thereby more easily catch anomalies? And that is as it should be.

But it should also be noted that these “situations that should not exist” are not limited only to cyber events that can be detected and monitored electronically. There are also programmatic and procedural situations that can lead to system compromise and data breach. These need to be detected and corrected too.

One such possible programmatic snafu that could lead to a significant security failure is a lack of proper access account monitoring and oversight procedures. Attackers often create new user accounts or, even better for them, take over outdated or unused access accounts that already exist. These dormant accounts are preferable because there are no active users to notice anomalous activity, and to intrusion detection systems everything seems normal.

I can’t stress enough the importance of monitoring the access account creation, use and retirement process. The account initiation and approval process needs to be strong, the identification process needs to be strong, the monitoring and retirement processes need to be strong, and the often-ignored oversight process needs to be strong. A failure of any one of these processes can lead to illicit access, and when all is said and done, access is the biggest part of the game for the attacker.
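As a simple illustration of the retirement piece, a sketch like the one below can flag dormant accounts for review. It assumes a CSV export of account names and last-logon dates from your directory service; the file name and column names are placeholders:

```python
import csv
from datetime import datetime, timedelta

# Flag accounts idle for 90+ days as candidates for retirement or closer
# monitoring. Expects CSV columns: account, last_logon (YYYY-MM-DD).
STALE_AFTER = timedelta(days=90)

def stale_accounts(path):
    now = datetime.now()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last = datetime.strptime(row["last_logon"], "%Y-%m-%d")
            if now - last > STALE_AFTER:
                yield row["account"], (now - last).days

for account, idle_days in stale_accounts("last_logons.csv"):
    print(f"REVIEW: {account} idle for {idle_days} days")
```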

Another dangerous procedural security problem is the system user who makes lots of errors with security repercussions, or who just can’t seem to follow the security rules. Maybe they are harried and stressed, maybe just forgetful. Or perhaps they think the whole “security thing” is a waste of their time. Whatever the reason, these sources of security incidents need to be detected and corrected just like any other security problem.

And once again, there should be regular processes in place for dealing with these individuals. Records of security and compliance errors should be kept in order to facilitate detection of repeat transgressors. Specific, hierarchical procedures should be put in place for addressing the problem, including levels of discipline and how they should be imposed. And here too, there should be an oversight component to ensure the process is being carried out properly.

These are just a couple of the programmatic and procedural security situations that demand detection and correction. I’m sure there are many more. So my advice is to look at your security situation holistically and not just from the high tech point of view.


Detection: Humans in the Loop a Must

Detecting incidents is probably the most difficult network security task to perform well and consistently. Did you know that fewer than one out of five security incidents are detected by the organization being affected? Most organizations only find out they’ve experienced an information security incident when law enforcement comes knocking on their door, if they find out about it at all, that is. And that can be very bad for business in the present environment. Customers are increasingly demanding stronger information security measures from their service providers and partners.

In order to have the best chance of detecting network security incidents, you need to record and monitor system activities. However, there is no easier way to shut down the interest of a network security or IT administrator than to say the word “monitoring”. You can just mention the word and their faces fall as if a rancid odor had suddenly entered the room! And I can’t say that I blame them. Most organizations still do not recognize the true necessity of monitoring, and so do not provide proper budgeting and staffing for the function. As a result, already fully tasked (and oftentimes inadequately prepared) IT or security personnel are handed the job. This not only leads to resentment, but also virtually guarantees that the job will not be performed effectively.

But all is not gloom and doom. Many companies are reacting to the current business environment and are devoting more resources to protecting their private information. In addition, the security industry is constantly developing new tools that streamline and remove much of the drudge work from the monitoring and detection tasks. And I certainly recommend that businesses employ these tools to full effect. Use log aggregation tools, parsers, artificial intelligence and whatever else is made available for these jobs.
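As a tiny illustration of what even a simple parser can do, here is a Python sketch that tallies failed logins per source IP from an SSH auth log. The log path and line format are assumptions based on a typical Linux syslog setup:

```python
import re
from collections import Counter

# Tally failed SSH logins per source IP and surface the noisiest offenders.
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

counts = Counter()
with open("/var/log/auth.log") as log:   # typical path on Debian/Ubuntu
    for line in log:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1

for ip, hits in counts.most_common(10):
    print(f"{ip}: {hits} failed logins")
```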

However, it behooves us not to rely on these new magic bullets too much. As can be easily demonstrated from the history of security in general, there has never been a defense strategy that cannot be overcome by human cleverness and persistence. This continues to be demonstrably true in the world of information security.

My advice is to use the new tools to their maximum effectiveness, but to use them wisely. Only spend enough on the technology to accomplish the jobs at hand; don’t waste your money on redundant tools and capabilities. Instead, spend those savings on information security personnel and training. It will pay you well in the long run.

Revisiting Nuance Detection

The core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure. A simple, elegant example would be a motion sensor on a safe in your home, combined with something like your home alarm system.
 
A significant failure state would be for the motion sensor inside the safe to trigger while the home alarm system is set to away mode. When the alarm is in away mode, there should be no condition that triggers motion inside the safe. If motion is detected at any time, you might choose to alert in a minor way. But if the alarm is set to away mode, you might signal all kinds of calamity: flashing lights, bells and whistles, for example.
 
This same approach can apply to your network environment, applications or data systems. Define what a significant failure state looks like, and then create detection and alerting mechanisms, even if conditional, for the indicators of that state. It can be easy. 
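Here is a minimal Python sketch of the safe-and-alarm example above. The point is simply that alert severity hinges on a combination of conditions rather than on any single sensor:

```python
# Severity depends on the combination of conditions, not any one sensor.
def alert_level(safe_motion, alarm_mode):
    if safe_motion and alarm_mode == "away":
        return "CRITICAL"   # motion in the safe while nobody should be home
    if safe_motion:
        return "NOTICE"     # motion while someone is home: log it quietly
    return "OK"

print(alert_level(safe_motion=True, alarm_mode="away"))   # CRITICAL
print(alert_level(safe_motion=True, alarm_mode="home"))   # NOTICE
```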
 
I remember thinking more deeply about this for the first time when I saw Marcus Ranum give his network burglar alarm speech at Defcon, what seems like a thousand years ago now. That moment changed my life forever. Since then, I have always wanted to work on small detections: the most nuanced of fail states, the deepest signs of compromise. HoneyPoint™ came from that line of thinking, albeit many years later. (Thanks, Marcus, you are amazing, BTW!) 🙂
 
I’ve written about approaches to it in the past, too: things like detecting web shells, detection-in-depth techniques and such. I even made some nice maturity and deployment models.
 
This month, I will be revisiting nuance detection more deeply, creating some more content around it and speaking about it more openly. I’ll also cover how we have extended HoneyPoint with the Handler portion of HoneyPoint Agent in order to fully support event management and data handling into your security alerting systems from basic scripts and simple tools you can create yourself.
 
Stay tuned, and in the meantime, drop me a line on Twitter (@lbhuston) and let me know more about nuance detections you can think of or have implemented. I’d love to hear more about it. 

Network Segmentation versus Network Enclaving

As we have discussed in earlier blogs, network segmentation is the practice of splitting computer networks into subnets using combinations of firewalls, VLANs, access controls and policies & procedures. We have seen that the primary reason for segmenting networks is to prevent a simple perimeter breach from exposing the totality of an organization’s information assets. So what is the difference between network segmentation and network enclaving?

One of the differences is just the degree of segmentation you impose upon the network. Enclaves are more thoroughly segmented from the general network environment than usual. In fact, enclaving is sometimes just described as “enhanced network segmentation.”

Another difference between segmentation and enclaving is the primary threat enclaving strives to thwart: the internal threat. Although the preponderance of cyber-attacks come from external threat sources such as hackers, cyber-criminals and nation states, many of the most devastating breaches originate from internal sources such as employees and trusted service providers. These internal information security breaches may be purposeful attacks or may simply be caused by employee error. Either way, they are just as devastating to an organization’s reputation and market share.

A rarely considered difference between enclaving and network segmentation is physical security. When fully controlling access to information assets based on the principle of need to know, it is not enough to just control logical access. It is necessary to restrict physical access to work areas and computer devices as well. These areas should be locked, and access by authorized personnel should be recorded and monitored. Visitors and service providers should be pre-approved and escorted when in protected areas.

An obvious problem with enclaving is that it is more difficult to implement and maintain than the usual information security measures. It requires more planning, more devices and more employee hours. So why should businesses trying to control expenditures put their resources into enclaving?

As an information security professional I would say that it should be done because it is the best way we know to protect information assets. But for many business concerns, the greatest benefit of true enclaving is in securing protected and regulated information such as payment card information, patient health records and personal financial information. If you employ enclaving to protect such assets, you are showing clients and regulators alike that your business is serious about securing the information in its charge. And in today’s business climate, that can be a very important differentiator indeed!