SilentTiger Targeted Threat Intelligence Update

Just a quick update on SilentTiger™, our passive security assessment and intelligence engine. 

We have released a new version of the platform to our internal team, and it now builds the SilentTiger configuration for our analysts automatically. That means clients using our SilentTiger offering need only provide a list of their domain names to engage the process.

This update also includes a host inventory mechanism and a new data point: who operates the IP addresses identified. This is very useful for determining which cloud providers a given set of targets is using, and it makes it much easier to spot industry clusters of service providers that could pose a supply chain risk.

For more information about using SilentTiger to perform ongoing assessments for your organization, your M&A prospects, your supply chain or as a form of industry intelligence, simply get in touch. Clients ranging from global enterprises to SMBs, across a wide variety of industries, are already taking advantage of the capability. Give us 20 minutes, and we’ll be happy to explain!

Want Better Infosec? Limit Functionality and Visibility

We humans are great at exploiting and expanding new technologies, but we often jump in with both feet before we fully understand the ramifications of what we are doing. Take the Internet itself: the ARPANET and the TCP/IP suite were designed to enable and enhance communications between people, not to restrict them. Security was barely considered at the beginning and was never part of the design. Unfortunately, by the time we realized this, the Internet was already going great guns and it was too late to change it.

The same thing happened with personal computers. Many businesses found it was cheaper and easier to exploit this new technology than to stay with the mainframe. So they jumped right in, bought off-the-shelf devices and operating systems, networked them together and voila! Business heaven!

Unfortunately, there was a snake in the garden. These computers and operating systems were not designed with businesses, and their attendant need for security, in mind. Such commercial systems have all kinds of functionalities and “features” that are not only useless for business purposes but pure gold for hackers.

As with the Internet, once people understood the security dangers of using these products, their use was ingrained and change was practically impossible. All we can do now, at least until these basic flaws are corrected, is work around them. A good start is to limit what these systems can do as much as possible: if a feature doesn’t serve a business function, it should be turned off or removed.

For example, why should most employees be able to browse the Internet or check their social networking sites on their business systems? Few employees actually need this functionality, and those who do should be strictly limited and monitored. Almost all job descriptions could get by with a handful of whitelisted websites, and those that truly need full Internet access should have their own subnet. How many employees these days don’t have a smartphone in their pocket? Can’t they go to Facebook or check their bank account on that?
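To make the whitelisting idea concrete, here is a minimal Python sketch of the core check. In practice the enforcement would live in a proxy or firewall rather than in code like this, and the approved-domain list below is purely hypothetical.

```python
# Minimal sketch of per-role website whitelisting. The domain list is a
# hypothetical example for a back-office role; a real deployment would
# enforce this at a forward proxy or firewall.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"intranet.example.com", "benefits.example.com", "mail.example.com"}

def is_allowed(url: str) -> bool:
    """Return True only if the URL's hostname is on the approved list."""
    host = urlparse(url).hostname or ""
    return host.lower() in APPROVED_DOMAINS

for url in ("https://mail.example.com/inbox", "https://www.facebook.com/"):
    print(url, "->", "allow" if is_allowed(url) else "deny")
```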

There are also many other examples of limiting the functionality of business devices and applications. USB ports, card readers and disc drives are not necessary for most job descriptions. How about all those lovely services and features found in many commercial software applications and operating systems? Why not turn off as many of those as possible? Plenty of them can be disabled centrally through Active Directory group policy.
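As one hedged illustration of hunting down unneeded functionality, the sketch below uses the psutil library to flag running Windows services that are not on an approved baseline. The baseline names here are examples only, not a recommendation.

```python
# Rough sketch of auditing a Windows host for services outside an approved
# baseline, using the psutil library (pip install psutil; Windows only).
# The baseline below is a hypothetical example, not a recommendation.
import psutil

APPROVED_SERVICES = {"Dhcp", "Dnscache", "EventLog", "LanmanWorkstation"}

for svc in psutil.win_service_iter():
    info = svc.as_dict()
    if info["status"] == "running" and info["name"] not in APPROVED_SERVICES:
        print(f"Review: {info['name']} ({info['display_name']}) is running "
              f"but not on the approved baseline.")
```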

In addition to limiting what systems and people can do, it is also a very good security idea to limit what they can see. Access to information, applications and devices should be strictly based on need to know. Beyond that, users should not be able to see across the network. Why should a user in workstation space be able to see into server space? Why should marketing personnel have access to accounting information? Preventing this means good network segmentation, with firewalls, logging and monitoring between the segments. Do whatever you can to limit what systems can see and do, and I guarantee you will immediately see the security benefits.
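A quick way to sanity-check such segmentation rules is to probe them from the restricted side. The sketch below, with hypothetical addresses and ports, simply confirms that a host in workstation space cannot open connections into server space.

```python
# Minimal sketch of verifying a segmentation rule: from a workstation-space
# host, attempts to reach server-space ports should be blocked. The hosts
# and ports below are hypothetical examples.
import socket

SERVER_SPACE_TARGETS = [("10.20.0.15", 445), ("10.20.0.15", 3389)]

for host, port in SERVER_SPACE_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"FAIL: {host}:{port} is reachable from workstation space.")
    except OSError:
        print(f"OK: {host}:{port} appears blocked, as the policy intends.")
```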

Nuance Detection: Not Always an Electronic Problem

This month’s theme is nuance detection. As Brent stated in his blog earlier this month, “the core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure.” When IT-oriented people think about this, their minds naturally gravitate to heuristics: how can we establish a reliable picture of “normal” user behavior and thereby more easily catch anomalies? And that is as it should be.
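As a toy illustration of that heuristic mindset, the sketch below baselines a user’s daily login counts and flags a day that deviates sharply from normal. The numbers are fabricated for demonstration.

```python
# Toy illustration of the heuristic approach: baseline a user's daily login
# count, then flag days that deviate sharply from "normal". The sample
# data is fabricated for illustration only.
from statistics import mean, stdev

baseline = [4, 5, 6, 5, 4, 5, 6, 5, 4, 5]   # typical daily login counts
today = 42                                    # observed count to evaluate

mu, sigma = mean(baseline), stdev(baseline)
z = (today - mu) / sigma if sigma else float("inf")
if abs(z) > 3:
    print(f"Anomaly: {today} logins is {z:.1f} standard deviations from normal.")
```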

But it should also be noted that these “situations that should not exist” are not limited only to cyber events that can be detected and monitored electronically. There are also programmatic and procedural situations that can lead to system compromise and data breach. These need to be detected and corrected too.

One such possible programmatic snafu that could lead to a significant security failure is a lack of proper access account monitoring and oversight procedures. Attackers often create new user accounts or, even better for them, take over outdated or unused accounts that already exist. These accounts are preferable because there is no active user to notice anomalous activity, and to intrusion detection systems everything seems normal.

I can’t stress enough the importance of managing the full access account lifecycle: creation, monitoring and retirement. The account initiation and approval process needs to be strong, the identification process needs to be strong, the monitoring and retirement processes need to be strong, and the often-ignored oversight process needs to be strong. A failure of any one of these processes can lead to illicit access, and when all is said and done, access is the biggest part of the game for the attacker.
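To show how simple one piece of this lifecycle oversight can be, here is a hedged sketch that flags accounts idle past a retirement threshold. It assumes a hypothetical CSV export (accounts.csv) with username and last-logon columns.

```python
# Sketch of one lifecycle check: flag accounts whose last logon is older
# than a retirement threshold. Assumes a hypothetical CSV export
# (accounts.csv) with "username" and "last_logon" (ISO date) columns.
import csv
from datetime import datetime, timedelta

THRESHOLD = timedelta(days=90)
now = datetime.now()

with open("accounts.csv", newline="") as f:
    for row in csv.DictReader(f):
        last = datetime.fromisoformat(row["last_logon"])
        if now - last > THRESHOLD:
            print(f"Retire or review: {row['username']} "
                  f"(idle {(now - last).days} days)")
```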

Another dangerous procedural security problem is the system user who makes lots of errors with security repercussions, or who just can’t seem to follow the security rules. Maybe they are harried and stressed, maybe just forgetful. Or perhaps they think the whole “security thing” is a waste of their time. Whatever the reason, these foci of security incidents need to be detected and corrected just like any other security problem.

Once again, there should be regular processes in place for dealing with these individuals. Records of security and compliance errors should be kept in order to spot repeat transgressors. Specific, hierarchical procedures should be put in place for addressing the problem, including levels of discipline and how they are imposed. And there should be an oversight component to the process to ensure it is being carried out properly.
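A toy sketch of such a hierarchical procedure might look like the following. The discipline levels and sample records are hypothetical, and a real program would of course live in HR systems rather than in a script.

```python
# Toy sketch of a hierarchical response procedure: count each user's
# recorded security errors and map the count to a predefined discipline
# level. The levels and sample records are hypothetical.
from collections import Counter

LEVELS = {1: "coaching", 3: "written warning", 5: "formal review"}
records = ["asmith", "bjones", "asmith", "asmith"]  # sample error log

for user, count in Counter(records).items():
    applicable = [lvl for lvl in LEVELS if count >= lvl]
    if applicable:
        print(f"{user}: {count} errors -> {LEVELS[max(applicable)]}")
```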

These are just a couple of the programmatic and procedural security situations that demand detection and correction; I’m sure there are many more. So my advice is to look at your security situation holistically, and not just from the high-tech point of view.


Detection: Humans in the Loop a Must

Detecting incidents is probably the most difficult network security task to perform well and consistently. Did you know that fewer than one in five security incidents are detected by the affected organization? Most organizations only find out they’ve experienced an information security incident when law enforcement comes knocking on their door, if they find out about it at all. And that can be very bad for business in the present environment: customers are increasingly demanding stronger information security measures from their service providers and partners.

To have the best chance of detecting network security incidents, you need to record and monitor system activities. However, there is no easier way to shut down the interest of a network security or IT administrator than to say the word “monitoring”. You can just mention the word and their faces fall as if a rancid odor had suddenly entered the room! And I can’t say that I blame them. Most organizations still do not recognize the true necessity of monitoring, and so do not provide proper budgeting and staffing for the function. As a result, already fully tasked (and oftentimes inadequately prepared) IT or security personnel get stuck with the job. This not only breeds resentment, it virtually guarantees that the job will not be performed effectively.

But all is not gloom and doom. Many companies are reacting to the current business environment and devoting more resources to protecting their private information. In addition, the security industry is constantly developing new tools that streamline and remove much of the drudge work from the monitoring and detection tasks. I certainly recommend that businesses employ these tools to their full effect: log aggregation tools, parsers, artificial intelligence and whatever else is made available for these jobs.
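To give a feel for the drudge work these tools automate, here is a minimal parser that counts failed logins per source address in a syslog-style auth log. The log format, file name and threshold are assumptions to adapt to your environment.

```python
# Minimal sketch of a detection parser: scan syslog-style auth lines and
# flag sources with repeated failed logins. Format and threshold are
# assumptions; adjust for your environment.
import re
from collections import Counter

THRESHOLD = 5
pattern = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open("auth.log") as log:
    for line in log:
        m = pattern.search(line)
        if m:
            failures[m.group(1)] += 1

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"Investigate {ip}: {count} failed logins")
```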

However, it behooves us not to rely on these new magic bullets too much. As can be easily demonstrated from the history of security in general, there has never been a defense strategy that cannot be overcome by human cleverness and persistence. This continues to be demonstrably true in the world of information security.

My advice is to use the new tools to their maximum effectiveness, but to use them wisely. Only spend enough on the technology to accomplish the jobs at hand; don’t waste your money on redundant tools and capabilities. Instead, spend those savings on information security personnel and training. It will pay you well in the long run.

Revisiting Nuance Detection

The core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure. A simple, elegant example would be a motion sensor on a safe in your home, combined with something like your home alarm system.
 
A significant failure state would be for the motion sensor inside the safe to trigger while the home alarm system is set in away mode. When the alarm is in away mode, there should be no condition that triggers motion inside the safe. If motion is detected at any time, you might choose to alert in a minor way. But if the alarm is set to away mode, you might signal all kinds of calamity: flashing lights, bells and whistles, for example.
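Expressed as code, the rule is almost trivial. The sketch below models the safe-and-alarm logic just described; the names and modes are illustrative.

```python
# Sketch of the safe-sensor logic described above: motion alone is a minor
# note, but motion while the alarm is in away mode is a significant
# failure state worth every bell and whistle. Names are illustrative.
def evaluate(safe_motion: bool, alarm_mode: str) -> str:
    if safe_motion and alarm_mode == "away":
        return "CRITICAL: motion inside safe while nobody should be home"
    if safe_motion:
        return "minor: motion inside safe (occupant may have opened it)"
    return "quiet: no condition of interest"

print(evaluate(safe_motion=True, alarm_mode="away"))
print(evaluate(safe_motion=True, alarm_mode="home"))
```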
 
This same approach can apply to your network environment, applications or data systems. Define what a significant failure state looks like, and then create detection and alerting mechanisms, even if conditional, for the indicators of that state. It can be easy. 
 
I remember thinking more deeply about this for the first time when I saw Marcus Ranum give his network burglar alarm speech at Defcon, what seems like a thousand years ago now. That moment changed my life forever. Since then, I have always wanted to work on small detections: the most nuanced of fail states, the deepest signs of compromise. HoneyPoint™ came from that line of thinking, albeit many years later. (Thanks, Marcus, you are amazing! BTW.) 🙂
 
I’ve written about approaches to it in the past, too. Things like detecting web shells, detection in depth techniques and such. I even made some nice maturity and deployment models.
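For instance, a bare-bones web shell sweep can be as simple as the sketch below, which searches a hypothetical web root for strings commonly seen in PHP shells. Pattern lists like this are incomplete by nature, so treat hits as leads rather than verdicts.

```python
# Hedged example of a web shell sweep: look for script files containing
# strings commonly seen in PHP web shells. The indicator list and web
# root path are illustrative and incomplete.
import os

INDICATORS = (b"eval(base64_decode", b"passthru(", b"shell_exec(")
WEB_ROOT = "/var/www/html"  # hypothetical path

for dirpath, _dirs, files in os.walk(WEB_ROOT):
    for name in files:
        if name.endswith((".php", ".phtml")):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                data = f.read()
            if any(ind in data for ind in INDICATORS):
                print(f"Possible web shell indicator in {path}")
```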
 
This month, I will be revisiting nuance detection more deeply, creating some more content around it and speaking about it more openly. I’ll also cover how we have extended HoneyPoint with the Handler portion of HoneyPoint Agent in order to fully support event management and data handling into your security alerting systems from basic scripts and simple tools you can create yourself.
 
Stay tuned, and in the meantime, drop me a line on Twitter (@lbhuston) and let me know more about nuance detections you can think of or have implemented. I’d love to hear more about it. 

Network Segmentation versus Network Enclaving

As we have discussed in earlier blogs, network segmentation is the practice of splitting computer networks into subnets using combinations of firewalls, VLANs, access controls and policies & procedures. We have seen that the primary reason for segmenting networks is to prevent a simple perimeter breach from exposing the totality of an organization’s information assets. So what is the difference between network segmentation and network enclaving?

One of the differences is just the degree of segmentation you impose upon the network. Enclaves are more thoroughly segmented from the general network environment than usual. In fact, enclaving is sometimes just described as “enhanced network segmentation.”

Another difference between segmentation and enclaving is the primary threat enclaving strives to thwart: the internal threat. Although the preponderance of cyber-attacks comes from external sources such as hackers, cyber-criminals and nation states, many of the most devastating breaches originate from internal sources such as employees and trusted service providers. These internal breaches may be purposeful attacks or may simply be caused by employee error. Either way, they are just as devastating to an organization’s reputation and market share.

A rarely considered difference between enclaving and network segmentation is physical security. When fully controlling access to information assets based on the principle of need to know, it is not enough to just control logical access. It is necessary to restrict physical access to work areas and computer devices as well. These areas should be locked, and access by authorized personnel should be recorded and monitored. Visitors and service providers should be pre-approved and escorted when in protected areas.

An obvious problem with enclaving is that it is more difficult to implement and maintain than the usual information security measures. It requires more planning, more devices and more employee hours. So why should businesses trying to control expenditures put their resources into enclaving?

As an information security professional I would say that it should be done because it is the best way we know to protect information assets. But for many business concerns, the greatest benefit of true enclaving is in securing protected and regulated information such as payment card information, patient health records and personal financial information. If you employ enclaving to protect such assets, you are showing clients and regulators alike that your business is serious about securing the information in its charge. And in today’s business climate, that can be a very important differentiator indeed!

Network Knowledge and Segmentation

If you look at most cutting-edge network information security guidance, job #1 can be paraphrased as “Know Thy Network.” It is firmly recommended (and in much regulatory guidance demanded) that organizations keep up-to-date inventories of hardware and software assets present on their computer networks. In fact, the most current recommendation is that organizations utilize software suites that not only keep track of inventories, but monitor all critical network entities with the aim of detecting any hardware or software applications that should not be there.
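The heart of that detection is a simple diff between what the inventory says should be on the network and what is actually observed. The sketch below assumes two hypothetical one-MAC-per-line exports.

```python
# Simple sketch of the "detect what should not be there" idea: diff the
# hosts actually observed on the network against the approved inventory.
# Both input files are hypothetical one-MAC-per-line exports.
def load(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

approved = load("approved_macs.txt")   # from the asset inventory
observed = load("observed_macs.txt")   # e.g., from an ARP sweep

for mac in sorted(observed - approved):
    print(f"Unknown device on the network: {mac}")
for mac in sorted(approved - observed):
    print(f"Inventoried device not seen (missing?): {mac}")
```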

Another part of network knowledge is mapping data flows and trust relationships on networks, and mapping which business entities use which IT resources and information. For this knowledge, I like to go to my favorite risk management tool: the Business Impact Analysis (BIA). In this process, input comes from personnel across the enterprise detailing what they do, how they do it, what resources they need, what input they need, what output they produce and more (see MSI blog archives for more information about BIA and what it can do for your information security program).

About now, you are probably asking what all this has to do with network segmentation. The answer is that you simply must know where all your network assets are, who needs access to them and how they move before you can segment the network intelligently and effectively. It can all be summed up with one phrase: Need to Know. Need to know is the very basis of access control, and access control is what network segmentation is all about. You do not want anyone on your network to “see” information assets that they do not need to see in order to properly perform their business functions. And by the same token, you do not want network personnel to be cut off from information assets that they do need to perform their jobs. These are the reasons network knowledge and network segmentation go hand-in-hand.
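One way to keep segmentation honest against need to know is to check every proposed flow against a documented justification. The sketch below does exactly that with a hypothetical need-to-know matrix and rule list.

```python
# Sketch of checking proposed firewall rules against a need-to-know matrix:
# every allowed flow should map back to a documented business need. The
# matrix and rules here are hypothetical examples.
NEED_TO_KNOW = {
    ("marketing", "web-content"): "campaign publishing",
    ("accounting", "erp"): "ledger access",
}

proposed_rules = [("marketing", "erp"), ("accounting", "erp")]

for src, dst in proposed_rules:
    need = NEED_TO_KNOW.get((src, dst))
    if need:
        print(f"OK: {src} -> {dst} justified by '{need}'")
    else:
        print(f"Reject: {src} -> {dst} has no documented need to know")
```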

Proper network knowledge becomes even more important when you take the next step in network segmentation: enclaving. I will discuss segmentation versus enclaving in my next blog later this month.

Why Segment Your Network?

Network segmentation is the practice of splitting your computer network into subnetworks or network segments (also known as zoning). This is typically done using combinations of firewalls, VLANs, access controls and policies & procedures. Implementing network segmentation requires planning and effort, and it can entail some teething problems along the way as well. So why should it be done?

The number one reason is to protect the security of your network resources and information. When people first started to defend their homes and enterprises from attack, they built perimeter walls and made sure everything important was inside of those walls. They figured doing this would keep their enemies outside where they couldn’t possibly do any damage. This was a great idea, but unfortunately it had problems in the real world.

People found that the enemy only had to make one small hole in their perimeter defenses to be able to get at all of their valuables. They also realized that their perimeter defense didn’t stop evil insiders from wreaking havoc on their valuables. To deal with these problems, people started to add additional layers of protection inside of their outer walls. They walled off enclaves inside the outer defenses and added locks and guards to their individual buildings to thwart attacks.

This same situation exists now in the world of network protection. As network security assessors and advisors, we see that most networks we deal with are still “flat”; they are not really segmented, and access controls alone are left to prevent successful attacks from occurring. But in the real world, hacking into a computer network is all about gaining a tiny foothold on the network, then leveraging that access to navigate around it. The harder it is for attackers to see the resources they want and navigate to them, the safer those resources are. In addition, the more protections that hackers need to circumvent during their attacks, the more likely they are to be detected. It should also be noted that network segmentation works just as well against the internal threat; it is just as difficult for an employee to gain access to a forbidden network segment as it is for an Internet-based attacker.

Increased security is not the only advantage of network segmentation. Well-implemented segmentation can actually reduce network congestion, because there are fewer hosts, and thus less local traffic, per segment. In addition, segmentation helps you contain network problems by keeping local failures from affecting the rest of the network.

The business reasons for implementing network segmentation are becoming more apparent every day. Increasingly, customers are demanding better information security from the businesses they employ. If the customer has a choice between two very similar companies, they will almost assuredly pick the company with better security. Simply being able to say to your customers that your network is fully segmented and controlled can improve your chances of success radically.

Segmenting With MSI MachineTruth

Many organizations struggle to implement network segmentation and secure network enclaves for servers, industrial controls, SCADA or regulated data. MicroSolved, Inc. (“MSI”) has been helping clients solve information security challenges for nearly twenty-five years on a global scale. In helping our clients segment their networks and protect their traffic flows, we identified a better approach to solving this often untenable problem.

That approach, called MachineTruth™, leverages our proprietary machine learning and data analytics platform to support our industry-leading team of experts throughout the process. Our team performs offline analysis of configuration files, NetFlow data and traffic patterns to simplify the challenge. Instead of manual review by teams of network and systems administrators, MachineTruth takes automated deep dives into the data to provide real insight into how to segment, where to segment, what filtering rules need to be established and how those rules are functioning as they come online.

Our experts then work with your network and security teams, or one of our select MachineTruth Implementation Partners, to guide them through the process of installing and configuring filtering devices, detection tools and applications needed to support the segmentation changes. As the enclaves start to take shape, ongoing oversight is performed by the MSI team, via continual analytics and modeling throughout the segmentation effort. As the data analysis and implementation processes proceed, the controls and rules are optimized and transitioned to steady state maintenance.

Lastly, the MSI team works with the segmentation stakeholders to document, socialize and transfer knowledge to those who will manage and support the newly segmented network and its various enclaves for the long term. This last step is critical to ensuring that the network changes and segmentation initiatives remain in place in the future.

This data-focused, machine learning-based approach enables segmentation for even the most complex of environments. It has been used to successfully save hundreds of man-years of labor and millions of dollars in overhead costs. It has reduced the time to segment international networks from years to months, while significantly raising the quality and security of the new environments. It has accomplished these feats, all while reducing network downtime, outages and potentially dangerous misconfiguration issues.

If your organization is considering or in the process of performing network segmentation for your critical data, you should take a look at the MachineTruth approach from MSI. It could mean the difference between success and struggle for this critical initiative.