Leaking RFC1918 IP Addresses to the Internet

There has been a lot of conversation with clients about exposing internal DNS information to the public Internet lately. 

There are some security considerations, and a lot of the arguments often devolve into debates about security-by-obscurity controls. My big problem with the leakage of internal DNS data to the Internet is that I hypothesize it attracts attacker interest. That is, when I see it at a client company, I often immediately assume they have immature networking practices and wonder what other, deeper security issues are present. It makes me pay closer attention during my pen-testing work and dig deeper for other subtle holes. I am guessing it does the same for attackers. 

Of course, I don’t have any real data to back that up. Maybe someone out there has run some honeypots with and without such leakage and then measured the aggregate risk difference between the two scenarios, but I doubt it. Most folks aren’t given to obsess over modeling like I am, and that is likely a good thing.

It turns out though, that there are other concerns with exposed internal DNS information. Here are a few links to those discussions, and there are several more on the NANOG mailing list from the past several years.

Server Fault, Quora, and, of course, RFC 1918 itself, which says you shouldn’t leak them. 🙂 

So, you might wanna check and see if you have these exposures, and if so, and you don’t absolutely need them, then remove them. It makes you potentially safer, and it makes the Internet a nicer place. 🙂 
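If you want a quick way to check, here is a minimal sketch (assuming the dnspython library and a public resolver) that flags hostnames whose public A records answer with RFC 1918 space. The hostnames are hypothetical placeholders; substitute names from your own zones.

```python
# Minimal sketch: flag hostnames whose *public* DNS answers fall in RFC 1918 space.
# Assumes dnspython 2.x is installed: pip install dnspython
import ipaddress
import dns.exception
import dns.resolver

# Hypothetical hostnames to test -- replace with names from your own zones.
HOSTNAMES = ["intranet.example.com", "vpn.example.com", "mail.example.com"]

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]  # ask a public resolver, not your internal one

for name in HOSTNAMES:
    try:
        answers = resolver.resolve(name, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        continue  # nothing published publicly for this name
    for rdata in answers:
        ip = ipaddress.ip_address(rdata.address)
        if ip.is_private:  # is_private covers RFC 1918 (plus loopback/link-local)
            print(f"Possible leak: {name} publicly resolves to {ip}")
```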

If you have an actual use for leaking them to the public Internet, I would love to hear more about it. Hit me up on Twitter and let me know about it. I’ll write a later post with some use scenarios if folks have them. 

Thanks for reading! 

Quick Look at Ransomware Content

Ransomware certainly is a hot topic in information security these days. I thought I would take a few moments and look at some of the content out there about it. Here are some quick and semi-random thoughts on what I saw.

  • It is very difficult to find an article on ransomware that scores higher than 55% on objectivity. Lots of marketing going on out there.
  • I used the new “Teardown” rapid learning tool I built to analyze 50 of the highest ranked articles on ransomware. Most of that content is marketing, even from vendors not associated with information security or security in general. Lots of product and service suggestive selling going on…
  • Most common tip? Have good and frequent backups. It helps if you make sure they restore properly (a quick restore-check sketch follows this list).
  • Most effective tip, IMHO? Have strong egress controls. It helps if you have detective controls and processes that are functional and effective.
  • Worst ransomware tip from the sample? Use a registry hack across all Windows machines to prevent VBS execution. PS – Things might break…
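On that backup tip, verifying a test restore does not have to be elaborate. Here is a minimal sketch, assuming you have a source directory and a directory where a test restore landed, that simply compares SHA-256 hashes of the two trees; the paths are hypothetical.

```python
# Minimal sketch: verify a test restore by comparing SHA-256 hashes of two directory trees.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

# Hypothetical paths -- point these at live data and a freshly restored copy.
source = tree_hashes(Path("/data/finance"))
restored = tree_hashes(Path("/restore-test/finance"))

missing = set(source) - set(restored)
changed = {name for name in source.keys() & restored.keys() if source[name] != restored[name]}

print(f"{len(missing)} files missing from the restore, {len(changed)} files differ")
```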

Overall, it is clear that tons of vendors are using ransomware and WannaCry as a marketing bandwagon. That should make you very suspicious of anything you read, especially content that seems vendor or product specific. If you need a set of good information to use when presenting ransomware to your board or management team, I thought the Wikipedia article here was pretty decent. Pay attention to where you get your information from, and until next time, stay safe out there!

State Of Security Podcast Episode 13 Is Out

Hey there! I hope your week is off to a great start.

Here is Episode 13 of the State of Security Podcast. This new “tidbit” format comes in under 35 minutes and features some pointers on unusual security questions you should be asking cloud service providers. 

I also provide a spring update about my research, where it is going and what I have been up to over the winter.

Check it out and let me know what you think via Twitter.

SilentTiger Targeted Threat Intelligence Update

Just a quick update on SilentTiger™, our passive security assessment and intelligence engine. 

We have released a new version of the platform to our internal team, and this new version automatically builds the SilentTiger configuration for our analysts. That means that clients using our SilentTiger offering no longer have to provide anything more than a list of domain names to engage the process. 

This update also includes a host inventory mechanism and a new data point: who operates the IP addresses identified. This is very useful for identifying the cloud providers that a given set of targets is using, and it makes it much easier to find industry clusters of service providers that could be a risk to the supply chain.
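If you want to experiment with that kind of ownership data yourself, here is a minimal sketch that queries the public RDAP bootstrap service at rdap.org. This is just an illustration, not how SilentTiger works internally; the IP addresses are placeholders, and the response fields it prints vary by regional registry.

```python
# Minimal sketch: look up who operates an IP address via public RDAP.
# Assumes the requests library is installed: pip install requests
import requests

# Hypothetical addresses -- replace with IPs discovered for your targets.
ADDRESSES = ["93.184.216.34", "151.101.1.69"]

for ip in ADDRESSES:
    resp = requests.get(f"https://rdap.org/ip/{ip}", timeout=10)
    if resp.status_code != 200:
        print(f"{ip}: no RDAP answer ({resp.status_code})")
        continue
    data = resp.json()
    # Field names vary by registry; "name" and "handle" are common but not guaranteed.
    print(f"{ip}: {data.get('name', 'unknown')} ({data.get('handle', 'n/a')})")
```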

For more information about using SilentTiger to perform ongoing assessments for your organization, your M&A prospects, your supply chain or as a form of industry intelligence, simply get in touch. Clients ranging from global to SMB and across a wide variety of industries are already taking advantage of the capability. Give us 20 minutes, and we’ll be happy to explain! 

Want Better Infosec? Limit Functionality and Visibility

We humans are great at exploiting and expanding new technologies, but we often jump in with both feet before we fully understand the ramifications of what we are doing. I cite the Internet itself. The ARPANET and the TCP/IP suite were designed to enable and enhance communications between people, not restrict them. The idea of security was barely considered at the beginning and was never a part of the design. Unfortunately, by the time we realized this fact, the Internet was already going great guns and it was too late to change it.

The same thing happened with personal computers. Many businesses found it was cheaper and easier to exploit this new technology than to stay with the mainframe. So they jumped right in, bought off-the-shelf devices and operating systems, networked them together and voila! Business heaven!

Unfortunately, there was a snake in the garden. These computers and operating systems were not designed with businesses, and their attendant need for security, in mind. Such commercial systems have all kinds of functionalities and “features” that are not only useless for business purposes, they are pure gold for hackers.

As with the Internet, once people understood the security dangers of using these products, their use was ingrained and change was practically impossible. All we can do now, at least until these basic flaws are corrected, is try to work around them. One way to make a good start at this is to limit what these systems can do as much as is possible; if it doesn’t have a business function it should be turned off or removed.

For example, why should most employees have the ability to browse the Internet or check their social networking sites on their business systems? Few employees actually need this functionality, and those who do should be strictly limited and monitored. Almost all job descriptions could get by with a handful of whitelisted websites, and those who truly do need full Internet access should have their own subnet. How many employees these days don’t have a smartphone in their pocket? Can’t they go to Facebook or check their bank account on that?

There are also many other examples of limiting the functionality of business devices and applications. USB ports, card readers and disc drives are not necessary for most job descriptions. How about all those lovely services and features found in many commercial software applications and operating systems? Why not turn off as many of those as possible? There are lots of things that can be disabled using Active Directory Group Policy.

In addition to limiting what systems and people can do, it is also a very good security idea to limit what they can see. Access to information, applications and devices should be strictly based on need to know. And in addition to information, users should not be able to see across the network. Why should a user in workstation space have the ability to see into server space? Why should marketing personnel have access to accounting information? This means good network segmentation with firewalls, logging and monitoring between the segments. Do whatever you can to limit what systems can see and do, and I guarantee you will immediately see the security benefits.
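As a rough illustration of limiting what systems can see, here is a minimal sketch that tries to open TCP connections from a workstation toward a few server-segment ports; anything that connects is reachable and worth questioning. The target address and port list are hypothetical placeholders for your own environment.

```python
# Minimal sketch: check which server-segment ports are reachable from this workstation.
import socket

# Hypothetical server-segment host and ports -- substitute your own.
SERVER_HOST = "10.20.0.15"
PORTS = [22, 445, 1433, 3389]

for port in PORTS:
    try:
        with socket.create_connection((SERVER_HOST, port), timeout=2):
            print(f"{SERVER_HOST}:{port} is reachable -- should this segment see it?")
    except OSError:
        print(f"{SERVER_HOST}:{port} appears blocked or closed")
```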

Nuance Detection: Not Always an Electronic Problem

This month’s theme is nuance detection. As Brent stated in his blog earlier this month, “the core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure.” When IT oriented people think about this, their minds naturally gravitate to heuristics; how can we establish reliable “normal” user behavior and thereby more easily catch anomalies? And that is as it should be.

But it should also be noted that these “situations that should not exist” are not limited only to cyber events that can be detected and monitored electronically. There are also programmatic and procedural situations that can lead to system compromise and data breach. These need to be detected and corrected too.

One such possible programmatic snafu that could lead to a significant security failure is a lack of proper access account monitoring and oversight procedures. Attackers often create new user accounts or, even better for them, take over outdated or unused access accounts that already exist. These accounts are preferable because there are no active users to notice anomalous activity, and to intrusion detection systems everything seems normal.

I can’t stress enough the importance of overseeing the access account creation, monitoring and retirement process. The account initiation and approval process needs to be strong, the identification process needs to be strong, the monitoring and retirement processes need to be strong, and the often-ignored oversight process needs to be strong. A failure of any one of these processes can lead to illicit access, and when all is said and done, access is the biggest part of the game for the attacker.
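As a small example of what that monitoring can look like in practice, here is a minimal sketch that reads a hypothetical CSV export of accounts and last-login dates and flags anything unused for more than 90 days; the file name, column names and threshold are all assumptions.

```python
# Minimal sketch: flag access accounts with no login activity in the last 90 days.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)
now = datetime.now()

# Hypothetical export with columns: username,last_login (ISO dates, e.g. 2017-05-01)
with open("account_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        last_login = datetime.strptime(row["last_login"], "%Y-%m-%d")
        if now - last_login > STALE_AFTER:
            print(f"Stale account: {row['username']} (last login {row['last_login']})")
```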

Another dangerous procedural security problem is the system user who makes lots of errors with security repercussions, or who just can’t seem to follow the security rules. Maybe they are harried and stressed, maybe just forgetful. Or perhaps they think the whole “security thing” is a waste of their time. But whatever the reasons, these foci of security incidents need to be detected and corrected just like any other security problem.

And once again, there should be regular processes in place for dealing with these individuals. Records of security and compliance errors should be kept in order to facilitate detection of transgressors. Specific, hierarchical procedures should be put in place for addressing the problem, including levels of discipline and how they should be imposed. And once again, there should be an oversight component to the process to ensure it is being carried out properly.

These are just a couple of the programmatic and procedural security situations that demand detection and correction. I’m sure there are many more. So my advice is to look at your security situation holistically and not just from the high tech point of view.

 

Detection: Humans in the Loop a Must

Detecting incidents is probably the most difficult network security task to perform well and consistently. Did you know that fewer than one out of five security incidents are detected by the organization being affected? Most organizations only find out they’ve experienced an information security incident when law enforcement comes knocking on their door, if they find out about it at all, that is. And that can be very bad for business in the present environment. Customers are increasingly demanding stronger information security measures from their service providers and partners.

In order to have the best chance of detecting network security incidents, you need to record and monitor system activities. However, there is no easier way to shut down the interest of a network security or IT administrator than to say the word “monitoring”. You can just mention the word and their faces fall as if a rancid odor had suddenly entered the room! And I can’t say that I blame them. Most organizations still do not recognize the true necessity of monitoring, and so do not provide proper budgeting and staffing for the function. As a result, already fully tasked (and often inadequately prepared) IT or security personnel are handed the job. This not only leads to resentment, but also virtually guarantees that the job will not be performed effectively.

But all is not gloom and doom. Many companies are reacting to the current business environment and are devoting more resources to protecting their private information. In addition, the security industry is constantly developing new tools that help streamline and remove much of the drudge work from the monitoring and detection tasks. And I surely recommend that businesses employ these tools to their full effect. Use log aggregation tools, parsers, artificial intelligence and whatever else is made available for these jobs.
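To make the “parsers” point concrete, here is a minimal sketch of the kind of drudge work those tools automate: scanning an auth log for repeated failed logins per source address. The log path, line format and threshold are assumptions; real aggregation platforms do far more.

```python
# Minimal sketch: count failed SSH logins per source IP from a syslog-style auth log.
import re
from collections import Counter

THRESHOLD = 10  # alert when a single source exceeds this many failures
failures = Counter()

# Hypothetical log path and a common (but not universal) sshd failure pattern.
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
with open("/var/log/auth.log") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for source, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"Possible brute force: {count} failed logins from {source}")
```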

However, it behooves us not to rely on these new magic bullets too much. As can be easily demonstrated from the history of security in general, there has never been a defense strategy that cannot be overcome by human cleverness and persistence. This continues to be demonstrably true in the world of information security.

My advice is to use the new tools to their maximum effectiveness, but to use them wisely. Only spend enough on the technology to accomplish the jobs at hand; don’t waste your money on redundant tools and capabilities. Instead, spend those savings on information security personnel and training. It will pay you well in the long run.

Revisiting Nuance Detection

The core of nuance detection is to extend alerting capabilities into finding situations that specifically should not exist, and if they happen, would indicate a significant security failure. A simple, elegant example would be a motion sensor on a safe in your home, combined with something like your home alarm system.
 
A significant failure state would be for the motion sensor inside the safe to trigger while the home alarm system is set in away mode. When the alarm is in away mode, there should be no condition that triggers motion inside the safe. If motion is detected at any time, you might choose to alert in a minor way. But if the alarm is set to away mode, you might signal all kinds of calamity, with flashing lights, bells and whistles, for example.
 
This same approach can apply to your network environment, applications or data systems. Define what a significant failure state looks like, and then create detection and alerting mechanisms, even if conditional, for the indicators of that state. It can be easy. 
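Here is the home-safe example reduced to code, just to show how little logic a nuance detection really needs; the event names and alert strings are placeholders for whatever your alarm system or SIEM actually exposes.

```python
# Minimal sketch: conditional alerting for a "should never happen" state.
def handle_safe_motion(alarm_mode: str) -> str:
    """Return the alert level for motion detected inside the safe."""
    if alarm_mode == "away":
        # Nobody should be home, let alone inside the safe: significant failure state.
        return "CRITICAL: page everyone, flashing lights, bells and whistles"
    # Motion while someone is home is merely interesting.
    return "INFO: log it and move on"

# Hypothetical events -- in practice these would come from your sensors or SIEM.
print(handle_safe_motion("away"))  # CRITICAL ...
print(handle_safe_motion("home"))  # INFO ...
```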
 
I remember thinking more deeply about this for the first time when I saw Marcus Ranum give his network burglar alarm speech at Defcon, what seems like a thousand years ago now. That moment changed my life forever. Since then, I have always wanted to work on small detections. The most nuanced of fail states. The deepest signs of compromise. HoneyPoint™ came from that line of thinking, albeit many years later. (Thanks, Marcus, you are amazing, BTW!) 🙂
 
I’ve written about approaches to it in the past, too. Things like detecting web shells, detection in depth techniques and such. I even made some nice maturity and deployment models.
 
This month, I will be revisiting nuance detection more deeply, creating some more content around it and speaking about it more openly. I’ll also cover how we have extended HoneyPoint with the Handler portion of HoneyPoint Agent, in order to fully support feeding event management and data from basic scripts and simple tools you can create yourself into your security alerting systems. 
 
Stay tuned, and in the meantime, drop me a line on Twitter (@lbhuston) and let me know more about nuance detections you can think of or have implemented. I’d love to hear more about it. 

Network Segmentation versus Network Enclaving

As we have discussed in earlier blogs, network segmentation is the practice of splitting computer networks into subnets using combinations of firewalls, VLANs, access controls and policies & procedures. We have seen that the primary reason for segmenting networks is to prevent a simple perimeter breach from exposing the totality of an organization’s information assets. So what is the difference between network segmentation and network enclaving?

One of the differences is just the degree of segmentation you impose upon the network. Enclaves are more thoroughly segmented from the general network environment than usual. In fact, enclaving is sometimes just described as “enhanced network segmentation.”

Another difference between segmentation and enclaving is the primary threat enclaving strives to thwart: the internal threat. Although the preponderance of cyber-attacks come from external threat sources such as hackers, cyber-criminals and nation states, many of the most devastating breaches originate from internal sources such as employees and trusted service providers. These internal information security breaches may be either purposeful attacks or may simply be caused by employee error. Either way, they are just as devastating to an organization’s reputation and business share.

A rarely considered difference between enclaving and network segmentation is physical security. When fully controlling access to information assets based on the principle of need to know, it is not enough to just control logical access. It is necessary to restrict physical access to work areas and computer devices as well. These areas should be locked, and access by authorized personnel should be recorded and monitored. Visitors and service providers should be pre-approved and escorted when in protected areas.

An obvious problem with enclaving is that it is more difficult to implement and maintain than the usual information security measures. It requires more planning, more devices and more employee hours. So why should businesses trying to control expenditures put their resources into enclaving?

As an information security professional I would say that it should be done because it is the best way we know to protect information assets. But for many business concerns, the greatest benefit of true enclaving is in securing protected and regulated information such as payment card information, patient health records and personal financial information. If you employ enclaving to protect such assets, you are showing clients and regulators alike that your business is serious about securing the information in its charge. And in today’s business climate, that can be a very important differentiator indeed!