Network Segmentation: A Best Practice We Should All Be Using

It would be nice to be able to say that we are winning the war; that network security efforts are slowly getting the better of the bad guys. But I can't do that. Despite all the money being thrown at security tools and hosted services, the cyber-thugs are improving their game at a faster rate than we are. The ten worst known cyber security breaches of this century have all taken place since 2008, and 2013 and 2014 are notorious for their information security incidents.

I think there are a multitude of reasons for this state of affairs. One is confusion, indecisiveness and slow reaction times among regulatory bodies and standards providers. Another is the "check the box" compliance mentality that exists both in government agencies and in the private sector. A third is simply the insane rate of innovation in the information technology realm. There are many more. But despite the reasons, one thing is clear: we have to stop rigidly complying with baseline standards and move into the more flexible and effective world of best practices. And today the best practice I want to touch on is network segmentation.

In our business we see a lot of computer networks that are just flat. There is little or no network segmentation and anyone on the inside can pretty much see everything. I can't begin to tell you how easy this kind of setup makes it for us during penetration testing: success is virtually assured! And it's amazing how even just basic network segmentation can slow us down or stop us altogether.

A good reason to start with network segmentation is that you can go at it in easy stages. Maybe you can begin by segmenting off a separate development or test network. Those are pretty basic and can give your networking team some valuable experience for more difficult efforts to come. Then you can ensure that "user space" is separated from "server space". Doing just that much can have an amazing effect – it really helps to thwart successful cyber-attacks.
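Even just writing the intended segments down as address ranges gives you something you can sanity-check before touching a switch. The sketch below uses Python's standard ipaddress module to do exactly that; the subnet values are purely hypothetical placeholders, not a recommendation:

```python
import ipaddress

# Hypothetical segmentation plan: one subnet per segment.
SEGMENTS = {
    "user": ipaddress.ip_network("10.10.0.0/16"),
    "servers": ipaddress.ip_network("10.20.0.0/16"),
    "dev-test": ipaddress.ip_network("10.30.0.0/16"),
}

def segment_of(host):
    """Return the name of the segment a host belongs to, or None."""
    addr = ipaddress.ip_address(host)
    for name, network in SEGMENTS.items():
        if addr in network:
            return name
    return None
```

Once a table like this exists, the same data can drive firewall rules and VLAN assignments, and any host that falls into no segment stands out immediately.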

As the team gains confidence in their abilities, they can move on to the next step: real enclaving of the network. This is anything but a trivial effort, and it requires detailed knowledge of the various functions of the different business departments and how information moves into and out of each one of them (a task made very much easier if the company has a good business continuity program and business impact analysis in place). But in the long run these efforts will be well worth the trouble. It is very difficult indeed to gain access to or exfiltrate information from a well-enclaved network, especially from the Internet.

This blog post was contributed by John Davis.

Using TigerTrax to Analyze Device Configurations & Discover Networks

One of the biggest challenges that our M&A clients face is discovering what networks look like, how they are interconnected and what assets are priorities in their newly acquired environments. Sure, you bought the company and the ink is drying on the contracts — but now you have to fold their network into yours, make sure they meet your security standards and double check to make sure you know what’s out there.

That’s where the trouble begins. Because, in many cases, the result is “ask the IT folks”. You know, the already overworked, newly acquired, untrusted and now very nervous IT staff of the company you just bought. Even if they are honest and expedient, they often forget some parts of the environment or don’t know themselves that parts exist…

Thus, we get brought in, as a part of our Information Security Mergers & Acquisitions practice. Our job is usually to discover assets, map the networks and perform security assessments to identify gaps that don’t meet the acquiring company’s policies. Given that we have had to do this so often, we have designed a great new technique for performing these types of mapping and asset identification engagements. For us, instead of asking the humans, we simply ask the machines. We accumulate the router, switch, firewall and other device configurations and then leverage TigerTrax’s unique analytics capabilities to quickly establish network instances, interconnections, prioritized network hosts & segments, common configuration mistakes, etc. “en masse”. TigerTrax then outputs that data for the MSI analysts, who can quickly perform their assessments, device reviews and inventories — armed with real-world data about the environment!
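TigerTrax itself is proprietary, so the following is only a toy sketch of the general idea rather than its actual implementation: pulling interface addresses out of Cisco-style device configurations and deriving the attached networks. The regex and the sample config format are assumptions for illustration:

```python
import ipaddress
import re

# Matches lines like "ip address 10.1.1.1 255.255.255.0" in a
# Cisco-style configuration dump.
ADDR_RE = re.compile(r"^\s*ip address (\S+) (\S+)", re.MULTILINE)

def interface_networks(config_text):
    """Return the networks attached to each configured interface."""
    return [
        ipaddress.ip_interface(f"{addr}/{mask}").network
        for addr, mask in ADDR_RE.findall(config_text)
    ]
```

Feeding every collected router, switch and firewall config through a parser along these lines yields a first-pass inventory of subnets to map and assess, without relying on anyone's memory of the environment.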

This approach has been winning us client kudos again and again!

Want to discuss our M&A practice and the unique ways that TigerTrax and MSI can help you before, during and after a merger or acquisition? Give us a call at (614) 351-1237 or drop us a line at info (at) microsolved /dot/ com. We’d be happy to schedule a FREE, no commitment & no pressure call with our Customer Champions & our security engineers.

Indexing Crawler Issues

The crawler is an indexing application that spiders hosts and puts the results into the search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine database. Usually, these types of activities cause few issues for those whose sites are being indexed, and in fact, over the years an etiquette system based on rules placed in the robots.txt file of a web site has emerged.

Robots.txt files provide a rule set for search engine behaviors. They indicate what areas of a site a crawler may index and what sections of the site are to be avoided. Usually this is used to protect overly dynamic areas of the site where a crawler could encounter a variety of problems or inputs that can have either bandwidth or application issues for either the crawler, the web host or both. 
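For the curious, Python's standard library ships a parser for these rules, which makes it easy to see how a well-behaved crawler is supposed to interpret a robots.txt file. The rule set below is a made-up example:

```python
from urllib.robotparser import RobotFileParser

# A minimal, hypothetical robots.txt: keep all crawlers out of /search/,
# a typical "overly dynamic" area of a site.
rules = [
    "User-agent: *",
    "Disallow: /search/",
]

rp = RobotFileParser()
rp.parse(rules)

# A polite crawler checks before fetching each URL.
print(rp.can_fetch("AnyBot", "https://example.com/search/q"))  # disallowed
print(rp.can_fetch("AnyBot", "https://example.com/about"))     # allowed
```

The whole system depends on the crawler bothering to make that check — which, as described below, not all of them do.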

Sadly, many web crawlers and index bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack reasons. Given the impacts that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about one crawler and its aggressive parsing, application interaction and deep site inspection techniques. It has clearly been identified as a search engine that does not seem to respect the honor system of robots.txt. A Google search for the crawler's name along with “ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by this crawler from a variety of IP ranges. In the majority of them, it either never requested the robots.txt file at all, or simply ignored the contents of the file altogether. In fact, some of our HITME web applications have experienced the same high traffic cost concerns that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting these scans represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook and in our attention span, to continually parse their alert traffic out of our metrics.

Techniques for blocking the crawler more forcibly than robots.txt allows have emerged. You can learn about some of them by searching for the crawler's name along with “blocking”. The easiest approach, and what has proven to be an effective one, is to use .htaccess rules. We’ve also had some more modest success with forcibly returning redirects to requests with known URL parameters associated with the crawler, along with some level of success by blocking specific IPs associated with it via an ignore rule in HoneyPoint.
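An .htaccess rule set along the following lines can refuse such requests outright. This is only a sketch for Apache with mod_rewrite enabled; “BadBot” and the IP range are placeholders, since the real user-agent string and source addresses of the offending crawler will vary:

```apache
# Refuse requests whose User-Agent matches the offending crawler.
# "BadBot" is a placeholder -- substitute the actual agent string.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
RewriteRule .* - [F,L]

# Alternatively, block the crawler's known source addresses.
# (Apache 2.4 syntax; the IP range here is purely illustrative.)
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
</RequireAll>
```

User-agent matching is trivially evaded by a crawler that lies about itself, which is why combining it with IP-based blocking tends to work better in practice.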

If you are battling this kind of crawling and want some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!