Surface Mapping Pays Off

You have heard us talk about surface mapping applications during an assessment before. You have likely even seen some of our talks about surface mapping networks as a part of the 80/20 Rule of InfoSec. But we wanted to discuss how that same technique extends into the physical world as well.

In the last few months, we have done a couple of engagements where the customer really wanted a clear and concise way to discuss physical security issues and possible controls, and to communicate that information to upper management. We immediately suggested a mind-map style approach, with photos for the icons where possible, and a heat map approach for expressing the levels of attack and compromise.

In one case, we surface mapped a utility substation. We showed pictures of the controls, pictures of the tools and techniques used to compromise them, and even shot some video demonstrating how easily some of the controls were overcome. The entire presentation was told as a story, and the points came across very well. The management team was engaged, the video piqued their interest, and they even took turns attempting to pick a couple of simple locks we had brought along. (Thanks to @sempf for the suggestion!) In my 20+ years of information security consulting, I have never seen a group of folks as engaged as this one. It was amazing and very well received.

Another way we applied similar mapping techniques was while assessing an appliance we had in the lab recently. We photographed the various ports, inputs and pinouts. We shot video of connecting to the device, and brought some headers and tools to the meetings to pass around and discuss. We used screenshots as slides to show what the engineers saw and did at each stage, and gave high-level overviews of why we did each step. The briefing went well again, and the customer was engaged and interested throughout our time together. In this case, we didn’t get to include a demo, but they loved it nonetheless. Their favorite part was the surface maps.

Mapping has proven its worth, over and over again, to our teams and our clients. We love creating these maps, and our clients love reading them. This is exactly how product designers, coders and makers should be engaged. We are very happy that they chose MSI and our lab services, and we look forward to many years of a great relationship!

Thanks for reading and reach out on Twitter (@lbhuston) or in the comments if you have any questions or insights to share.

MicroSolved Lab Services: A Secret from Behind the Locked Doors

One of the oddest, most fun and most secretive parts of MSI is our testing lab services. You don’t hear a lot about what happens back there, behind the locked doors, but that is because of our responsible disclosure commitments. We don’t often talk publicly about the testing we do in the lab, but it ranges from unreleased operating systems, applications and hardware devices to voting mechanisms, ICS/SCADA equipment and more. We also do a small amount of custom controls and application development for specific niche solutions.

Mostly though, the lab breaks things. We break things using a variety of electronic tools, custom hardware, bus/interface tampering, software hacking, and even some more fun (think fire, water & electric shock) kinds of scenarios. Basically, whatever threat model your devices or systems face, most of those threats can be modeled, examined, tested, simulated or otherwise tampered into place in the MSI labs.

Our labs have several segments, with a wide array of emulated environments. Some of the lab segments are virtualized; others are filled with discrete equipment, including many historical devices for cross-testing and regression assessments. Our electronics equipment also brings a set of capabilities for tampering with devices beyond the usual network focus: we often tamper with devices and find security issues well below the network stack. We can test a wide range of inputs, outputs and attack surfaces using state-of-the-art techniques and creatively devious approaches.

Our labs also include the ability to leverage HoneyPoint technology to project lab-tested equipment and software into parts of the Internet in very controlled simulations. Our models and HoneyPoint tools can be used to put forth fake attack surfaces into the crimestream on a global basis, identify novel attacks, model attack sources and provide truly deep threat metrics for entire systems, specific attack surfaces or components of systems. This data, and the capabilities and techniques it is based upon, are entirely proprietary and unique to MicroSolved.

If you would like to discuss how our lab services could assist your organization or if you have some stuff you want tested, get in touch. We would love to talk with you about some of the things we are doing, can do and some of the more creatively devious ideas we have for the future. 🙂

Drop us a line or give us a call today.  We look forward to engaging with you and as always, thanks for reading! 

Oracle CSO Online Interview

My interview with CSO Online became available over the weekend. It discusses vendor trust and the information security implications of the password security issues in the Oracle database. You can read more about it here. Thanks to CSO Online for thinking of us and including us in the article.

Columbus OWASP Meeting Presentation

Last week, I presented at the Columbus OWASP meeting on defensive fuzzing, tampering with production web applications as a defensive tactic and some of the other odd stuff we have done in that arena. 

The presentation was called “Hey, You Broke My Web Thingee :: Adventures in Tampering with Production” and I had a lot of fun giving the talk. The crowd interaction was excellent, and a lot of folks have asked for the slide deck from the talk, so I wanted to post it here.

If you missed the talk in person, feel free to reach out on Twitter (@lbhuston) and engage with me about the topics. I’d love to discuss them some more. Please support OWASP by joining it as a member. These folks do a lot of great work for the community and the local chapter is quite active these days! 

OWASP Talk Scheduled for Sept 13 in Columbus

I have finally announced my Columbus OWASP topic for the 13th of September (Thursday). I hope it turns out to be one of the most fun talks I have given in a long while. I am really excited about the chance to discuss some of this in public. Here’s the abstract:

Hey, You Broke My Web Thingee! :: Adventures in Tampering with Production

Abstract:
The speaker will tell a few real world stories about practical uses of his defensive fuzzing techniques in production web applications. Examples of fighting with things that go bump in the web to lower deployment costs and curb unexpected application errors and illicit behavior will be explained in some detail. Not for the “play by the book” web team, these techniques touch on unconventional approaches to defending web applications against common (and not so common) forms of waste, fraud and abuse. If the “new Web” is a thinking admin’s game, unconventional wisdom from the trenches might just be the game changer you need.

You can find out more about attending here. Hope to see you in the crowd!

PS – I’ll be sharing the stage with Jim Manico from White Hat Security, who is always completely awesome. So, come out and engage with us!

Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against exposed Windows Terminal Services from the Internet. Many folks commented to me on Twitter about some of the things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best suggestions are to eliminate the exposures altogether by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication.

Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins, that of a specific jump host, etc.). This can go a long way toward minimizing the frequency of interaction with the attack surfaces by random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
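As a rough illustration, on a Linux-based firewall such rules might look like the following sketch. The address range 203.0.113.0/24 is a placeholder standing in for your admins' source addresses or jump host; substitute your own, and adapt the syntax to whatever firewall you actually run:

```shell
# Allow RDP (3389/TCP) only from the trusted admin range...
# (203.0.113.0/24 is a placeholder -- use your own source addresses)
iptables -A INPUT -p tcp --dport 3389 -s 203.0.113.0/24 -j ACCEPT
# ...and drop RDP traffic from everywhere else.
iptables -A INPUT -p tcp --dport 3389 -j DROP
```

The same pattern applies to hardware firewalls and cloud security groups: permit 3389/TCP from a short allow-list, deny it by default.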

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing of the tool showed it to be quite useful in determining the configuration of exposed Terminal Services and in creating a path for hardening them wherever deployed. (Keep in mind, it is likely useful to harden the Terminal Services implementations internally to critical systems as well…)
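For reference, rdp-sec-check is a Perl script, and a basic run against a single host looks roughly like the sketch below. The host address is a placeholder, and the exact flags may differ by version, so check the script's own usage output before relying on this:

```shell
# Audit the RDP configuration of a single host (192.0.2.10 is a placeholder)
./rdp-sec-check.pl 192.0.2.10
```

The output summarizes the encryption levels and security protocols the Terminal Services endpoint supports, which gives you a concrete hardening checklist.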

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful to audit multiple customer implementations, as well as to check RDP exposures during penetration testing engagements. 

Thanks to Portcullis for making this tool available. Hopefully, between this tool for hardening your deployments and our advice for minimizing the exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This video was randomly pulled from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.


Yandex.ru Indexing Crawler Issues

The yandex.ru crawler is an indexing application that spiders hosts and puts the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine database. Usually, these activities cause few issues for the sites being indexed, and in fact, over the years an etiquette system based on rules placed in a site’s robots.txt file has emerged.

Robots.txt files provide a rule set for search engine behaviors. They indicate what areas of a site a crawler may index and what sections of the site are to be avoided. Usually this is used to protect overly dynamic areas of the site where a crawler could encounter a variety of problems or inputs that cause bandwidth or application issues for the crawler, the web host, or both.
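A minimal robots.txt expressing those rules might look like the sketch below. The paths are hypothetical examples of the dynamic areas mentioned above, and note that Crawl-delay is a non-standard directive: some crawlers (Yandex and Bing among them) honor it, while others ignore it entirely:

```
User-agent: *
Disallow: /search/
Disallow: /cart/
Crawl-delay: 10
```

Of course, as the rest of this post discusses, robots.txt is purely an honor system.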

Sadly, many web crawlers and index bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack reasons. Given the impacts that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about yandex.ru and their aggressive parsing, application interaction and deep site inspection techniques. They clearly have been identified as a search engine that does not seem to respect the honor system of robots.txt. A Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, they either never requested the robots.txt file at all, or they simply ignored its contents altogether. In fact, some of our HITME web applications have experienced the same high traffic cost concerns that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting the yandex.ru scans represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook, and it takes continual effort to parse their traffic out of our alert metrics.
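If you want to estimate a crawler's share of your own traffic, a short script can tally requests by user agent in a combined-format access log. This is an illustrative sketch, not part of our HITME tooling, and the sample log lines below are fabricated:

```python
import re

def crawler_share(log_lines, needle="YandexBot"):
    """Return the fraction of requests whose user-agent field contains `needle`."""
    # In combined log format, the user agent is the final quoted field on the line.
    ua_pattern = re.compile(r'"([^"]*)"\s*$')
    total = hits = 0
    for line in log_lines:
        match = ua_pattern.search(line)
        if not match:
            continue  # skip lines that don't end in a quoted field
        total += 1
        if needle.lower() in match.group(1).lower():
            hits += 1
    return hits / total if total else 0.0

# Fabricated sample lines in combined log format:
sample = [
    '198.51.100.1 - - [01/Sep/2012:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; YandexBot/3.0)"',
    '203.0.113.7 - - [01/Sep/2012:10:00:01 +0000] "GET /a HTTP/1.1" 200 256 "-" "Mozilla/5.0 (Windows NT 6.1)"',
    '198.51.100.1 - - [01/Sep/2012:10:00:02 +0000] "GET /b HTTP/1.1" 200 128 "-" "Mozilla/5.0 (compatible; YandexBot/3.0)"',
]
print(crawler_share(sample))  # 2 of the 3 sample requests are from YandexBot
```

Run against a real access log (one line per request), a result above 0.30 would match the 30+% figure we observed on some HITME end points.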

Techniques for blocking yandex.ru more forcibly than robots.txt have emerged. You can learn about some of them by searching “blocking yandex.ru”. The easiest, and what has proven to be an effective, approach is to use .htaccess rules. We’ve also had some more modest success with forcibly returning redirects to requests with known URL parameters associated with yandex.ru, along with some level of success by blocking specific IPs associated with them via an ignore rule in HoneyPoint.
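One common .htaccess pattern is to match the crawler's user-agent string and deny it outright. This sketch uses Apache 2.2-style access directives; the "YandexBot" string is how Yandex identifies its crawler, but verify against your own logs before deploying:

```
# Tag requests whose User-Agent identifies the Yandex crawler...
BrowserMatchNoCase "YandexBot" bad_bot
# ...and refuse them (Apache 2.2-style access control)
Order Allow,Deny
Allow from all
Deny from env=bad_bot
```

Keep in mind that user-agent strings are trivially spoofed, so this blocks the well-behaved instances of the crawler rather than a determined attacker.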

If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!

Which Application Testing is Right for Your Organization?

Millions of people worldwide bank, shop, buy airline tickets, and perform research using the World Wide Web. Each transaction usually includes sharing private information such as names, addresses, phone numbers, credit card numbers, and passwords. This information is routinely transferred and stored in a variety of locations. Billions of dollars and millions of personal identities are at stake every day. In the past, security professionals thought firewalls, Secure Sockets Layer (SSL), patching, and privacy policies were enough to protect websites from hackers. Today, we know better.

Whatever your industry, you should have a consistent testing schedule carried out by a security team. Scalable technology allows such a team to quickly and effectively identify your critical vulnerabilities and their root causes in nearly any type of system, application, device or implementation.

At MSI, our reporting presents clear, concise, action-oriented mitigation strategies that allow your organization to address the identified risks at the technical, management and executive levels.

There are several ways to strengthen your security posture. These strategies can help: application scanning, application security assessments, application penetration testing, and risk assessments.

Application scanning can provide an excellent and affordable way for organizations to meet the requirements of due diligence; especially for secondary, internal, well-controlled or non-critical applications.

Application security assessments can identify security problems, catalog their exposures, measure risk, and develop mitigation strategies that strengthen your applications for your customers. This is a more complete solution than a scan since it goes deeper into the architecture.

Application penetration testing uses tools and scripts to mine your systems for data and to examine underlying session management and cryptography.

Risk assessments review all of the policies and processes associated with the specific application, with the depth of review depending on the complexity of your organization.

In order to protect your organization against security breaches (which are only increasing in frequency), consider conducting an application scan, application security assessment, application penetration test, or risk assessment on a regular basis. If you need help deciding which choice is best for you, let us know. We’re here to help!