Oracle CSO Online Interview

My interview with CSO Online became available over the weekend. It discusses vendor trust and the information security implications of the password security issues in the Oracle database. You can read more about it here. Thanks to CSO Online for thinking of us and including us in the article.

Columbus OWASP Meeting Presentation

Last week, I presented at the Columbus OWASP meeting on defensive fuzzing, tampering with production web applications as a defensive tactic, and some of the other odd stuff we have done in that arena.

The presentation was called “Hey, You Broke My Web Thingee :: Adventures in Tampering with Production” and I had a lot of fun giving the talk. The crowd interaction was excellent, and a lot of folks have asked for the slide deck from the talk, so I wanted to post it here.

If you missed the talk in person, feel free to reach out on Twitter (@lbhuston) and engage with me about the topics. I’d love to discuss them some more. Please support OWASP by becoming a member. These folks do a lot of great work for the community, and the local chapter is quite active these days!

OWASP Talk Scheduled for Sept 13 in Columbus

I have finally announced my Columbus OWASP topic for the 13th of September (Thursday). I hope it turns out to be one of the most fun talks I have given in a long while. I am really excited about the chance to discuss some of this in public. Here’s the abstract:

Hey, You Broke My Web Thingee! :: Adventures in Tampering with Production

Abstract:
The speaker will tell a few real-world stories about practical uses of his defensive fuzzing techniques in production web applications. Examples of fighting with things that go bump in the web to lower deployment costs, reduce unexpected application errors, and curb illicit behavior will be explained in some detail. Not for the “play by the book” web team, these techniques touch on unconventional approaches to defending web applications against common (and not so common) forms of waste, fraud and abuse. If the “new Web” is a thinking admin’s game, unconventional wisdom from the trenches might just be the game changer you need.

You can find out more about attending here. Hope to see you in the crowd!

PS – I’ll be sharing the stage with Jim Manico from WhiteHat Security, who is always completely awesome. So, come out and engage with us!

Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against Windows Terminal Services exposed to the Internet. Many folks on Twitter commented to me about things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best approach is to eliminate the exposure altogether, either by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication.

Another idea is to implement firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins or that of a specific jump host). This can go a long way toward minimizing the frequency of interaction between the attack surface and random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
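For example, here is a minimal sketch of such a rule set using Linux iptables; the 203.0.113.0/24 range is a placeholder for your admin network, and most commercial firewalls can express the equivalent policy:

  # Allow RDP (3389/TCP) only from a trusted admin range; drop everything else.
  # 203.0.113.0/24 is a placeholder - substitute your admin or jump host addresses.
  iptables -A INPUT -p tcp --dport 3389 -s 203.0.113.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 3389 -j DROP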

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. The tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing showed it to be quite useful in determining the configuration of exposed Terminal Services and in creating a path for hardening them wherever they are deployed. (Keep in mind, it is likely worthwhile to harden internal Terminal Services implementations on critical systems as well…)
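As a quick sketch of what a run looks like (the exact syntax may differ between versions, so check the script’s usage output for your copy), the tool is a Perl script that takes a target host and reports what the Terminal Server offers:

  # Hypothetical target address; output lists supported security protocols
  # (e.g., Standard RDP Security, SSL/TLS, NLA/CredSSP) and encryption levels.
  ./rdp-sec-check.pl 192.0.2.10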

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful for auditing multiple customer implementations, as well as for checking RDP exposures during penetration testing engagements.

Thanks to Portcullis for making this tool available. Hopefully, between this tool for hardening your deployments and our advice for minimizing the exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This video was randomly pulled from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.

 

Yandex.ru Indexing Crawler Issues

The yandex.ru crawler is an indexing application that spiders hosts and puts the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine database. Usually, these types of activities cause few issues for those whose sites are being indexed; in fact, over the years an etiquette system based on rules placed in a site’s robots.txt file has emerged.

Robots.txt files provide a rule set for search engine behavior. They indicate which areas of a site a crawler may index and which sections are to be avoided. Usually this is used to protect overly dynamic areas of the site, where a crawler could encounter a variety of problems or inputs that can cause bandwidth or application issues for the crawler, the web host or both.
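As a simple illustration, a robots.txt file served from the site root might look like this (the paths shown are placeholders):

  # Applies to all well-behaved crawlers: index the site,
  # but stay out of the dynamic areas.
  User-agent: *
  Disallow: /cgi-bin/
  Disallow: /search/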

Sadly, many web crawlers and index bots do not honor the rules of robots.txt, and neither do attackers who index your site for a variety of attack purposes. Given the impact that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about yandex.ru and its aggressive parsing, application interaction and deep site inspection techniques. It has clearly been identified as a search engine that does not respect the honor system of robots.txt. A Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, the crawler either never requested the robots.txt file at all or simply ignored its contents altogether. In fact, some of our HITME web applications have experienced the same high traffic costs that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting yandex.ru’s scans represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook, and it’s a drain on our attention span to continually parse their traffic out of our metrics.

Techniques for blocking yandex.ru more forcibly than robots.txt have emerged; you can learn about some of them by searching for “blocking yandex.ru”. The easiest approach, and one that has proven effective, is to use .htaccess rules. We’ve also had some modest success with forcibly returning redirects to requests containing URL parameters known to be associated with yandex.ru, along with some success blocking specific IPs associated with it via an ignore rule in HoneyPoint.
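As an illustrative sketch, an Apache .htaccess rule set along these lines (assuming mod_rewrite is enabled on your host) refuses any request whose User-Agent claims to be Yandex; note it only helps against crawlers that send an honest User-Agent string:

  # Return 403 Forbidden to anything identifying itself as Yandex.
  RewriteEngine On
  RewriteCond %{HTTP_USER_AGENT} Yandex [NC]
  RewriteRule .* - [F,L]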

If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!

Which Application Testing is Right for Your Organization?

Millions of people worldwide bank, shop, buy airline tickets, and perform research using the World Wide Web. Each transaction usually involves sharing private information such as names, addresses, phone numbers, credit card numbers, and passwords. This information is routinely transferred and stored in a variety of locations, and billions of dollars and millions of personal identities are at stake every day. In the past, security professionals thought firewalls, Secure Sockets Layer (SSL), patching, and privacy policies were enough to protect websites from hackers. Today, we know better.

Whatever your industry, you should have a consistent testing schedule carried out by a security team. Scalable technology allows such a team to quickly and effectively identify your critical vulnerabilities and their root causes in nearly any type of system, application, device or implementation.

At MSI, our reporting presents clear, concise, action-oriented mitigation strategies that allow your organization to address the identified risks at the technical, management and executive levels.

There are several ways to strengthen your security posture. Four strategies can help: application scanning, application security assessments, application penetration testing, and risk assessments.

Application scanning can provide an excellent and affordable way for organizations to meet the requirements of due diligence, especially for secondary, internal, well-controlled or non-critical applications.

Application security assessments can identify security problems, catalog their exposures, measure risk, and develop mitigation strategies that strengthen your applications for your customers. This is a more complete solution than a scan since it goes deeper into the architecture.

Application penetration testing uses tools and scripts to mine your systems for data and to examine underlying session management and cryptography. Risk assessments cover all of the policies and processes associated with a specific application, with the depth of review depending on the complexity of your organization.

In order to protect your organization against security breaches (which are only increasing in frequency), consider conducting an application scan, application security assessment, application penetration test, or risk assessment on a regular basis. If you need help deciding which choice is best for you, let us know. We’re here to help!