Category Archives: Emerging Threats
My interview with CSO Online became available over the weekend. It discusses vendor trust and the information security implications of the password security issues in the Oracle database. You can read more about it here. Thanks to CSO Online for thinking of us and including us in the article.
Terminal Services Attack Reductions Redux
Last week, we published a post about the high frequency of probes, scans and attacks against Windows Terminal Services exposed to the Internet. Many folks commented to me on Twitter about some of the things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best approach is to eliminate the exposures altogether, either by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication.
Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins or the address of a specific jump host). This can go a long way toward minimizing the frequency of interaction between the attack surface and random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
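If you go this route, it’s worth auditing the result from an address outside your allowlist. Below is a minimal Python sketch of such a check; the hostnames and timeout are placeholders to adjust for your environment:

```python
import socket

# Hosts that should NOT expose RDP to this vantage point.
# Placeholder names; substitute your own systems.
HOSTS = ["server1.example.com", "server2.example.com"]
RDP_PORT = 3389

for host in HOSTS:
    try:
        # A short timeout keeps the audit quick; tune as needed.
        with socket.create_connection((host, RDP_PORT), timeout=3):
            print(f"{host}: 3389/TCP is OPEN from here -- verify this is intended")
    except OSError:
        print(f"{host}: 3389/TCP not reachable from here")
```

Any “OPEN” result from a non-allowlisted vantage point means the firewall rule isn’t doing its job.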
In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing showed it to be quite useful for determining the configuration of exposed Terminal Services and for charting a path toward hardening them wherever they are deployed. (Keep in mind, it is likely worthwhile to harden internal Terminal Services implementations on critical systems as well…)
Note that we particularly loved that the tool could be used REMOTELY. This makes it useful for auditing multiple customer implementations, as well as for checking RDP exposures during penetration testing engagements.
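For the curious, here’s a rough Python sketch of the style of remote check such a tool performs: it sends an RDP negotiation request asking for TLS and CredSSP/NLA, then reports which security protocol the server selects. This is our own simplified illustration based on our reading of Microsoft’s RDP protocol documentation, not Portcullis’ code, so verify the byte-level details before relying on it:

```python
import socket
import struct

# X.224 Connection Request carrying an RDP Negotiation Request that
# advertises support for TLS (0x01) and CredSSP/NLA (0x02).
NEG_REQ = bytes([
    0x03, 0x00, 0x00, 0x13,              # TPKT header, total length 19
    0x0e, 0xe0, 0x00, 0x00, 0x00, 0x00,  # X.224 Connection Request TPDU
    0x00,                                # class 0
    0x01, 0x00, 0x08, 0x00,              # RDP_NEG_REQ, flags, length 8
    0x03, 0x00, 0x00, 0x00,              # requestedProtocols = SSL | HYBRID
])

PROTOCOLS = {
    0: "standard RDP security only (weak -- harden this!)",
    1: "TLS",
    2: "CredSSP (Network Level Authentication)",
}

def check_rdp(host, port=3389):
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(NEG_REQ)
        resp = s.recv(1024)
    # Expected reply: TPKT + X.224 CC + an 8-byte RDP_NEG_RSP/RDP_NEG_FAILURE.
    if len(resp) >= 19 and resp[11] == 0x02:      # RDP_NEG_RSP
        selected = struct.unpack("<I", resp[15:19])[0]
        return PROTOCOLS.get(selected, f"unknown protocol {selected}")
    if len(resp) >= 19 and resp[11] == 0x03:      # RDP_NEG_FAILURE
        return "negotiation failed (server may demand a protocol we didn't offer)"
    return "no negotiation response (possibly legacy RDP security only)"

if __name__ == "__main__":
    print(check_rdp("rdp-host.example.com"))      # placeholder hostname
```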
Thanks to Portcullis for making this tool available. Hopefully, between using this tool to harden your deployments and following our advice to minimize the exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.
If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME (our HoneyPoint Internet Threat Monitoring Environment) or how to use HoneyPoint to gather such metrics for yourself.
PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.
PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This one was randomly pulled from the Twitter stream with a search; we did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.
Yandex.ru Indexing Crawler Issues
The yandex.ru crawler is an indexing application that spiders hosts and feeds the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine database. Usually, these activities cause few problems for the sites being indexed; in fact, over the years an etiquette system based on rules placed in a web site’s robots.txt file has emerged.
Robots.txt files provide a rule set for search engine behavior. They indicate which areas of a site a crawler may index and which sections are to be avoided. Usually this is used to keep crawlers out of overly dynamic areas of the site, where a crawler could encounter inputs that cause bandwidth or application problems for the crawler, the web host, or both.
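As an aside, it’s easy to see what a well-behaved crawler is supposed to do with these rules. Python’s standard library ships a robots.txt parser; here’s a minimal sketch, with the site URL as a placeholder:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")  # placeholder site
rp.read()

# A polite crawler asks before every fetch; a rude one never does.
for path in ("/", "/search?q=some+deeply+dynamic+page"):
    url = "http://www.example.com" + path
    verdict = "allowed" if rp.can_fetch("YandexBot", url) else "disallowed"
    print(path, "->", verdict)
```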
Sadly, many web crawlers and indexing bots do not honor the rules in robots.txt. Neither do attackers who index your site in preparation for a variety of attacks. Given the impact these indexing tools can have on bandwidth, CPU use and database connectivity, other options for blocking them are sometimes sought. In particular, there are many complaints about yandex.ru and its aggressive parsing, application interaction and deep site inspection techniques. It has clearly been identified as a search engine that does not respect the honor system of robots.txt; a Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.
In our monitoring of the HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, the crawler either never requested the robots.txt file at all or simply ignored its contents altogether. In fact, some of our HITME web applications have experienced the same high traffic costs that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting yandex.ru’s scans represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook, and it taxes our attention to continually parse their traffic out of our metrics.
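If you want to put a number on that cost for your own sites, a quick pass over a standard access log will do it. Here’s a rough Python sketch that tallies the share of requests claiming a Yandex user-agent; the log path and combined-log format are our assumptions, so adjust for your setup:

```python
# Estimate a crawler's share of requests in an access log.
LOGFILE = "/var/log/apache2/access.log"  # placeholder path
NEEDLE = "YandexBot"  # crude match; in combined logs this normally
                      # appears only in the User-Agent field

total = hits = 0
with open(LOGFILE, encoding="utf-8", errors="replace") as f:
    for line in f:
        total += 1
        if NEEDLE in line:
            hits += 1

if total:
    print(f"{hits} of {total} requests ({100.0 * hits / total:.1f}%) match {NEEDLE}")
```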
Techniques for blocking yandex.ru more forcibly than robots.txt have emerged; you can learn about some of them by searching for “blocking yandex.ru”. The easiest approach, and one that has proven effective, is to use .htaccess rules. We’ve also had some modest success with forcibly returning redirects to requests bearing URL parameters known to be associated with yandex.ru, along with some success blocking specific IPs associated with the crawler via an ignore rule in HoneyPoint.
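If .htaccess isn’t an option for your stack, the same idea can live in the application layer. Here’s a rough WSGI middleware sketch in Python, our own illustration of the blocking technique rather than a drop-in product, that refuses requests by user-agent substring or source IP:

```python
BLOCKED_AGENTS = ("YandexBot",)      # user-agent substrings to refuse
BLOCKED_IPS = {"198.51.100.25"}      # documentation placeholder; use your own list

class CrawlerBlocker:
    """WSGI middleware that returns 403 Forbidden to unwanted crawlers."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "")
        addr = environ.get("REMOTE_ADDR", "")
        if addr in BLOCKED_IPS or any(b in agent for b in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden\n"]
        return self.app(environ, start_response)

# Usage: app = CrawlerBlocker(app) wraps your existing WSGI application.
```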
If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!
CSO Online Interview
Our founder & CEO, Brent Huston (@lbhuston) just had a quick interview with CSO Online about the Gauss malware. Look for discussions with Brent later today or tomorrow on the CSO site. Our thanks to CSO Online for thinking of us!
Update 1: The article has been posted on CSO Online and you can find it here.
Brent would also like to point out that doing the basics of information security, and doing them well, will help reduce some of the stomach churning, hand wringing and knee-jerk reactions to hyped-up threats like these. “Applying the MSI 80/20 Rule of InfoSec throughout your organization will really give folks better results than trying to manage a constant flow of patches, updates, hot fixes and signature tuning,” Huston said.
Malware Alert: Will You Lose Your Internet Access On Monday?
We’re always keeping our eyes and ears open when it comes to malware. If you’ve not heard of this report before now, it would be good to check your computer to see if it has been infected with a nasty piece of malware whose creators were finally caught and shut down by the FBI late in 2011.
From AllThingsD:
Next week, the Internet connections of about a quarter-million people will stop working because years ago their computers became infected with malware.
The malware is called DNSChanger, and it was the centerpiece of an Internet crime spree that came to an end last November when the FBI arrested and charged seven Eastern European men with 27 counts of wire fraud and other computer crimes. At one point, the DNSChanger malware had hijacked the Internet traffic of about a half-million PCs around the world by redirecting the victims’ Web browsers to Web sites owned by the criminals. They then cashed in on ads on those sites and racked up $14 million from the scheme. When the crackdown came, it was hailed as one of the biggest computer crime busts in history.
The site mentioned in the article for checking whether you have the malware is (not surprisingly) getting slammed. Try refreshing the address a few times; it will show you whether your system is infected and will give you a link explaining how to fix it.
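If you’d rather check locally than wait on the site, you can compare your configured resolvers against the rogue DNS ranges published by the DNSChanger Working Group. Here’s a rough Unix-side Python sketch (it reads /etc/resolv.conf, so Windows users will need to pull their DNS servers from ipconfig instead; confirm the ranges against a current advisory before trusting the result):

```python
import ipaddress
import re

# Rogue resolver ranges published by the DNSChanger Working Group;
# verify against a current advisory before relying on them.
ROGUE_RANGES = [ipaddress.ip_network(n) for n in (
    "85.255.112.0/20", "67.210.0.0/20", "93.188.160.0/21",
    "77.67.83.0/24", "213.109.64.0/20", "64.28.176.0/20",
)]

def configured_resolvers(path="/etc/resolv.conf"):
    """Yield nameserver addresses from a resolv.conf-style file."""
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*nameserver\s+(\S+)", line)
            if m:
                yield m.group(1)

for server in configured_resolvers():
    ip = ipaddress.ip_address(server)
    if any(ip in net for net in ROGUE_RANGES):
        print(f"{server}: POSSIBLY INFECTED -- resolver is in a rogue DNS range")
    else:
        print(f"{server}: looks clean")
```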
Here’s to seeing “green” for everyone!
Audio Blog Post: Spear Phishing
Brent Huston, CEO and Founder of MicroSolved, Inc., discusses with Chris Lay, Account Executive, the new trends with spear phishing. In this audio blog post, you’ll learn:
- How traditional spear phishing has changed
- The new approach attackers are now using
- The LinkedIn password breach and how it could be used in phishing attacks
- Some non-traditional spear phishing campaigns
Grab a drink and take a listen. As always, let us know what you think!
Audio Blog Post: How to Safeguard Your Data From Credit Card Theft
Cybercriminals continue to seek new opportunities to steal credit card data, as highlighted recently by the largest credit card theft seen in two years: the theft of some 1.5 million card numbers from Global Payments, a third-party processor of transactions for Visa and MasterCard.
What can companies do? Also, what can you do to protect your credit card data?
I sat down with Brent Huston, CEO and Security Evangelist with MicroSolved, Inc. to discuss such questions. In this audio blog post, you’ll hear:
- The current state of identity theft
- Two primary ways credit cards get stolen
- Skimming as a preferred model for theft and how to prevent it
- Why being PCI-compliant is not a silver bullet
And more!
Take a listen to this informative 15-minute interview and learn how you can protect your organization from data theft!
Resources:
- The 80/20 Rule of Information Security
- HoneyPoint Security Server (a superior detection product)
- PCI Security Standards Council
Threat and Vulnerability: Pay Attention to MS12-020
Microsoft today released details and a patch for the MS12-020 vulnerability, a remotely exploitable flaw in most current Windows platforms running Terminal Server/RDP. Many organizations use this service remotely across the Internet, via a VPN, or locally for internal tasks. It is a common, widely deployed technology, and the resulting target pool is likely to make this a significant issue in the near future.
Malicious Exploits: Hitting the Internet Waves with CSRF, Part Two
If you’re the “average Web user” running unmodified versions of the most popular browsers, you can do relatively little to prevent cross-site request forgery.
Logging out of sites and avoiding their “remember me” features can help mitigate CSRF risk. In addition, not displaying external images and not clicking links in spam or untrusted e-mails may also help. Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests; however, this can significantly interfere with the normal operation of many websites.
The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests.
Web developers, however, have a better fighting chance to protect their users by implementing countermeasures such as:
- Requiring a secret, user-specific token in all form submissions and side-effect URLs, which prevents CSRF because the attacker’s site cannot put the right token in its submissions (see the sketch after this list)
- Requiring the client to provide authentication data in the same HTTP Request used to perform any operation with security implications (money transfer, etc.)
- Limiting the lifetime of session cookies
- Checking the HTTP Referer header
- Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls
- Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies
- Verifying that the request’s headers contain an X-Requested-With header, as used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure under a combination of browser plugins and redirects, which can allow an attacker to set custom HTTP headers on a request to any website and thus forge a request.
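To make the first item in that list concrete, here’s a minimal, framework-agnostic Python sketch of a synchronizer token. The dictionary-style session store and function names are our own assumptions for illustration; mature frameworks wire this in for you:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session secret and stash it server-side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in a hidden <input> in every form

def verify_csrf_token(session, submitted):
    """Reject the request unless the form echoed back the session's token."""
    expected = session.get("csrf_token", "")
    # compare_digest avoids leaking information through timing differences.
    return bool(expected) and hmac.compare_digest(expected, submitted or "")

# Hypothetical usage inside a request handler:
session = {}
form_token = issue_csrf_token(session)   # rendered into the page
assert verify_csrf_token(session, form_token)
```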
One simple method to mitigate this vector is to use a CSRF filter such as OWASP’s CSRFGuard. The filter intercepts responses, detects whether the response is an HTML document, and inserts a token into any forms (and, optionally, script to insert tokens into Ajax functions). The filter also intercepts requests to check that the token is present. One evolution of this approach is to double-submit cookies for users who run JavaScript. If an authentication cookie is read using JavaScript before the POST is made, JavaScript’s stricter (and more correct) cross-domain rules will be applied. If the server requires requests to contain the value of the authentication cookie in the body of POST requests or in the URL of dangerous GET requests, then the request must have come from a trusted domain, since other domains are unable to read cookies from the trusting domain.
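The double-submit variant just described needs no server-side token store at all. Here’s a rough sketch of the server-side comparison, with cookie and field names that are our own placeholders:

```python
import hmac

def verify_double_submit(cookies, form):
    """Double-submit check: the CSRF cookie must match the POSTed copy.

    Script on the legitimate page reads the cookie and copies it into the
    form; a foreign origin cannot read the cookie, so it cannot forge the match.
    """
    cookie_val = cookies.get("csrf", "")
    form_val = form.get("csrf", "")
    return bool(cookie_val) and hmac.compare_digest(cookie_val, form_val)
```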
Checking the HTTP Referer header to see whether the request is coming from an “authorized” page is a common tactic employed by embedded network devices because of its low memory requirements. However, a request that omits the Referer header must be treated as unauthorized, because an attacker can suppress the Referer header by issuing requests from FTP or HTTPS URLs. This strict Referer validation may cause issues with browsers or proxies that omit the Referer header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash content to generate GET or POST requests with arbitrary HTTP request headers using CRLF injection; similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. To prevent forgery of login requests, sites can apply these CSRF countermeasures in the login process, even before the user is logged in. Another consideration: sites with especially strict security needs, like banks, often log users off after (for example) 15 minutes of inactivity.
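A strict Referer check along the lines just described might look like the following Python sketch. It is our illustration, the trusted host list is a placeholder, and as noted above you will have to decide how to treat privacy-stripped headers:

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"www.example.com"}  # placeholder for your own domain(s)

def referer_is_trusted(headers):
    """Strict validation: a missing or foreign Referer is rejected."""
    referer = headers.get("Referer")
    if not referer:
        return False  # treat absence as unauthorized, per the strict policy
    return urlparse(referer).hostname in TRUSTED_HOSTS
```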
Following the usage the HTTP specification prescribes for GET and POST, in which GET requests never have a permanent effect, is good practice but is not by itself sufficient to prevent CSRF. Attackers can write JavaScript or ActionScript that invisibly submits a POST form to the target domain. However, filtering out unexpected GETs prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through <script> elements (JavaScript hijacking); it also prevents (non-security-related) problems with some web crawlers, as well as with link prefetching.
I hope this helps when dealing with this malicious exploit. Let me know how it works out for you. Meanwhile, stay safe out there!
Malicious Exploits: Hitting the Internet Waves with CSRF, Part One
Cross-site request forgery, also known as a “one-click attack”, “session riding”, or “confused deputy attack”, and abbreviated as CSRF (sometimes pronounced “sea-surf”) or XSRF, is a type of malicious website exploit in which unauthorized commands are transmitted from a user that the website trusts.
Unlike cross-site scripting (XSS), which exploits the trust a user has in a particular site, CSRF exploits the trust that a site has in a user’s browser. Because it is carried out from the user’s browser (and the user’s IP address), this attack method is quite difficult to log. A successful CSRF attack is carried out when an attacker entices a user to “click the dancing gnome”, which does some dirty gnome-ish v00d00 magic (no offense to any gnomes in the readership) on another site where the user is, or has recently been, authenticated.
As we’ll see in our video example, by tricking a user into clicking on a link, we are able to create a new administrator user, which allows us to log in at will and further our attack.
According to the United States Department of Homeland Security, the most dangerous CSRF vulnerability ranks as the 909th most dangerous software bug ever found, making this vulnerability more dangerous than most buffer overflows. Other severity metrics have been issued for CSRF vulnerabilities that result in remote code execution with root privileges, as well as for a vulnerability that can compromise a root certificate, which would completely undermine a public key infrastructure.
If that’s not enough, while typically described as a static type of attack, CSRF can also be dynamically constructed as part of the payload for a cross-site scripting attack, a method used by the Samy worm. CSRF attacks can also be constructed on the fly from session information leaked via offsite content and sent to a target as a malicious URL, or leveraged via session fixation or other vulnerabilities, to name just a few of the creative ways to launch this attack.
Some other extremely useful and creative approaches to this attack have evolved in recent history. In 2009, Nathan Hamiel and Shawn Moyer discussed “Dynamic CSRF”, using a per-client payload for session-specific forgery, at the BlackHat Briefings, and in January 2012 Oren Ofer presented a new vector called “AJAX Hammer” for composing dynamic CSRF attacks at a local OWASP chapter meeting.
So we know this type of attack is alive and well. What can you do about it? Stay tuned — I’ll give you the solutions tomorrow in Part Two!