Threat Data Sharing in ICS/SCADA Needs Improvement

I had an interesting discussion on Twitter with a good friend earlier this week. The discussion was centered around information sharing in ICS/SCADA environments – particularly around the sharing of threat/attack pattern/vulnerability data. 

It seems to us that this sharing of information (some might call it “intelligence”) needs to improve. My friend argues that regulation from the feds and local governments has effectively made utilities and asset owners so focused on compliance that they can’t spare the resources to share security information. Further, my friend claims that sharing information is seen as dangerous to the utility: if the regulators ever found out that information was shared that wasn’t properly reported “up the chain”, it could be used against the utility as evidence of “negligence” or the like. I can see some of this, and it reminds me of my DOE days, when I heard folks talk along the same lines as we showed up to audit their environments, help them with incidents or otherwise contribute to their information security improvement.

When I asked on open Twitter with the #ICS/#SCADA hashtags about what hampered utilities from sharing information, the kind folks who replied pointed to three big issues: the lack of a common language for expressing security information (though we do have some candidates here, such as MITRE’s work and VERIS), legal/regulatory concerns (as above) and the perceived lack of available mitigations (I wonder if this is apathy, despair or a combination of both?).
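To make that first issue a bit more concrete, here is a rough, hypothetical sketch of what a minimal shared threat record might look like. The fields and values below are invented for illustration and follow no particular standard; MITRE’s work and VERIS both define far richer schemas.

```python
# A hypothetical, minimal "common language" threat record.
# All field names and values are invented for illustration only;
# real schemas (MITRE's work, VERIS, etc.) are far richer than this.
import json

record = {
    "reporting_org": "anonymized-utility-17",   # supports anonymous sharing
    "observed": "2012-07-09T14:32:00Z",
    "asset_class": "substation HMI",
    "indicator": {"type": "ipv4", "value": "203.0.113.42"},
    "behavior": "repeated scans of the management interface",
    "mitigation": "blocked the source range at the perimeter firewall",
}

# Serialize to a form any peer utility (or clearinghouse) could parse.
print(json.dumps(record, indent=2))
```

Even something this simple, agreed upon across utilities, would lower the cost of sharing considerably.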

I would like to get some wider feedback on these issues. If you don’t mind, please let me know either in comments, via private email or via Twitter (@lbhuston) what you believe the roadblocks are to information sharing in the ICS/SCADA community.

Personally, I see this as an area where a growth of “community” itself can help. Maybe if we can build stronger social ties amongst utilities, encourage friendship and sharing at a social level, empower ourselves with new mechanisms to openly share data (perhaps anonymously) and create an air of trust and equity, we can solve this problem ourselves. I know the government and industry have funded ISACs and other organizations, but it seems to me that we need something else – something more easily participatory, more social. It has to be easier and safer to share information between us than it is today. Maybe, if we made such a thing, we could all share more openly. That’s just my initial 2 cents. Please, share yours.

Thanks for reading, and until next time, stay safe out there!  

Surface Mapping Pays Off

You have heard us talk about surface mapping applications during an assessment before. You have likely even seen some of our talks about surface mapping networks as a part of the 80/20 Rule of InfoSec. But, we wanted to discuss how that same technique extends into the physical world as well. 

In the last few months, we have done a couple of engagements where the customer really wanted a clear and concise way to discuss physical security issues and possible controls, and to communicate that information to upper management. We immediately suggested a mind-map style approach, with photos for the icons where possible and a heat map approach for expressing the levels of exposure and ease of compromise.
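For a flavor of what we mean, here is a minimal sketch of how such a heat-mapped surface map could be generated programmatically. It assumes the Python graphviz package (plus the Graphviz binaries), and the assets and ratings shown are invented for illustration; in real engagements the photos and narrative matter far more than the tooling.

```python
# A minimal sketch of a "surface map" rendered as a heat-mapped graph.
# Requires: pip install graphviz, plus the Graphviz binaries installed.
# The assets and 0-10 difficulty ratings below are invented examples.
from graphviz import Graph

SURFACES = [
    ("Perimeter fence", 2),
    ("Gate lock", 3),
    ("Control house door", 5),
    ("Alarm panel", 7),
]

def heat_color(rating):
    """Map a 0-10 difficulty-to-compromise rating to a heat color."""
    if rating <= 3:
        return "tomato"      # easily compromised -- hot
    if rating <= 6:
        return "gold"        # moderate
    return "palegreen"       # strong control -- cool

g = Graph("substation", format="png")
g.node("site", "Utility Substation", shape="box")
for name, rating in SURFACES:
    g.node(name, f"{name} ({rating}/10)", style="filled",
           fillcolor=heat_color(rating))
    g.edge("site", name)

g.render("substation_surface_map")  # writes substation_surface_map.png
```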

In one case, we surface mapped a utility substation. We showed pictures of the controls, pictures of the tools and techniques used to compromise them, and even shot some video that demonstrated how easily some of the controls were overcome. The entire presentation was framed as a story, and the points came across very well. The management team was engaged, their interest piqued by the video, and they even took turns attempting to pick a couple of simple locks we had brought along. (Thanks to @sempf for the suggestion!) In my 20+ years of information security consulting, I have never seen a group of folks as engaged as this one. It was amazing and very well received.

Another way we applied similar mapping techniques was while assessing an appliance we had in the lab recently. We photographed the various ports, inputs and pinouts. We shot video of connecting to the device, and we brought some headers and tools to the meetings to pass around while we talked. We used screenshots as slides to show what the engineers saw and did at each stage. We gave high-level overviews of why we did this and that. The briefing went well again, and the customer was engaged and interested throughout our time together. In this case, we didn’t get to include a demo, but they loved it nonetheless. Their favorite part was the surface maps.

Mapping has proven its worth over and over again, to our teams and our clients alike. We love doing them and they love reading them. This is exactly how product designers, coders and makers should be engaged. We are very happy that they chose MSI and our lab services, and we look forward to many years of a great relationship!

Thanks for reading and reach out on Twitter (@lbhuston) or in the comments if you have any questions or insights to share.

Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against Windows Terminal Services exposed to the Internet. Many folks reached out on Twitter about some of the things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best suggestions are to eliminate the exposures altogether by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication.

Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses, such as the home IP address range of your admins or the address of a dedicated jump host. This can go a long way toward minimizing the frequency of interaction with the attack surfaces by random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
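As a quick illustration, here is a minimal sketch that generates allowlist-style rules for TCP/3389. The source addresses are placeholders, and the iptables syntax shown is just one assumed example; translate the output to whatever your firewall actually speaks.

```python
# A minimal sketch: generate default-deny firewall rules for TCP/3389
# that allow only a trusted set of sources (admin ranges, a jump host).
# The addresses are placeholders; iptables syntax is an assumed example.
ALLOWED_SOURCES = [
    "203.0.113.10",       # jump host
    "198.51.100.0/28",    # admin home IP range
]

rules = [
    f"iptables -A INPUT -p tcp --dport 3389 -s {src} -j ACCEPT"
    for src in ALLOWED_SOURCES
]
rules.append("iptables -A INPUT -p tcp --dport 3389 -j DROP")  # default deny

print("\n".join(rules))
```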

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing showed it to be quite useful in determining the configuration of exposed Terminal Services and in creating a path for hardening them wherever they are deployed. (Keep in mind, it is likely worthwhile to harden internal Terminal Services implementations around critical systems as well…)

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful to audit multiple customer implementations, as well as to check RDP exposures during penetration testing engagements. 

Thanks to Portcullis for making this tool available. Hopefully, between using this tool to harden your deployments and taking our advice to minimize exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This video was randomly pulled from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.

Exposed Terminal Services Remains High Frequency Threat

Quickly reviewing the HITME data gathered from our global deployment of HoneyPoint continues to show that exposed Terminal Services (RDP) on port 3389 remains a high frequency threat. In terms of general contact with the attack surface of an exposed Terminal Server connection, direct probes and attacker interactions are seen on average approximately twice per hour. Given that metric, an organization that uses exposed Terminal Services for remote access or management/support may be experiencing upwards of 48 attacks per day against its exposed remote access tool. In many cases, when we conduct penetration testing of organizations using Terminal Services in this manner, remote compromise of that service is found to lead to high levels of access to the organization’s data, if not complete control of their systems.
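For the curious, the arithmetic behind those figures is simple enough to sketch; the two-per-hour rate is the HITME observation quoted above.

```python
# Back-of-the-envelope math for the exposure figures quoted above.
probes_per_hour = 2                       # average HITME observation
probes_per_day = probes_per_hour * 24     # 48 interactions per day
probes_per_year = probes_per_day * 365    # 17,520 per endpoint per year

print(probes_per_day, probes_per_year)
```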

Many organizations continue to use Terminal Services without tokens or VPN technologies in play. These organizations are usually solely dependent on the security of login/password combinations (which history shows to be a critical mistake) and the overall security of the Terminal Services code (which, despite a few critical issues, has a pretty fair record given its wide usage and intense scrutiny over the last decade). Clearly, deploying remote access and remote management tools behind VPN implementations or other forms of access control is greatly preferred. Additionally, upping Terminal Services authentication controls by requiring tokens or certificates is highly suggested. Removing port 3389 exposures to the Internet will go a long way toward increasing the security of organizations dependent on RDP technology.

If you would like to discuss the metrics around port 3389 attacks in more detail, drop us a line or reach out on Twitter (@microsolved). You can also see some real time metrics gathered from the HITME by following @honeypoint on Twitter. You’ll see lots of 3389 scan and probe sources in the data stream.

Thanks for reading and until next time, stay safe out there!

Threat and Vulnerability: Pay Attention to MS12-020

Microsoft today released details and a patch for the MS12-020 vulnerability. This is a remotely exploitable vulnerability in most current Windows platforms that are running Terminal Server/RDP. Many organizations use this service remotely across the Internet, via a VPN, or locally for internal tasks. It is a common, widely deployed technology, and the size of the target pool is likely to make this a significant issue in the near future.
 
Please identify your exposures to this vulnerability. Exploits are likely being developed right now. We have not yet (3/13/12 – 2:15pm Eastern) seen exploitation or an increase in probes for port 3389, but both are expected to occur shortly.
 
Please let us know if you have any questions or if we may be of any assistance with this issue.
 
UPDATE: 
 
This article makes reference to a potential worm attack vector, which we see as increasingly likely. Our team believes the exploit development time to be significantly less than 30 days, and more like 1-3 days for resourced attackers. As such, PLEASE TREAT THIS AS A SIGNIFICANT INTERNAL VULNERABILITY as well. Certainly, IMMEDIATE attention is needed for Internet-exposed systems, but INTERNAL systems should be patched as soon as manageable as well.
 
UPDATE II:
 
This confirms the scope and criticality of this issue.
 
UPDATE III:
 
Just a quick note – we are seeing vast amounts of work on the MS12-020 exploit. Some evidence points to two working versions. They are not public yet, but PATCH NOW. Internal & protected networks too.
 
UPDATE IV:
 
MSI is proud to announce the immediate availability of a FREE version of HoneyPoint, called HPRDP2012, to help organizations monitor for ongoing scans and potential future worm activity. The application listens on port 3389/TCP and is available for OS X (Intel), Windows & Linux. This application is similar to our releases for Conficker & Morto, in that it will be operational for a set time (specifically, until October 1, 2012). Simply unzip the application to wherever you would like to run it and execute it. We hope this helps organizations manage this vulnerability and detect impacts should scans, probes or a worm emerge. Traditional HoneyPoint customers can use Agent and/or Wasp to listen for these connections and report them centrally by dilating TCP listener HoneyPoints on port 3389. Please let us know if you have any questions.
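To be clear, what follows is not HPRDP2012 or HoneyPoint; it is just a minimal, hypothetical Python sketch of the underlying idea, showing how little is needed to start counting probes against 3389/TCP on a spare machine.

```python
# NOT the MSI tool -- a minimal, hypothetical sketch of the idea only:
# listen on TCP/3389 and log every connection attempt, so scans,
# probes and potential worm activity become visible.
import datetime
import socket

def listen(port=3389):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, src_port) = srv.accept()
        stamp = datetime.datetime.utcnow().isoformat()
        print(f"{stamp} probe from {ip}:{src_port}")
        conn.close()  # record the source, then drop the connection

if __name__ == "__main__":
    listen()
```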

Malicious Exploits: Hitting the Internet Waves with CSRF, Part Two

If you’re the “average Web user,” there is relatively little you can do with unmodified versions of the most popular browsers to prevent cross-site request forgery.

Logging out of sites and avoiding their “remember me” features can help mitigate CSRF risk. In addition, not displaying external images and not clicking links in spam or untrusted e-mails may also help. Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests; however, this can significantly interfere with the normal operation of many websites.

The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests.

Web developers, however, have a better fighting chance to protect their users by implementing countermeasures such as:

  • Requiring a secret, user-specific token in all form submissions and side-effect URLs; this prevents CSRF because the attacker’s site cannot put the right token in its submissions (see the sketch after this list)
  • Requiring the client to provide authentication data in the same HTTP request used to perform any operation with security implications (money transfer, etc.)
  • Limiting the lifetime of session cookies
  • Checking the HTTP Referer header
  • Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls
  • Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies
  • Verifying that the request’s headers contain an X-Requested-With header, as used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure under a combination of browser plugins and redirects, which can allow an attacker to provide custom HTTP headers on a request to any website, allowing a forged request.
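As promised, here is a minimal sketch of the secret-token countermeasure from the first bullet. Flask is our assumption (the list above names no framework), and the route, key and field names are invented; a production deployment would use a vetted CSRF library rather than hand-rolled code.

```python
# A minimal, hypothetical sketch of the per-session secret token
# (synchronizer token) pattern. Flask and all names are assumptions.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder session-signing key

def csrf_token():
    """Create (or reuse) a random token bound to this user's session."""
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

@app.route("/transfer", methods=["GET", "POST"])
def transfer():
    if request.method == "POST":
        # Reject any submission whose token does not match the session's.
        if request.form.get("csrf_token") != session.get("csrf_token"):
            abort(403)
        return "transfer accepted"
    # Embed the token in the form; an attacker's page cannot read it.
    return (f'<form method="POST">'
            f'<input type="hidden" name="csrf_token" value="{csrf_token()}">'
            f'<input name="amount"><button>Send</button></form>')
```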

One simple method to mitigate this vector is to use a CSRF filter such as OWASP’s CSRFGuard. The filter intercepts responses, detects whether a response is an HTML document, and inserts a token into its forms (optionally also inserting script to add tokens to Ajax functions). The filter then intercepts requests to check that the token is present. One evolution of this approach is to double-submit cookies for users who use JavaScript. If an authentication cookie is read using JavaScript before the POST is made, JavaScript’s stricter (and more correct) cross-domain rules will be applied. If the server requires requests to contain the value of the authentication cookie in the body of POST requests or in the URL of dangerous GET requests, then the request must have come from a trusted domain, since other domains are unable to read cookies from the trusting domain.
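The server-side half of that double-submit check fits in a few lines. Again, this is a hypothetical Flask sketch with invented cookie and field names; it assumes same-origin script has already copied the cookie value into the request body.

```python
# A minimal, hypothetical sketch of verifying a double-submitted cookie:
# the POST body must echo the cookie, which foreign origins cannot read.
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/action", methods=["POST"])
def action():
    cookie_val = request.cookies.get("csrf_cookie")   # invented name
    body_val = request.form.get("csrf_cookie")
    # Only same-origin JavaScript could have copied the cookie value
    # into the request body, so a mismatch marks a forged request.
    if not cookie_val or cookie_val != body_val:
        abort(403)
    return "ok"
```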

Checking the HTTP Referer header to see whether the request is coming from an “authorized” page is a common tactic employed by embedded network devices because of its low memory requirements. However, a request that omits the Referer header must be treated as unauthorized, because an attacker can suppress the Referer header by issuing requests from FTP or HTTPS URLs. This strict Referer validation may cause issues with browsers or proxies that omit the Referer header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash to generate GET or POST requests with arbitrary HTTP request headers using CRLF injection, and similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. To prevent forgery of login requests, sites can apply these CSRF countermeasures to the login process, even before the user is logged in. Finally, sites with especially strict security needs, like banks, often log users off after (for example) 15 minutes of inactivity.
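A minimal sketch of that strict Referer validation, once more as a hypothetical Flask example with an invented trusted origin, might look like this:

```python
# A minimal, hypothetical sketch of strict Referer checking: requests
# that omit the header, or arrive from a foreign page, are rejected.
from flask import Flask, abort, request

app = Flask(__name__)
TRUSTED_PREFIX = "https://www.example.com/"  # invented trusted origin

@app.before_request
def check_referer():
    if request.method == "POST":
        referer = request.headers.get("Referer", "")
        if not referer.startswith(TRUSTED_PREFIX):
            abort(403)  # missing or foreign Referer => unauthorized
```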

Following the HTTP specification’s intended usage for GET and POST, in which GET requests never have a permanent effect, is good practice but is not sufficient by itself to prevent CSRF. Attackers can write JavaScript or ActionScript that invisibly submits a POST form to the target domain. However, filtering out unexpected GETs prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through <script> elements (JavaScript hijacking); it also prevents (non-security-related) problems with some web crawlers as well as link prefetching.

I hope this helps when dealing with this malicious exploit. Let me know how it works out for you. Meanwhile, stay safe out there!

Malicious Exploits: Hitting the Internet Waves with CSRF, Part One

Cross-site request forgery, also known as a “one-click attack”, “session riding”, or “confused deputy attack”, and abbreviated as CSRF (sometimes pronounced “sea-surf”) or XSRF, is a type of malicious website exploit in which unauthorized commands are transmitted from a user that the website trusts.

Unlike cross-site scripting (XSS), which exploits the trust a user has in a particular site, CSRF exploits the trust that a site has in a user’s browser. Because the attack is carried out from the user’s browser (and the user’s IP address), it is quite difficult to log. A successful CSRF attack is carried out when an attacker entices a user to “click the dancing gnome”, which does some dirty gnome-ish v00d00 magic (no offense to any gnomes in the readership) on another site where the user is, or has recently been, authenticated.

As we’ll see in our video example, by tricking a user into clicking on a link, we are able to create a new administrator user, which allows us to log in at will and further our attack.
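To make the mechanics concrete alongside the video, here is a minimal, hypothetical sketch of the attacker’s side: a tiny server that hands out a page which silently submits a form to a victim application where the browser may still hold a valid session. The victim URL and form fields are invented for illustration.

```python
# A hypothetical illustration of the attacker's side of a CSRF attack.
# It serves a page that auto-submits a form to a victim site where the
# user may already be authenticated. All names and URLs are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

ATTACK_PAGE = b"""
<html><body>
  <!-- Auto-submitting form aimed at a state-changing victim endpoint -->
  <form id="f" action="http://victim.example/admin/add_user" method="POST">
    <input type="hidden" name="user" value="attacker">
    <input type="hidden" name="role" value="admin">
  </form>
  <script>document.getElementById("f").submit();</script>
</body></html>
"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any visit (e.g., via an enticing link) triggers the forged POST.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(ATTACK_PAGE)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```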

According to the United States Department of Homeland Security, the most dangerous CSRF vulnerability ranks as the 909th most dangerous software bug ever found, making this vulnerability more dangerous than most buffer overflows. Severity metrics have also been issued for CSRF vulnerabilities that result in remote code execution with root privileges, as well as for a vulnerability that can compromise a root certificate, which would completely undermine a public key infrastructure.

If that’s not enough, while CSRF is typically described as a static type of attack, it can also be dynamically constructed as part of a payload for a cross-site scripting attack, a method used by the Samy worm. CSRF attacks can also be constructed on the fly from session information leaked via offsite content and sent to a target as a malicious URL, or leveraged via session fixation or other vulnerabilities, just to name a few of the creative ways to launch this attack.

Some other extremely useful and creative approaches to this attack have evolved in recent history. In 2009, Nathan Hamiel and Shawn Moyer discussed “Dynamic CSRF”, the use of a per-client payload for session-specific forgery, at the BlackHat Briefings, and in January 2012 Oren Ofer presented a new vector called “AJAX Hammer” for composing dynamic CSRF attacks at a local OWASP chapter meeting.

So we know this type of attack is alive and well. What can you do about it? Stay tuned — I’ll give you the solutions tomorrow in Part Two!

Part Two here.

The 5 Big C’s Of Fail

From Brent Huston’s recent webinar, “How To Create A Threat-Centric Focus For Your Information Security Initiatives”:

Want to know why many information security programs are failing today? Yesterday, on our webinar, we got a lot of feedback on these issues, and most folks agreed on these causes. A few said it was high time someone said what we did. For those of you who want to know why the attackers are winning, here is a quick summary of the slide that caused all of the ruckus on the webinar. Wanna see what all of the fuss was about? Drop us a line if you would like to be in the next session, or stay tuned for a video of the talk in the next couple of weeks!

As always, thanks for the feedback. We are glad you enjoyed the talk and we look forward to giving it more often. It’s time we all started talking candidly about the problems we face and the real reasons that attackers are winning the race!