Threat and Vulnerability: Pay Attention to MS12-020

Microsoft today released details and a patch for the MS12-020 vulnerability. This is a remotely exploitable vulnerability in most current Windows platforms that are running Terminal Server/RDP. Many organizations use this service remotely across the Internet, via a VPN, or locally for internal tasks. It is a prevalent technology, and the large pool of potential targets is likely to make this a significant issue in the near future.

 
Please identify your exposures to this vulnerability. Exploits are likely being developed right now. We have not yet (3/13/12 – 2:15pm Eastern) seen exploitation or an increase in probes for port 3389, but both are expected to occur shortly.
 
Please let us know if you have any questions or if we may be of any assistance with this issue.
 
UPDATE: 
 
 
This article makes reference to a potential worm attack vector, which we see as increasingly likely. Our team believes the exploit development time to be significantly less than 30 days, and more like 1-3 days for well-resourced attackers. As such, PLEASE TREAT THIS AS A SIGNIFICANT INTERNAL VULNERABILITY as well. Certainly, IMMEDIATE consideration is needed for Internet-exposed systems, but INTERNAL systems should be patched as soon as manageable as well.
 
UPDATE II:
 
 
This confirms the scope and criticality of this issue.
 
UPDATE III:
 
Just a quick note – we are seeing significant work on the MS12-020 exploit. Some evidence points to two working versions. Not public yet, but PATCH NOW. Internal & protected networks too.
 
UPDATE IV:
 
MSI is proud to announce the immediate availability of a FREE version of HoneyPoint, called HPRDP2012, to help organizations monitor for ongoing scans and potential future worm activity. The application listens on port 3389/TCP and is available for OS X (Intel), Windows & Linux. This application is similar to our releases for Conficker & Morto, in that it will be operational for a set time (specifically, until October 1, 2012). Simply unzip the application to where you would like to run it and execute it. We hope this helps organizations manage this vulnerability and detect impacts should scans, probes or a worm emerge. Traditional HoneyPoint customers can use Agent and/or Wasp to listen for these connections and report them centrally by dilating TCP listener HoneyPoints on port 3389. Please let us know if you have any questions.
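
For the curious, the idea behind such a listener is simple. Below is a minimal sketch, not the HPRDP2012 code itself, of a TCP listener on 3389/TCP that logs each connection attempt; the port constant and local-only logging are assumptions for illustration, since the real tool reports in its own format.

```python
# Minimal honeypot-style listener sketch (illustrative only, not HPRDP2012).
import socket
from datetime import datetime, timezone

LISTEN_PORT = 3389  # TCP port scanned by MS12-020 probes (assumed constant)

def listen_for_probes(port: int = LISTEN_PORT) -> None:
    """Accept connections on the given port and log the source of each probe."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", port))
    server.listen(5)
    print(f"Listening on TCP/{port} for probes...")
    while True:
        client, (src_ip, src_port) = server.accept()
        stamp = datetime.now(timezone.utc).isoformat()
        # A real HoneyPoint reports centrally; here we just log locally.
        print(f"{stamp} probe from {src_ip}:{src_port}")
        client.close()

if __name__ == "__main__":
    listen_for_probes()
```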

4 Tips for Teaching Your Staff About Social Engineering

If there is one thing that is tough to prevent, it is a person whose curiosity overrides their better judgement. Human nature leans toward discovery. If someone believes a valuable piece of information is available, there’s a very good chance they will satisfy their curiosity.

Social engineering, the process of obtaining confidential information by tricking people into doing things they should not do, is on the rise. So how can you help your staff recognize social engineering before it’s too late?

Here are a few tips:

1. Create a process for validating outside inquiries.

Often, an attacker has done their homework, obtaining details such as another employee’s name or calendar in order to establish credibility. Create a process for inquiries, making someone the gatekeeper for such calls. Tell staff not to give out confidential information before checking with the gatekeeper.

2. Secure access into the organization.

Does your organization have guards? If not, it is the job of every employee to be alert to outsiders.

Name badges are another way to do this; require everyone to keep theirs visible. Explain to staff that it is perfectly legitimate to say, “I’m sorry, who did you say you were with again?” Teach awareness through fun exercises and safety posters.

3. Train staff to resist picking up strange USB keys.

This is difficult because it is where a person’s curiosity can get the best of them. However, a person has no idea what is on a found USB key. Would they eat food left on the floor of the kitchen? (Some, unfortunately, might!) Why would anyone take a found USB key and plug it into their computer? Curiosity. Create an incentive program for employees to return found keys to an IT administrator.

4. Fine tune a sense of good customer service.

Most people are helpful. This helpful nature is especially nurtured by organizations who want to provide good customer service to both internal staff and external contacts. Attackers take advantage of this by insisting that it would “be very helpful” if they could get someone’s confidential information in order to do their job. Train your staff to stick to the plan of verifying all inquiries by going through the proper channels. Help employees understand that this approach is truly the most “helpful” since they’ll be saving the company countless dollars if it’s an attack.

Consistent awareness is the key to resisting social engineering attacks. Use these tips and decrease your probability of an attack. Stay safe!

Malicious Exploits: Hitting the Internet Waves with CSRF, Part Two

 

If you’re the “average Web user” running unmodified versions of the most popular browsers, you can do relatively little to prevent cross-site request forgery.

Logging out of sites and avoiding their “remember me” features can help mitigate CSRF risk; not displaying external images and not clicking links in spam or untrusted e-mails may also help. Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests, but this can significantly interfere with the normal operation of many websites.

The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests.

Web developers, however, have a better fighting chance to protect their users by implementing countermeasures such as:

  • Requiring a secret, user-specific token in all form submissions and side-effect URLs; the attacker’s site cannot put the right token in its submissions (see the sketch after this list)
  • Requiring the client to provide authentication data in the same HTTP Request used to perform any operation with security implications (money transfer, etc.)
  • Limiting the lifetime of session cookies
  • Checking the HTTP Referer header
  • Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls
  • Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies
  • Verifying that the request’s headers contain an X-Requested-With value, as used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure under a combination of browser plugins and redirects, which can allow an attacker to provide custom HTTP headers on a request to any website, and hence allow a forged request.
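
As a concrete illustration of the first item in the list above, here is a minimal sketch of the per-user secret token idea (the synchronizer token pattern). The helper names are hypothetical and no particular web framework is assumed; the point is that the token is created server-side, tied to the session, embedded in every form, and checked on each state-changing request.

```python
# Hedged sketch of the synchronizer token pattern; function names are illustrative.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a random token once per session and store it server-side."""
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def render_form(session: dict) -> str:
    """Embed the token as a hidden field in the form sent to the browser."""
    token = issue_csrf_token(session)
    return (
        '<form method="POST" action="/transfer">'
        f'<input type="hidden" name="csrf_token" value="{token}">'
        '...</form>'
    )

def is_valid_post(session: dict, submitted_token: str) -> bool:
    """Reject any POST whose token does not match the session's token."""
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token through timing.
    return bool(expected) and hmac.compare_digest(expected, submitted_token)
```

Because the attacker’s page runs on a different origin, it cannot read the victim’s session or the token value, so any forged submission fails the check.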

One simple method to mitigate this vector is to use a CSRF filter such as OWASP’s CSRFGuard. The filter intercepts responses, detects whether the response is an HTML document, and inserts a token into the forms (and can optionally insert script-to-insert tokens in Ajax functions). The filter also intercepts requests to check that the token is present. One evolution of this approach is to double-submit cookies for users who use JavaScript. If an authentication cookie is read using JavaScript before the post is made, JavaScript’s stricter (and more correct) cross-domain rules will be applied. If the server requires requests to contain the value of the authentication cookie in the body of POST requests or the URL of dangerous GET requests, then the request must have come from a trusted domain, since other domains are unable to read cookies from the trusting domain.
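
Here is a hedged sketch of that double-submit check; the function name and the way the cookie and body values arrive are assumptions rather than any specific framework’s API.

```python
# Double-submit cookie check sketch: the POST body must echo the cookie value.
import hmac

def double_submit_ok(cookie_value: str, body_value: str) -> bool:
    """Accept the request only if the cookie and its copy in the body match."""
    if not cookie_value or not body_value:
        return False
    return hmac.compare_digest(cookie_value, body_value)

# The legitimate page's JavaScript reads the cookie and copies it into the
# POST body; a page on another origin cannot read that cookie, so it cannot
# supply a matching value.
assert double_submit_ok("abc123", "abc123") is True
assert double_submit_ok("abc123", "forged-value") is False
```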

Checking the HTTP Referer header to see if the request is coming from an “authorized” page is a common tactic employed by embedded network devices because of its low memory requirements. However, a request that omits the Referer header must be treated as unauthorized, because an attacker can suppress the Referer header by issuing requests from FTP or HTTPS URLs. This strict Referer validation may cause issues with browsers or proxies that omit the Referer header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash to generate GET or POST requests with arbitrary HTTP request headers using CRLF injection; similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. To prevent forgery of login requests, sites can use these CSRF countermeasures in the login process, even before the user is logged in. Another consideration: sites with especially strict security needs, like banks, often log users off after (for example) 15 minutes of inactivity.
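
A strict Referer check along those lines might look like the sketch below; the trusted hostname and the headers dictionary are assumptions for illustration, and real code would take the header from whatever web framework is in use.

```python
# Strict Referer validation sketch: a missing or foreign Referer is rejected.
from urllib.parse import urlparse

TRUSTED_HOST = "www.example-bank.com"  # hypothetical trusted site

def referer_allowed(headers: dict) -> bool:
    """Treat a missing or foreign Referer as unauthorized (strict mode)."""
    referer = headers.get("Referer")
    if not referer:
        # Attackers can suppress the Referer, so its absence is not trusted.
        return False
    return urlparse(referer).hostname == TRUSTED_HOST
```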

Following the HTTP-specified usage for GET and POST, in which GET requests never have a permanent effect, is good practice but is not sufficient to prevent CSRF. Attackers can write JavaScript or ActionScript that invisibly submits a POST form to the target domain. However, filtering out unexpected GETs prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through <script> elements (JavaScript hijacking); it also prevents (non-security-related) problems with some web crawlers, as well as with link prefetching.

I hope this helps when dealing with this malicious exploit. Let me know how it works out for you. Meanwhile, stay safe out there!

Malicious Exploits: Hitting the Internet Waves with CSRF, Part One

Cross-site request forgery, also known as a “one-click attack”, “session riding”, or “confused deputy attack”, and abbreviated as CSRF (sometimes pronounced “sea-surf”) or XSRF, is a type of malicious website exploit in which unauthorized commands are transmitted from a user that the website trusts.

Unlike cross-site scripting (XSS), which exploits the trust a user has for a particular site, CSRF exploits the trust that a site has in a user’s browser. Because it is carried out in the browser (from the user’s IP address), this attack method becomes quite difficult to log. A successful CSRF attack is carried out when an attacker entices a user to “click the dancing gnome” which does some dirty gnom-ish v00d00 magic (no offence to any gnomes in the readership) on another site where the user is, or has recently been, authenticated.

As we’ll see in our video example, by tricking a user into clicking on a link, we are able to create a new administrator user, which allows us to log in at will and further our attack.

According to the United States Department of Homeland Security, the most dangerous CSRF vulnerability ranks as the 909th most dangerous software bug ever found, making this vulnerability more dangerous than most buffer overflows. Other severity metrics have been issued for CSRF vulnerabilities that result in remote code execution with root privileges, as well as for a vulnerability that can compromise a root certificate, which would completely undermine a public key infrastructure.

If that’s not enough, while typically described as a static-type of attack, CSRF can also be dynamically constructed as part of a payload for a cross-site scripting attack, a method seen used by the Samy worm. These attacks can also be constructed on the fly from session information leaked via offsite content and sent to a target as a malicious URL or leveraged via session fixation or other vulnerabilities, just to name a few of the creative ways to launch this attack. 

Some other extremely useful and creative approaches to this attack have evolved in recent history. In 2009, Nathan Hamiel and Shawn Moyer discussed “Dynamic CSRF”, or using a per-client payload for session-specific forgery, at the BlackHat Briefings, and in January 2012 Oren Ofer presented a new vector called “AJAX Hammer” for composing dynamic CSRF attacks at a local OWASP chapter meeting.

So we know this type of attack is alive and well. What can you do about it? Stay tuned — I’ll give you the solutions tomorrow in Part Two!

Part Two here.

Audio Interview with a CIO: Dual Control of Computers for Security

Recently, Brent Huston, CEO and Security Evangelist for MicroSolved, had the opportunity to sit down with Dave, a CIO who has been working with dual control for network security. 

Brent and Dave talk about intrusion detection, dual control, and a few other information security topics, including these questions:

  • What is collusion and how can it pay off?
  • How does it work with dual control?
  • What are some dual control failures?

Click here to listen in and let us know what you think. Are you using dual control?

MSI Strategy & Tactics Talk Ep. 25: An Introduction to Cloud Computing – What to Choose and Why

Cloud computing has become a buzzword over the past few years. Some organizations wonder if it would benefit them or not. What are some of the questions an organization should be asking?  In this episode of MSI Strategy & Tactics, Adam Hostetler and Phil Grimes discuss the various aspects of “the cloud” and how it can affect an organization.  If you are considering transitioning your data to the cloud, you’ll want to listen! Discussion questions include:

  • How can you determine which cloud computing model is right for you?
  • What are some of the security issues with cloud deployment?
  • How can moving data to the cloud help an organization’s overall efficiency? 
Resources:
 
Panelists:
Adam Hostetler, Network Engineer, Security Analyst
Phil Grimes, Security Analyst
Mary Rose Maguire, Marketing Communication Specialist and moderator
 

Click the embedded player to listen. Or click this link to access downloads. Stay safe!

Brute-Force Attacks Reveal Band Tour Dates Before Official Announcement

As many of my friends know, I have a slightly unhealthy obsession with the band Phish.  Yes, that Phish from Vermont. The band whose reputation rides the coattails of Jerry Garcia & Co., traveling from city to city and playing wanked-out, 30-minute versions of songs to a bunch of patchouli and Birkenstock-wearing hippies.

While that is only partially true (or mostly true, for that matter), many “phans” or “glides” are actually quite resourceful and technically cunning.

Since the band’s inception (and taking a cue from The Grateful Dead), they’ve encouraged concert-goers to tape performances and trade those tapes, thus spreading their music far and wide.

More recently, the band has included a free MP3 download code on each ticket, and fans can listen to a “crispy board” recording literally hours after the show has ended. A cooperative Google spreadsheet was established to document and source every known performance of the band’s storied career, with links to digital and audience recordings. For those who aren’t interested in downloading every show in the band’s archive, one can actually obtain an external hard drive with the music and a handful of videotaped performances already loaded.

If owning the entire catalog is too much music to sift through, fans have put together a number of compilations including “Machine Gun Trey,” “The Covers Project,” and a chronological “Best Of” version of the band’s songs, all labeled, sourced and ready to be downloaded into iTunes, with album art, of course.

While access to the band’s previous shows does quell the senses of their rabid fans, it does nothing but amplify the anticipation of upcoming tours.  For a band that has a reputation of traveling from town to town, fans have come to expect Fall/Holiday/Spring/Summer Tour announcements to come about the same time each year.

Rumors began circulating weeks ago about where and when the band might be playing this summer. Recon missions for tour dates, along with some good old-fashioned social engineering, confirmed a date here or there. Word from unsuspecting venue employees about a “2 day hold on a venue” was fitted together like puzzle pieces.

On February 28th, 2012, anticipation reached a near fever pitch as the anticipated “Tour Dates at Noon” came and went without an official announcement from the band or the band’s management. With only one official date announced on www.phish.com, for Bonnaroo (a four-day, multi-stage camping festival held on a 700-acre farm in Manchester, Tennessee), internet-savvy fans began a brute-force attack on the website, with surprisingly accurate results.

By changing the URL of the band’s website using the intelligence and rumored concert dates gathered during the social engineering exercise, a more accurate touring calendar began to reveal itself. A guessed URL didn’t return a “404 – Not Found” page but instead the message “You don’t have access rights to this page.” Fans knew they were on to something, and my Facebook friends began to make travel plans for a tour that hadn’t been officially announced. This “leg up” could possibly make the difference between a hotel bed close to the venue and car-camping on a hot July evening in a field somewhere nearby. It also could mean a difference in airfare, days off work, or even rental car availability.
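
For the technically curious, a rough reconstruction of that kind of probing might look like the sketch below. The URL pattern, candidate dates, and response text are assumptions, since the band’s actual site structure was never published, but the logic of treating “access denied” differently from “not found” is the whole trick.

```python
# Hedged sketch of forced browsing against guessed tour-date URLs.
import requests

CANDIDATE_DATES = ["2012-07-06", "2012-07-07", "2012-07-08"]  # rumored dates

def probe_dates(base_url: str = "https://www.example.com/tour") -> None:
    """Report which guessed URLs look hidden (403-style) versus nonexistent (404)."""
    for date in CANDIDATE_DATES:
        url = f"{base_url}/{date}"
        resp = requests.get(url, timeout=10)
        if resp.status_code == 404:
            print(f"{date}: not found, probably no show")
        elif resp.status_code == 403 or "access rights" in resp.text.lower():
            print(f"{date}: page exists but is hidden, likely a real date")
        else:
            print(f"{date}: HTTP {resp.status_code}")

if __name__ == "__main__":
    probe_dates()
```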

The official announcement came shortly after 12:00 PM on Leap Day 2012 (which is perfectly fitting for Red, Henrietta, Leo, and Cactus), complete with a professionally produced video of Phish drummer Jon “Greasy Physique” Fishman preparing like Rocky for an upcoming bout. At the bottom of the video, the band’s summer tour plans streamed past, matching with surprising accuracy what many had known 24 hours beforehand.

Not bad for a bunch of stinky hippies!

Reflections on a Past Vulnerability, Kind Of…

 Recently, someone asked me about a vulnerability I had found in a product 15 years ago. The details of the vulnerability itself are in CVE-1999-1141 which you can read for yourself here.

Apparently, some of these devices are still around in special use cases and some of them may not have been updated, even now, 15 years after this issue came to light and more than 13 years after Mitre assigned it a 7.5 out of 10 risk rating and an associated CVE id. That, in itself, is simply shocking, but is not what this post is about.

This post is about the past 15 years since I first made the issue public. At that time, both the world of infosec and I were different. I still believed in open disclosure, for example. However, shortly after this vulnerability research experience, I started to choke back on that belief. Today, I still research and discover vulnerabilities routinely, but I handle them differently.
 
I work with the vendor directly, consult with their developers and project teams as much as they let me, and then allow them to work through fixing their products. Some of these fixes take a very, very long time and some of them are relatively short. Sometimes the vendors/projects give me or MicroSolved public credit, but often they do not. Both are OK under the right circumstances, and I am much happier when the vendors ask us if we want to be credited publicly, but I am content if they fix the problems we find in many cases. We do our very best to be non-combative and rational with all of them in our discussions. I think it is one of the reasons why application and device testing in our lab is so popular — better service and kindness go a long way toward creating working relationships with everyone.
 
Now, I don’t want to dig into the debate about open disclosure and non-disclosure. You may have different opinions about it than I do, and I am perfectly fine with that and willing to let you have them. I choose this path in vulnerability handling because in the end, it makes the world a safer place for all of us. And make no mistake, that’s why I do what I do nearly every day and have done what I have done for more than 20 years now in information security.
 
That’s really what this post is about. It’s about change and commitment. I’m not proud of releasing vulnerability data in 1997, but I’m not ashamed of it either. Times have changed and so have I. So has my understanding of the world, crime and security. But at the bottom of all of that change, what remains rock solid is my commitment to infosec. I remain focused, as does MicroSolved, on working hard every day to make the world a safer place for you and your family.
 
In November of 2012, MSI will enter its 20th year in business. Twenty years of laser focus on this goal, on the work of data protection, and on our customers. It’s an honor. There is plenty of tradition, and plenty of change to reflect on. Thanks to all of you for giving me the opportunity to do so.
 
Now that I have nostalgia out of the way, if you are still using those old routers (you know who you are), replace those things! 
 
As always, thanks for reading and stay safe out there! 

Credit Unions and Small Banks Need Strong Security Relationships

With all of the attention in the press these days on the large banks, hacking, and a variety of social pressures against the financial institutions, it’s a good time to remember that credit unions and small banks abound around the world, too. They may offer an alternative to the traditional big banking you might be seeking, but they often lack the large, well-staffed information security teams that big banks can bring to bear against attackers and cyber-criminals.
 
While this shouldn’t be a worry for you as a consumer (in that your money is secure in a properly licensed and insured institution), it should be a concern for those tasked with protecting the data assets and systems of these organizations.
 
That’s where strong vendor relationships come in. Partnerships with good solution providers, security partners, virtual security teams and monitoring providers can be very helpful when there are a small number of technical resources at the bank or credit union. Ongoing training with organizations like SANS, CUISPA and our State of the Threat series is also very likely to assist the resources they do have in being focused against the current techniques used by attackers. Whether with peers or vendors, relationships are a powerful tool that help security admins in the field.
 
Smaller organizations need to leverage simple, effective and scalable solutions to achieve success. They simply won’t have the manpower to manage overwhelming alerts, too many log entries or some of the other basic mechanisms of infosec. They either must invest in automation or strategically outsource some of those high-resource functions to get them done. If your bank has a single IT person who installs systems, manages software, secures the network, helps users, and never goes on vacation, you have one overwhelmed technician. Unfortunately, this is all too common. Even worse, the things that can’t be easily done often end up forgotten, pushed off or simply ignored.
 
In some cases, where some of the security balls may have been dropped, attackers take advantage. They use malware, bots, social engineering and other techniques to scout out a foothold and go to work on committing fraud. That’s a bad way to learn the lessons of creating better security solutions.
 
So, the bottom line is if you are one of these smaller organizations, or one of the single technicians in question, you need to find some relationships. I suggest you start with your peers, work with some groups in your area (ISSA, ISACA, ISC2, etc.) and get together with some trusted vendors who can help you. Better to get your ducks in a row ahead of time than to have your ducks in the fire when attackers come looking for trouble. 

HoneyPoint Tales: Conficker Still Out There

I had an interesting conversation this week over email with a security admin still fighting Conficker.

If you haven’t thought about Conficker in a while, take a moment and read the Wikipedia entry here: (http://en.wikipedia.org/wiki/Conficker). Back in 2008, this nasty bugger spread across the net like wildfire. It was, and is, quite persistent.

Back in those days, we even put out a free version of HoneyPoint called HPConficker to act as a scatter sensor for detecting infected hosts on networks around the world. That tool expired eventually, and to be honest, we stopped really tracking Conficker back in 2010 to move on to studying other vectors and exploits. I hadn’t even thought about the HPConficker tool since then, until this week. 
 
In order to help this admin out as they battled the worm, I came in on a vacation day, dug the old code out of the source vault and updated it to run through the end of 2012. I then built a quick compile, zipped it (in my hurry forgetting to remove the OS X file noise) and sent it on to the salesperson who was helping the client directly. When I heard that the zip file with OS X noise was a problem, I quickly cleaned the zip and sent it back up to the server for them to re-download, install and use. Sadly, I haven’t yet had time to build a readme file or the like, but the tool is pretty easy to use. Unzip it with folder extraction enabled, execute it and follow the GUI instructions. I haven’t heard back from my new security admin friend, but I hope it helped them fight the good fight.
 
I took a couple of key points from this: 1) Conficker is still around and causing trouble; and 2) Helping people with HoneyPoint is still one of the core reasons I do what I do.
 
I may not say it often enough, but, thanks to all of you for playing with my toys. Since 2006, the knowledge gained, the insights and the outright chance to help people with my software has been a great joy. I look forward to pursuing it for many years to come. 
 
Keep playing with HoneyPoint. Keep talking to us. We want to engage, and we want to help YOU solve YOUR problems. At the core, that’s what MSI is all about. As always, thanks for reading and stay safe out there!
 
PS – We haven’t decided if we are going to release the tool again. If you want it and it can help you, drop me a line in the comments, send me a tweet (@lbhuston) or get in touch. Even if we don’t push it out in public on the site, it’s here if you need it…