Yet More on SockStress…

OK gang, the story gets interesting again….

Check this out for some deeply technical details on the basics of the attack. Fyodor has done an excellent write-up of his guess.

You can also check out the response from the relevant researchers here.

I do like and understand Fyodor’s point that this smells like marketing. Perhaps we are supposed to believe that the vendors will have their responses coordinated and completed before the talk and disclosure? If not, then what is the point of waiting to disclose, except to sell tickets to the conference?

This is a pretty HUGE can of worms that seems to have been opened by Kaminsky during the recent DNS issue. I guess it is just another nuance of this new age of attackers that we have entered. We will have to deal with more “huge holes” accompanied by media-frenzy, hype, researcher infighting and security vendor blather until the public and the press grow tired of it.

My point yesterday was that one of these days we will reach a point where some of these major vulnerabilities cannot be easily repaired or patched. When that happens, we may have to find a way to teach everyday users how to plan for, and engineer for, acceptable failures. Until then, we should probably hone those skills and ideas, because it looks like where we are headed may be fraught with scenarios in which some level of ongoing vulnerability and compromise is simply a fact of life.

I believe strongly that we can engineer for failure. We can embrace data classification, appropriate controls and enclave computing in such a way that we can live with a fairly high level of compromise and still keep primary assets safe. I believe that because it seems to be the way we have dealt with other threats throughout history that we could not contain, eliminate or mitigate. We simply evolved our society and ourselves to the point where we could live with them as “accepted risks”. Some day, maybe even soon, we will be able to spend a lot less time worrying about whether or not users click on the “dancing gnome”, keep their workstations patched or if there is a vulnerability in some deep protocol…

The Protocol Vulnerability Game Continues…

First it was the quaking of the Earth under the weight of the DNS vulnerability that kept us awake at night. Experts predicted the demise of the Internet and cast doomsday shadows over the length of the web. Next came a laser focus on BGP and the potential for more damage to the global infrastructure. Following that came the financial crisis – which looks like it could kill the Internet from attrition when vendor, customer, banking and government dollars simply strangle it to death with a huge gasp!

Likely, we haven’t even seen the end of these other issues when a new evil raises its head. There has been a ton of attention on the emerging “sockstress” vulnerability. According to some sources, this manipulation of TCP state tables will impact every device that can plug into a network and will allow an attacker to cause denial-of-service outages with small amounts of bandwidth. If this is truly a protocol issue across implementations, as the researchers claim, then the effects could be huge for businesses and consumers alike.

What happens when vulnerabilities are discovered in things that can’t be patched? What happens when everyday devices that depend on networking become vulnerable to trivial exploits without mitigation? These are huge issues that impact everything from blenders to refrigerators to set top cable boxes, modems, routers and other critical systems.

Imagine the costs if your broadband ISP had to replace every modem or router in their client’s homes and businesses. What choice would they have if there were a serious vulnerability that couldn’t be fixed with a remote firmware upgrade? Even if the vulnerability could be minimized by some sort of network filtering, what else would those filters break?

It doesn’t take long to understand the potential gravity of attackers finding holes deep inside accepted and propagated protocols and applications. TCP is likely the most widely used protocol on the planet. A serious hole in it could impact everything from power grid and nuclear control systems to the laundromat dryers that update a Twitter stream when they are free.

How will organizations that depend on huge industrial control systems handle these issues? What would the cost be to update/upgrade the robots that build cars at a factory to mitigate a serious hole? How many consumers would be able or willing to replace the network firewall or wireless router that they bought two years ago with new devices that were immune to a security issue?

Granted, there should always be a risk-versus-reward equation in use, and the sky is definitely NOT falling today. But, that said, we know researchers and attackers are digging deeper and deeper into the core protocols and applications that our networks depend on. Given that fact, it seems only reasonable to assume that someday we may have to face the idea of a hole being present in anything that plugs into a network – much of which has no mechanism to be patched, upgraded or protected beyond replacement. Beginning to consider this issue today just might give us some epiphanies or breakthroughs between now and the tomorrow that makes this problem real…

Morfeus Scanner soapCaller.bs Scans

Our HoneyPoint deployments have been picking up a recently added (August 08) scan signature from Morfeus, the bot-based web scanner that has been around for a long time. The new scans were first detected on our consumer-grade DSL/Cable segments in late August and have now also been seen on our corporate environment sensors as well.

The scans check for “soapCaller.bs” and then “/user/soapCaller.bs”. Returning a 200 result code did not bring any additional traffic or attacks from the original source within 96 hours of the initial scans. In fact, returning the 200 did not seem to cause any change in behavior of the scans or any additional attacks from any source. Likely, this means that vulnerable hosts are being cataloged for later mass exploitation.
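For those who want to look for the same probes in their own logs, here is a minimal sketch. The log format (Apache/nginx combined), the sample entries and the user-agent string are illustrative assumptions on our part, not captured data:

```python
# Hypothetical sketch: flag Morfeus-style probes for the soapCaller.bs
# paths in a combined-format web access log.
import re

PROBE_PATHS = ("/soapCaller.bs", "/user/soapCaller.bs")

def find_probes(log_lines):
    """Return (source_ip, path) tuples for requests matching known probe paths."""
    hits = []
    # Matches: IP, ident, user, [timestamp], "METHOD path ..."
    pattern = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')
    for line in log_lines:
        m = pattern.match(line)
        if m and m.group(2) in PROBE_PATHS:
            hits.append((m.group(1), m.group(2)))
    return hits

# Illustrative sample entries (addresses from the documentation ranges):
sample = [
    '203.0.113.9 - - [28/Aug/2008:10:15:32 -0400] "GET /user/soapCaller.bs HTTP/1.1" 404 209 "-" "Morfeus"',
    '198.51.100.7 - - [28/Aug/2008:10:16:01 -0400] "GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]
print(find_probes(sample))  # → [('203.0.113.9', '/user/soapCaller.bs')]
```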

Morfeus scans are quite prevalent and can include searches for a number of common PHP and other web application vulnerabilities. Google searches on “morfeus” return about 259,000 results, including quite a few mentions of ongoing scans from the bot-net.

Here is a blog post that discusses using .htaccess rules to block scans with the morfeus user agent.
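To give a flavor of that approach, here is a hedged example of the kind of .htaccess rules such a post might suggest. The “morfeus” user-agent match and the directive style are our assumptions, not the linked post’s exact rules, so test before deploying:

```apache
# Illustrative only: tag requests whose User-Agent contains "morfeus"
# (case-insensitive) and deny them.
SetEnvIfNoCase User-Agent "morfeus" bad_bot
<Limit GET POST>
  Order Allow,Deny
  Allow from all
  Deny from env=bad_bot
</Limit>
```

Keep in mind that user-agent filtering only stops the lazy version of the scanner; a bot that changes its agent string walks right past rules like these.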

Morfeus has shown itself to be quite adaptive and seems to be updated pretty frequently by the bot-masters with new application attack signatures. The scanning is very widespread and can be observed on a regular basis across platforms and ISP types.

The soapCaller.bs page is a file often associated with the Drupal content management system. There have been a number of vulnerabilities identified in this package in the past, including during our recent content manager testing project. Users of Drupal should be vigilant in patching their systems and in performing application assessments.

Patched DNS Servers Still Not Safe!?!

OK, now we have some more bad news on the DNS front. There have been new developments along the exploit front that raise the bar for protecting DNS servers against the cache poisoning attacks that became all the focus a few weeks ago.

A new set of exploits have emerged that allow successful cache poisoning attacks against BIND servers, even with the source port randomization patches applied!

The new exploits make the attack around 60% likely to succeed in a 12 hour time period and the attack is roughly equivalent in scope to a typical brute force attack against passwords, sessions or other credentials. The same techniques are likely to get applied to other DNS servers in the coming days and could reopen the entire DNS system to further security issues and exploitation. While the only published exploits we have seen so far are against BIND, we feel it is likely that additional targets will follow in the future.

It should be noted that attackers need high speed access and adequate systems to perform the current exploit, but a distributed version of the attack that could be performed via a coordinated mechanism such as a bot-net could dramatically change that model.

BTW – according to the exploit code, the target testing system used fully randomized source ports, drawing on roughly 64,000 ports, and the attack was still successful. That means that if your server implemented a smaller port window (as a few did), the attack will be even easier against those systems.
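To see why the brute-force comparison holds, here is a back-of-the-envelope sketch. The assumptions are ours, not the exploit author’s: the attacker must match both a 16-bit transaction ID and one of roughly 64,000 random source ports, and each spoofed response is an independent guess.

```python
import math

# TXID combinations x randomized source ports (assumed independent)
search_space = 65_536 * 64_000

def success_probability(guesses):
    """P(at least one correct spoofed response) after `guesses` attempts."""
    return 1 - (1 - 1 / search_space) ** guesses

# Spoofed responses needed to reach the reported ~60% success rate:
target = 0.60
needed = math.log(1 - target) / math.log(1 - 1 / search_space)
print(f"~{needed:,.0f} spoofed responses, about {needed / (12 * 3600):,.0f} per second over 12 hours")
```

On those assumptions, the ~60% figure works out to a few billion spoofed responses, or tens of thousands per second sustained for 12 hours – which squares with the note below about attackers needing high-speed access.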

Please note that this is NOT a new exploit, but a faster, more powerful way to exploit the attack that DK discovered. You can read about Dan’s view of the issue here (**Spoiler** He is all about risk acceptance in business. Alex Hutton, do you care to weigh in on this one?)

This brings to mind the reminder that ATTACKERS HAVE THE FINAL SAY IN THE EVOLUTION OF ATTACKS and that when they change the paradigm of the attack vector, bad things can and do happen.

PS – DNS Doberman, the tool we released a couple of days ago, will detect the cache poisoning if/when it occurs! You can get more info about our tool here.

MSI Releases DNS Doberman to the Public

Now your organization can have a 24/7 guard dog to monitor key DNS resolutions and protect against the effects of DNS cache poisoning, DNS tampering and other resolution attacks. Our tool is an easy to use, yet quite flexible and powerful solution to monitoring for attacks that have modified your (or your upstream ISPs’) resolutions for sites such as search engines, software updates, key business partners, etc.

DNS Doberman is configured with a set of trusted host names and IP address combinations (yes, you can have more than one IP per host…) which are then checked on a timed basis. If any of your monitored hosts returns an IP that the DNS Doberman doesn’t trust – then it alerts you and your security team. It supports a variety of alerting methods to support every environment from home users to enterprises.
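To make the trusted host/IP idea concrete, here is a minimal sketch of the watchdog concept. This is NOT the DNS Doberman code itself; the host names, addresses and alerting behavior are placeholders:

```python
import socket

# Trusted resolutions: yes, more than one IP per host is allowed.
TRUSTED = {
    "www.example.com": {"192.0.2.10", "192.0.2.11"},
}

def check_hosts(trusted,
                resolve=lambda name: {ai[4][0] for ai in socket.getaddrinfo(name, 80)}):
    """Return (host, untrusted_ips) pairs for any answers outside the trusted set."""
    alerts = []
    for host, good_ips in trusted.items():
        try:
            answers = resolve(host)
        except OSError:
            continue                      # resolution failure; handle separately
        bad = answers - good_ips
        if bad:
            alerts.append((host, bad))    # the real tool would alert the security team here
    return alerts

# Simulate a poisoned answer by injecting a fake resolver:
print(check_hosts(TRUSTED, resolve=lambda h: {"198.51.100.66"}))
# → [('www.example.com', {'198.51.100.66'})]
```

Run on a timer (the real tool checks on a configurable interval), a loop like this catches the moment an upstream cache starts handing back answers you don’t trust.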

You can learn more about the tool and download the FREE version from the link below. The FREE version is completely useable and if it suits your needs, you are welcome to continue to use it indefinitely. The FREE version is restricted to 5 hosts and only checks each host once per hour. Registered users ($99.95) will receive support, minor version upgrades and the ability to check an unlimited number of hosts every 15 minutes!

To learn more or get your copy today, please visit the MSI main web site, here.

Wait a Minute, You’re Using the Wrong DNS Exploit!

Attackers are apparently zigging when we thought they would be zagging again. An article posted yesterday talks about how attackers have passed on using the exploits published by the common frameworks and instead, have been pretty widely using a more advanced, capable and less known tool to exploit the DNS vulnerabilities that have been in the news for the last few weeks.

In the article, HD Moore, a well known security professional (and author of Metasploit), discusses how the attackers seem to be bypassing the exploit that he and his team published and instead have been using another exploit to perform illicit attacks. In fact, the attackers used their own private exploit to attack the Breakingpoint company that Moore works for during the day. I was very interested in this approach by the attackers, and it seems almost ironic somehow, that they have bypassed the popular Metasploit tool exploits for one of their own choosing.

This is interesting to me because when an exploit appears in Metasploit, one would assume that it will be widely used by attackers. Metasploit, after all, makes advanced attacks and compromise techniques pretty much “click and drool” for even basic attackers. Thus, when an exploit appears there, many in the security community see that as a turning point in the exploitability of an attack – meaning that it becomes widely available for mischief. However, in this case, the availability of the Metasploit exploit was not a major factor in the attacks. Widespread attacks are still not common, even as targeted attacks using a different exploit have begun. Does this mean that the attacker community has turned its back on Metasploit?

The answer is probably no. A significant number of attackers are likely to continue to use Metasploit to target their victims. Our HoneyPoint deployments see plenty of activity that can be traced back to the popular exploit engine. Maybe, in this case, the attackers who were seriously interested had a better mechanism available to them. Among our team there is speculation that some of the private, “black market” exploit frameworks may be stepping up their quality and effectiveness. These “exploits for sale” authors may be increasing their skills and abilities in order to ensure that their work retains value as more and more open source or FREE exploit frameworks emerge into the market place. After all, they face the same issues as any other software company – they have to have high value in order to compete effectively with low cost. For exploit sellers this means more zero-day exploits, more types of evasion, more options for post-exploitation and higher quality of the code they generate.

In some ways, tools like Metasploit help the security community by giving security teams exploitation capabilities on par with basic attackers. In other ways, perhaps they also hurt the security effort by enabling more basic attackers to do complex work and by driving up the quality and speed of exploit availability on the black market. It is hard to argue that such black market efforts would not be present anyway as the attackers strive to compete amongst themselves, but you have to wonder if Metasploit and tools like it serve to speed up the pace.

There will always be tools available to attackers. If they aren’t widely available, then they will be available to a specific few. The skills to create attack tools are no longer the arcane knowledge of a small circle of security mystics that they were a decade ago. Vendors and training companies have sliced and diced those skills into a myriad of books, classes, training sessions, conventions and other mechanisms in order to “monetize” their dissemination. As such, there are many, many more folks with the skills needed to develop attack tools, code exploits and create malware with ever-increasing capability.

This all comes back to the idea that in today’s environment, keeping anything secret is nearly impossible. The details of the DNS vulnerability were doomed to be known even as they were being initially discovered. There are just too many smart people with skills to keep security issues private when there is any sort of disclosure to the public. There are too many parties interested in making a name, gaining some fame or turning a buck to have any chance at keeping vulnerabilities secret. I am certainly not a fan of total non-disclosure, but we have to assume that even some level of basic public knowledge will eventually equal full disclosure. We also have to remember, in the future, that the attacker pool is wider and deeper than ever before and that given those capabilities, they may well find mechanisms and tools that are beyond what we expect. They may reject the popular wisdom of “security pundits from the blogosphere” and surprise us all. After all, that is what they do – surf the edges and perform in unexpected ways – it just seems that some of us security folks may have forgotten it….

Some Potential DNS Poisoning Scenarios

We have kind of been breaking down the DNS cache poisoning exploit scenarios and have been dropping them into 3 different “piles”.

1) Massive poisoning attacks that would be used as a denial of service-style attack to attempt to “cut an organization off from the Internet”, or at least from key sites. The damage from this one could be low to medium, and it is obviously likely to be discovered fairly quickly, though tracking down the issue could be difficult for organizations without adequate technical support or on-site IT teams.

2) Large scale attacks with malware intent. These would largely be executed in an attempt to introduce malware into the organization; browser exploits, client-side exploits or forms of social engineering could be used to trick users into activating the malware. Likely, these attempts would introduce bot-net agents into the organization, giving attackers remote control of part or all of the environment.

3) Surgical poisoning attacks. These attacks would be more focused and much more difficult to identify. In this case, the attackers would poison the cache entries of sites that they knew could be critical – these could be as obvious as the Windows Update sites, or as focused as the banking or stock trading sites used by executives. This attack platform is likely to be focused on specific effects and will likely be combined with social engineering to gain insight into the specifics of the target.

There certainly may be a myriad of additional scenarios or specific focus points for the attacks, but we wanted to give some examples so that folks can be aware of where attackers may go with their new toys and techniques.

Doing incident response and forensics on these attacks could be difficult, depending on the cache time-to-live settings and the level of logging done on the DNS systems. Now might be a good time to review both of these variables to make sure they will be adequate to examine any attack patterns, should they be discovered now or in the future, from this or any other poisoning attack vector.

As we stated earlier, please do not rely on the idea that recursion is only available from internal systems as a defense. That might help protect you from the “click and drool” exploits, but WILL NOT PROTECT YOU from determined, capable attackers!

Myriad of Ways to Trigger Internal DNS Recursion – Please Patch Now!

For those organizations who have decided not to patch their DNS servers because they feel protected by implemented controls that only allow recursion from internal systems, we just wanted to point out that there are a number of ways that an attacker can cause a recursive query to be performed by an “internal” host.

Here is just a short list of things that an attacker could do to cause internal DNS recursion to occur:

Send an email with an embedded graphic from the site that they want to poison your cache for, which will cause your DNS to do a lookup for that domain if it is not already known by your DNS

Send an email to a mail server that does reverse lookups on the sender domain (would moving your reverse lookup rule down in the rule stack of email filters help minimize this possibility???)

Embed web content on pages that your users visit that would trigger a lookup

Trick users through social engineering into visiting a web site or the like

Use a bot-net (or other malware) controlled system in your environment to do the lookup themselves (they could also use this mechanism to perform “internal” cache poisoning attacks)
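The first item in the list above, the email with an embedded graphic, can be sketched in a few lines. The domain below is a placeholder, not a real attack target:

```python
from email.mime.text import MIMEText

# An HTML email whose remote image reference forces the recipient's resolver
# to look up an attacker-chosen name when the message is rendered.
html = '<html><body><img src="http://lookup-trigger.example.net/pixel.gif"></body></html>'
msg = MIMEText(html, "html")
msg["Subject"] = "Quarterly report"
msg["From"] = "sender@example.org"
msg["To"] = "victim@example.org"

# When an HTML-capable client fetches the image, the victim's internal DNS
# performs a recursive lookup of lookup-trigger.example.net.
print("lookup-trigger.example.net" in msg.as_string())  # → True
```

Nothing about the message itself is malicious; the attack is simply that rendering it generates the recursive query the attacker needs to race.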

The key point here is that many organizations believe that the fact that they don’t allow recursion from external hosts makes them invulnerable to the exploits now circulating in the wild for the DNS issue at hand. While they may be resilient to the “click and drool” hacks, they are far more vulnerable than they believe to a knowledgeable, focused, resourced attacker who might be focused on their environment.

The bottom line solution, in case you are not aware, is to PATCH YOUR DNS SYSTEMS NOW IF THEY ARE NOT PATCHED ALREADY.

Please do not wait; active and wide-scale exploitation is very likely in the very near future, if it is not underway right now!

DNS Exploit is in the Wild – Patch NOW!!!

Unfortunately, the blackout period for the DNS issues has been broken. The exploit details have been made public and have been in the wild for a number of hours. While the security researchers involved have tried to remove the details and analysis, Google had already cached the site and the details are now widely known.

Please patch IMMEDIATELY if you have not already done so!

If you cannot patch your existing DNS product, please switch to a patched public DNS (for Internet resolution) or deploy OpenDNS as soon as possible.

Here is a quick and dirty plan of action:

1. Catalog the DNS servers you use on the Internet and internally. Be sure to check all branch locations, firewalls and DHCP servers to ensure that you have a complete picture. If you find any Internet-facing DNS server with recursion enabled, disable it ASAP!

2. Verify that each of these DNS implementations is patched or not vulnerable. You can check for the vulnerability by using the “Check DNS” tool at Mr. Kaminsky’s page, here.

3. Test the patch and get it implemented as quickly as possible.

4. Note that you may have to upgrade firmware and software for firewalls, packet filters and other security controls to enable them to understand the new DNS operations and keep them from interfering with the new way that DNS “acts”.

Please note that the exploit for this cache poisoning attack is now public and exploitation on a wide scale could already be underway. PATCH AS SOON AS POSSIBLE!

Symptoms to look for include:

Vulnerability: unpatched and non-random source ports for DNS query responses.

Exploit: check for a large number of non-existent subdomains in your DNS records (or subdomain requests in your logs) if you are the authoritative DNS server for a domain; attackers will be poisoning the cache with subdomain records, at least according to some researchers.
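If you can capture the source ports of your resolver’s outbound queries (from a packet capture, for instance), a rough heuristic like the following can help flag the unpatched, fixed-port behavior. The thresholds here are illustrative assumptions on our part, not an official test:

```python
import random

def ports_look_random(ports, min_distinct=0.8, min_spread=10_000):
    """True if the port sample is spread out enough to suggest randomization."""
    if len(ports) < 20:
        raise ValueError("need a reasonable sample of queries")
    distinct_ratio = len(set(ports)) / len(ports)   # repeats suggest a fixed port
    spread = max(ports) - min(ports)                # narrow windows are weaker
    return distinct_ratio >= min_distinct and spread >= min_spread

fixed_port = [32769] * 50                                   # classic unpatched behavior
rng = random.Random(0)                                      # seeded for repeatability
randomized = [rng.randint(1024, 65535) for _ in range(50)]  # patched behavior
print(ports_look_random(fixed_port), ports_look_random(randomized))  # → False True
```

A resolver that fails a check like this on a decent sample of queries deserves a hard look at its patch level.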

If you have questions or concerns, please contact MSI for more information or assistance.
Updates to our DNS paper and other details will be released soon, so check back with stateofsecurity.com for updates.

DNS Patches May Break Some Things…

I just had a quick conversation with an IT technician who alluded to the idea that more than Zone Alarm may be broken by the new port randomization behaviors of “patched DNS”. These fundamental changes to the ports allocated for DNS traffic may confuse existing firewalls and other filtering devices that are unaware of the changes to DNS behaviors.

For example, if you have filtering devices that have specific port ranges defined for egress or ingress of DNS traffic (especially if you are using a non-stateful device), this configuration may need to be changed to allow for the greater port range used by the “patched DNS” setup. Systems that are “DNS aware” might not expect the randomization of ports that the patching creates. As such, filtering devices, especially at the perimeter, may well need to be reconfigured or upgraded to allow for continued operation of unimpeded DNS traffic.

There may be SEVERAL other nuances that become evident in some environments as the patch process for the DNS issue continues to evolve. Stay tuned to stateofsecurity.com and other security venues for information and guidance as it becomes available.