Hackers Hate HoneyPoint

We have been getting so much great feedback and positive response to our HoneyPoint products that Mary Rose, our marketing person, crafted a new “Hackers Hate HoneyPoint” logo and is putting together a small campaign based on the idea.

We are continuing to work on new capabilities and uses for HoneyPoint. We have several new tricks up our sleeve and several new ways to use our very own “security Swiss Army knife”. The capabilities, insights and knowledge that the product brings us are quickly and easily being integrated into our core service offerings. Our assessments and penetration testing bring this “bleeding edge” attack knowledge, threat analysis and risk insight to our work. We routinely integrate the attack patterns and risk data from our deployed HoneyPoints back into the knowledge mix, and we are adding new tools, techniques and risk rating adjustments based on the clear vision we are obtaining from HoneyPoint.

This is just one of the many ways that HoneyPoint and the experience, methodology and dedication of MSI separate us from our competitors. Clients continue to love our rapport, reporting formats, flexibility and deep knowledge – but now, thanks to HoneyPoint, they also enjoy our ability to work with them to create rational defenses to bleeding edge threats.

You can bet that you will see more about HoneyPoint in the future. After all, hackers hate HoneyPoint, and in this case, being hated is fine with us!

Spam Bots

We are continuing to see more and more spam bots. Spammers are not letting up and are still actively researching and breaking CAPTCHAs; we have seen several broken within the past few weeks. It seems it is about time to adopt a new system of anti-bot measures for registration forms, or to increase the complexity of the CAPTCHA (while also increasing user frustration).

That reminds me of a spam study I was reading recently. The researchers found that only about 1 in 12.5 million spam messages results in a sale of whatever was being advertised. However, even at this atrocious conversion rate, the spammers are estimated to be generating around $7,000 a day!
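As a rough back-of-the-envelope check, those two numbers imply a staggering message volume. (The per-sale revenue figure below is my own assumption for illustration, not a number from the study.)

```latex
% Assumed: roughly $100 of revenue per sale
\frac{\$7{,}000/\text{day}}{\$100/\text{sale}} = 70 \text{ sales/day}
\quad\Rightarrow\quad
70 \times 12{,}500{,}000 \approx 8.75 \times 10^{8} \text{ spams/day}
```

That is getting close to a billion messages a day, which only makes economic sense because sending spam costs the spammer almost nothing.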

MS08-067 – The Worm That Wasn’t – Wait… Might Be?

So, the worm based on MS08-067 was rumored last week, and now SANS confirms that the worm is spreading from at least one host. SANS is blaming 61.218.147.66. We have also seen scans from 208.23.24.52, 66.100.224.113, 97.89.26.99, 219.158.0.96, 88.178.18.41, 91.142.209.26, 189.20.48.210, 212.122.95.217, 131.118.74.244, 84.3.125.99, 81.57.69.99 and a ton more. Those scans began to increase dramatically around 9:25 am Eastern this morning and have continued throughout the day.

HoneyPoints on consumer bandwidth networks and commercial ISPs alike are picking up a spike in port 445 scans and traffic.

Obviously, given the Metasploit framework’s improvements to the exploit in the last week or so and the myriad proof-of-concept tools that have been circulating in the underground, the threat of a worm is a reality. Worm code was first announced several days ago, but it initially seemed to fail to propagate, likely because port 445 is not reachable across most Internet connections. However, it appears that some victims have been found and have slowly been accumulating.

While we are not yet seeing the massive scans and probes associated with the worms of the past, we are beginning to see traffic levels that indicate increasing worm behaviors.

Obviously, if you have not yet ensured that port 445 is blocked at your Internet connection, you should do so immediately. HoneyPoint users can also set up TCP listeners or basic TCP HornetPoints to discover the worm code and attempt to “defensive fuzz” it. Results in causing the worm to terminate have been mixed so far, but our lab is working on a HornetPoint configuration that causes exceptions in the worm code in a stable manner.

HoneyPoint TCP listeners can be deployed on Linux boxes and other platforms where port 445 is not otherwise in use, and used to identify hosts performing 445 scans and probes. This is an excellent approach to finding laptops and portable devices that might be infected on the internal network.
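For readers who want to experiment with the underlying idea, here is a minimal sketch in plain Python – an illustration only, not HoneyPoint or HornetPoint code, and the junk-reply size is an arbitrary choice – of a listener that logs hosts probing TCP 445 and answers them with random bytes in the spirit of “defensive fuzzing”:

```python
import os
import socket
from datetime import datetime

# Illustration only -- not HoneyPoint. Listens on TCP/445, logs the source
# of every connection attempt, then replies with random bytes in the spirit
# of "defensive fuzzing". Binding a port below 1024 requires root.
LISTEN_PORT = 445          # SMB port targeted by the MS08-067 worm
JUNK_SIZE = 1024           # arbitrary amount of random data to send back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen(5)

while True:
    conn, (src_ip, src_port) = server.accept()
    print(f"{datetime.now().isoformat()} probe from {src_ip}:{src_port}")
    try:
        conn.sendall(os.urandom(JUNK_SIZE))   # feed the scanner garbage
    except OSError:
        pass                                  # peer may reset immediately
    finally:
        conn.close()
```

Pointing such a listener at your internal network segments gives you a cheap tripwire: any internal host that touches it on 445 is worth a close look.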

Microsoft Patches Now Have an Exploitability Rating

Microsoft patches now include a new exploitability index. This new rating attempts to quantify when/if an exploit is likely to become available for a given vulnerability. The rating also attempts to take into consideration how stable a given exploit is likely to be.

Personally, I think this is a good idea, especially if they keep their methods for rating issues consistent and transparent. Already, a number of vendors have said that they will add support for the new index value in their tools and software. As might be expected, reaction from the community has been mixed, though I have yet to see any response that explains how such information could be truly harmful.

You can read Microsoft’s published information here.

I hope more vendors embrace this seemingly small detail. I think it is helpful for more than a few organizations overwhelmed by patch cycles. It may not be the “holy grail of patch risk”, but it is likely better than what we have now.

How does your organization plan to use this new information, if at all? Drop us a comment and let us know!

Yet More on SockStress…

OK gang, the story gets interesting again….

Check this out for some deeply technical details on the basics of the attack. Fyodor has done an excellent write-up of his best guess.

You can also check out the response from the relevant researchers here.

I do like and understand Fyodor’s point that this smells like marketing. Perhaps we are supposed to believe that the vendors will have their responses coordinated and completed before the talk and disclosure? If not, then what is the point of waiting to disclose, except to sell tickets to the conference?

This is a pretty HUGE can of worms that seems to have been opened by Kaminsky during the recent DNS issue. I guess it is just another nuance of this new age of attackers that we have entered. We will have to deal with more “huge holes” accompanied by media frenzy, hype, researcher infighting and security vendor blather until the public and the press grow tired of it.

My point yesterday was that one of these days we will reach a point where some of these major vulnerabilities simply cannot be easily repaired or patched. When that happens, we may have to find a way to teach everyday users how to plan for, and engineer for, acceptable failures. Until then, we should probably hone those skills and ideas, because where we are headed looks to be fraught with scenarios in which some level of ongoing vulnerability and compromise is a fact of life.

I believe strongly that we can engineer for failure. We can embrace data classification, appropriate controls and enclave computing in such a way that we can live with a fairly high level of compromise and still keep primary assets safe. I believe that because it seems to be how we have dealt with other threats throughout history that we could not contain, eliminate or mitigate: we simply evolved our society and ourselves to the point where we could live with them as “accepted risks”. Some day, maybe even soon, we will be able to spend a lot less time worrying about whether users click on the “dancing gnome”, whether they keep their workstations patched or whether there is a vulnerability in some deep protocol…

The Protocol Vulnerability Game Continues…

First it was the quaking of the Earth under the weight of the DNS vulnerability that kept us awake at night. Experts predicted the demise of the Internet and cast doomsday shadows over the length of the web. Next came a laser focus on BGP and the potential for more damage to the global infrastructure. Following that came the financial crisis – which looks like it could kill the Internet through sheer attrition, as vendor, customer, banking and government dollars strangle it to death with a huge gasp!

Likely, we haven’t even seen the end of these other issues when a new evil raises its head. There has been a ton of attention on the emerging “sockstress” vulnerability. According to some sources, this manipulation of TCP state tables will impact every device that can plug into a network and allow an attacker to cause denial-of-service outages with small amounts of bandwidth. If this is truly a protocol issue across implementations, as the researchers claim, then the effects could be huge for businesses and consumers alike.

What happens when vulnerabilities are discovered in things that can’t be patched? What happens when everyday devices that depend on networking become vulnerable to trivial exploits without mitigation? These are huge issues that impact everything from blenders to refrigerators to set top cable boxes, modems, routers and other critical systems.

Imagine the costs if your broadband ISP had to replace every modem or router in their clients’ homes and businesses. What choice would they have if there were a serious vulnerability that couldn’t be fixed with a remote firmware upgrade? Even if the vulnerability could be minimized by some sort of network filtering, what else would those filters break?

It doesn’t take long to understand the potential gravity of attackers finding holes deep inside accepted and widely propagated protocols and applications. TCP is likely the most widely used protocol on the planet. A serious hole in it could impact everything from power grid and nuclear control systems to the laundromat dryers that update a Twitter stream when they are free.

How will organizations that depend on huge industrial control systems handle these issues? What would the cost be to update/upgrade the robots that build cars at a factory to mitigate a serious hole? How many consumers would be able or willing to replace the network firewall or wireless router that they bought two years ago with new devices that were immune to a security issue?

Granted, there should always be a risk-versus-reward equation in use, and the sky is definitely NOT falling today. That said, we know researchers and attackers are digging deeper and deeper into the core protocols and applications that our networks depend on. Given that fact, it seems only reasonable to assume that someday we may have to face the idea of a hole being present in anything that plugs into a network – much of which has no mechanism to be patched, upgraded or protected beyond replacement. Beginning to consider this issue today just might give us some epiphanies or breakthroughs between now and the tomorrow that makes this problem real…

Morfeus Scanner soapCaller.bs Scans

Our HoneyPoint deployments have been picking up a recently added (August 2008) scan signature from Morfeus, the bot-based web scanner that has itself been around for a long time. The new scans were first detected on our consumer-grade DSL/cable segments in late August and have now been seen on our corporate environment sensors as well.

The scans check for “soapCaller.bs” and then “/user/soapCaller.bs”. Returning a 200 result code did not bring any additional traffic or attacks from the original source within 96 hours of the initial scans. In fact, returning the 200 did not seem to cause any change in behavior of the scans or any additional attacks from any source. Likely, this means that vulnerable hosts are being cataloged for later mass exploitation.
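If you want to watch for these probes yourself without HoneyPoint, a throwaway web listener along these lines will do it. This is a sketch only – the port and log format are arbitrary choices – that logs any request for the soapCaller.bs paths and returns the same 200 response we tested with:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Throwaway sketch for spotting Morfeus-style probes -- not HoneyPoint.
# Logs requests for the soapCaller.bs paths and answers them with a 200.
WATCHED = ("/soapCaller.bs", "/user/soapCaller.bs")

class ProbeLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "-")
        if self.path in WATCHED:
            print(f"morfeus-style probe from {self.client_address[0]} "
                  f"path={self.path} ua={ua!r}")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>ok</body></html>")

HTTPServer(("0.0.0.0", 8080), ProbeLogger).serve_forever()
```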

Morfeus scans are quite prevalent and can include searches for a number of common PHP and other web application vulnerabilities. Google searches on “morfeus” return about 259,000 results, including quite a few mentions of ongoing scans from the bot-net.

Here is a blog post that discusses using .htaccess rules to block scans with the morfeus user agent.
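As a sketch of that approach for Apache users – the match string here is an assumption, so adjust it to the user-agent values you actually see in your own logs – something like this in .htaccess turns the requests away at the web server:

```apache
# Sketch: deny requests whose User-Agent contains "morfeus" (case-insensitive)
SetEnvIfNoCase User-Agent "morfeus" bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
```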

Morfeus has shown itself to be quite adaptive and seems to be updated pretty frequently by the bot-masters with new application attack signatures. The scanning is very widespread and can be observed on a regular basis across platforms and ISP types.

The soapCaller.bs page is a file often associated with the Drupal content management system. A number of vulnerabilities have been identified in this package in the past, including during our recent content manager testing project. Users of Drupal should be vigilant about patching their systems and performing application assessments.

Patched DNS Servers Still Not Safe!?!

OK, now we have some more bad news on the DNS front. There have been new developments on the exploit front that raise the bar for protecting DNS servers against the cache poisoning attacks that became the focus of so much attention a few weeks ago.

A new set of exploits has emerged that allows successful cache poisoning attacks against BIND servers, even with the source port randomization patches applied!

The new exploits make the attack around 60% likely to succeed over a 12-hour period, and the attack is roughly equivalent in scope to a typical brute-force attack against passwords, sessions or other credentials. The same techniques are likely to be applied to other DNS servers in the coming days and could reopen the entire DNS system to further security issues and exploitation. While the only published exploits we have seen so far target BIND, we feel it is likely that additional targets will follow.

It should be noted that attackers need high-speed access and adequate systems to perform the current exploit, but a distributed version of the attack, performed via a coordinated mechanism such as a bot-net, could dramatically change that model.

BTW – according to the exploit code, the target testing system used fully randomized source ports, using roughly 64,000 ports, and the attack was still successful. That means that if your server only implemented smaller port windows (as a few did), then the attack will be even easier against those systems.
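To put the brute-force comparison in perspective, here is the rough math. This is a simplified model (it ignores timing windows and multiple simultaneous queries), but it shows why the attack is feasible at all:

```latex
% Each spoofed response must guess both the source port and the 16-bit TXID
p \approx \frac{1}{64{,}000 \times 65{,}536} \approx \frac{1}{2^{32}},
\qquad
P(\text{success after } n \text{ spoofs}) = 1 - (1 - p)^{n}
```

Under those assumptions, hitting a 60% success rate works out to roughly 3.8 billion spoofed responses; spread over 12 hours, that is on the order of 90,000 packets per second, which is exactly why the current exploit demands the high-speed access mentioned above.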

Please note that this is NOT a new exploit, but a faster, more powerful way to perform the attack that DK discovered. You can read about Dan’s view of the issue here (**Spoiler** He is all about risk acceptance in business. Alex Hutton, do you care to weigh in on this one?)

This brings to mind the reminder that ATTACKERS HAVE THE FINAL SAY IN THE EVOLUTION OF ATTACKS and that when they change the paradigm of the attack vector, bad things can and do happen.

PS – DNS Doberman, the tool we released a couple of days ago, will detect the cache poisoning if/when it occurs! You can get more info about our tool here.

MSI Releases DNS Doberman to the Public

Now your organization can have a 24/7 guard dog to monitor key DNS resolutions and protect against the effects of DNS cache poisoning, DNS tampering and other resolution attacks. Our tool is an easy-to-use, yet quite flexible and powerful, solution for monitoring for attacks that have modified your (or your upstream ISPs’) resolutions for sites such as search engines, software updates, key business partners, etc.

DNS Doberman is configured with a set of trusted host names and IP address combinations (yes, you can have more than one IP per host…) which are then checked on a timed basis. If any of your monitored hosts returns an IP that the DNS Doberman doesn’t trust – then it alerts you and your security team. It supports a variety of alerting methods to support every environment from home users to enterprises.
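As an illustration of the general pattern, here is a concept sketch in Python. To be clear, this is not DNS Doberman’s actual code; the hostnames, addresses, interval and alert action are all placeholders:

```python
import socket
import time

# Concept sketch only -- not the actual DNS Doberman code.
# Maps each watched hostname to the set of IPs we trust it to resolve to.
TRUSTED = {
    "www.example.com": {"192.0.2.10", "192.0.2.11"},   # placeholder values
    "update.example.net": {"198.51.100.7"},
}
CHECK_INTERVAL = 15 * 60   # seconds between sweeps

def alert(host, detail):
    # Stand-in for email/syslog/pager alerting
    print(f"ALERT: {host}: {detail}")

while True:
    for host, trusted_ips in TRUSTED.items():
        try:
            _, _, addrs = socket.gethostbyname_ex(host)
        except socket.gaierror as err:
            alert(host, f"lookup failed: {err}")
            continue
        bad = set(addrs) - trusted_ips
        if bad:
            alert(host, f"resolved to untrusted address(es): {sorted(bad)}")
    time.sleep(CHECK_INTERVAL)
```

The core design point is simply that resolution results are compared against a local, trusted baseline rather than against another DNS lookup, so a poisoned cache upstream cannot hide itself.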

You can learn more about the tool and download the FREE version from the link below. The FREE version is completely usable and, if it suits your needs, you are welcome to continue to use it indefinitely. The FREE version is restricted to 5 hosts and only checks each host once per hour. Registered users ($99.95) receive support, minor version upgrades and the ability to check an unlimited number of hosts every 15 minutes!

To learn more or get your copy today, please visit the MSI main web site, here.

Wait a Minute, You’re Using the Wrong DNS Exploit!

Attackers are apparently zigging when we thought they would be zagging again. An article posted yesterday talks about how attackers have passed on the exploits published by the common frameworks and, instead, have been pretty widely using a more advanced, more capable and lesser-known tool to exploit the DNS vulnerabilities that have been in the news for the last few weeks.

In the article, HD Moore, a well-known security professional (and author of Metasploit), discusses how the attackers seem to be bypassing the exploit that he and his team published, instead using another exploit to perform illicit attacks. In fact, the attackers used their own private exploit to attack BreakingPoint, the company where Moore works during the day. I was very interested in this approach by the attackers; it seems almost ironic, somehow, that they have bypassed the popular Metasploit tool exploits for one of their own choosing.

This is interesting to me because when an exploit appears in Metasploit, one would assume that it will be widely used by attackers. Metasploit, after all, makes advanced attacks and compromise techniques pretty much “click and drool” for even basic attackers. Thus, when an exploit appears there, many in the security community see it as a turning point in the exploitability of an attack – meaning that it becomes widely available for mischief. In this case, however, the availability of the Metasploit exploit was not a major factor in the attacks. Widespread attacks are still not common, even as targeted attacks using a different exploit have begun. Does this mean that the attacker community has turned its back on Metasploit?

The answer is probably no. A significant number of attackers are likely to continue to use Metasploit to target their victims. Our HoneyPoint deployments see plenty of activity that can be traced back to the popular exploit engine. Maybe, in this case, the attackers who were seriously interested had a better mechanism available to them. Among our team there is speculation that some of the private, “black market” exploit frameworks may be stepping up their quality and effectiveness. These “exploits for sale” authors may be increasing their skills and abilities in order to ensure that their work retains value as more and more open source or FREE exploit frameworks emerge into the marketplace. After all, they face the same issues as any other software company: they have to offer high value in order to compete effectively with low cost. For exploit sellers, this means more zero-day exploits, more types of evasion, more options for post-exploitation and higher quality in the code they generate.

In some ways, tools like Metasploit help the security community by giving security teams exploitation capabilities on par with basic attackers. In other ways, perhaps they also hurt the security effort by enabling more basic attackers to do complex work and by driving up the quality and speed of exploit availability on the black market. It is hard to argue that such black market efforts would not be present anyway as the attackers strive to compete amongst themselves, but you have to wonder if Metasploit and tools like it serve to speed up the pace.

There will always be tools available to attackers. If they aren’t widely available, then they will be available to a specific few. The skills to create attack tools are no longer the arcane knowledge of a small circle of security mystics, as they were a decade ago. Vendors and training companies have sliced and diced those skills into a myriad of books, classes, training sessions, conventions and other mechanisms in order to “monetize” their dissemination. As such, there are many, many more folks with the skills needed to develop attack tools, code exploits and create malware with ever-increasing capability.

This all comes back to the idea that in today’s environment, keeping anything secret is nearly impossible. The details of the DNS vulnerability were doomed to be known even as they were being initially discovered. There are just too many smart, skilled people to keep security issues private once there is any sort of disclosure to the public, and too many parties interested in making a name, gaining some fame or turning a buck. I am certainly not a fan of total non-disclosure, but we have to assume that even some level of basic public knowledge will eventually equal full disclosure. We also have to remember, in the future, that the attacker pool is wider and deeper than ever before and that, given those capabilities, attackers may well find mechanisms and tools beyond what we expect. They may reject the popular wisdom of “security pundits from the blogosphere” and surprise us all. After all, that is what they do – surf the edges and perform in unexpected ways – it just seems that some of us security folks may have forgotten it…