A Word About Site Takedown Vendors

I just talked with a client who had been using an unnamed “take down service provider” for some time now. These vendors specialize in removing sites used in phishing attacks and drive-by-download attacks from the Internet. Many claim to have elite connections at various hosting providers that they can call upon to quickly remove sites from production.

Using a takedown vendor is essentially a bet on outsourcing. You are betting your payment that they can get a site taken down with less time, damage and effort than you could on your own, and that working with them will reduce the demands on your team during incident response, when cycles are at a premium. In the real world, however, these bets often do not pay off as well as you might think…

For example, takedown companies with a lot of clients may be working a number of cases and sites at any given time. If they do not staff their teams sufficiently, resource constraints on their side can cause long delays. Slowness in getting them “into action” is a common complaint about more than a few of these vendors in various infosec forums. Customers often report that gathering the information the takedown vendor needs before it will investigate and act is about as much hassle as working directly with registrars and hosting providers to get sites taken down.

Of course, not all takedown vendors are difficult. A few of them get glowing reviews from their customers, but some quick Internet research showed a lot more with bad reviews than good. In addition, the old adage that “you get what you pay for” seemed to apply to the quick checks we did: many of the lower-cost vendors drew poor commentary, and the bad references seemed to diminish as you went up the pay scale.

Another tip from a client of ours was to beware of takedown vendors that want a retainer or monthly fee. You may only need their services a few times a year, and you are likely to save money with a per-occurrence approach over the long run. The monthly-fee vendors also appear to be some of the most commonly complained about, likely because they tend to oversell and understaff in the ebb-and-flow world of incident response.

The bottom line is that takedown vendors may or may not be of use to you. Identifying your needs and internal capabilities is a good place to start. If you do choose to partner with a takedown vendor, make sure you do your research, including customer references, Internet searches and pricing comparisons. You can probably find a couple of vendors to fit your needs and your budget. It would also not hurt to give their response line a call before the purchase to see just what level of service you can expect.

BTW, the client who started this discussion found that simply opening a call and trouble ticket with the ISP was enough to get them to accept incoming takedown requests, with lists of sites, in near real time via email or fax. The couple of folks I talked to who have been through this said that many of the largest ISPs and hosting providers have gotten a lot easier to work with and more responsive in the last couple of years. They suggested that if the attacks seem to revolve around large, common providers, you might want to take an initial stab at talking with them; if they seem responsive and engaged, save your incident response budget and work directly with the providers. Save your takedown dollars for those obscure, hard-to-reach or unresponsive providers.

InfoSec Spring Cleaning


It’s that time of year again: spring is in the air across much of the US. That usually means it’s time to do a little clean-up work around your organization.

Now is a good time to:

  • Review policies, processes and exceptions and make sure they are current and still apply.
  • Check for expired accounts or accounts that should have their passwords changed, especially service accounts.
  • Update your awareness program and plan the activities and areas of key focus for the rest of the year.
  • Review all cryptographic certificates and similar credentials to make sure none have expired or are close to expiration (a quick scripted check is sketched just after this list).
  • Begin planning staff coverage for IT vacations, summer events and the usual periods of reduced summer staffing.
  • Begin the process of hiring those summer interns.
  • Review logs and archives and back them up or destroy them as needed.
  • Handle any other periodic or seasonal security planning activities.
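For the certificate review item above, a quick scripted check can save a lot of manual digging. Here is a minimal sketch in Python that connects to a list of TLS services and reports how many days remain before each certificate expires. The host list and the 30-day warning threshold are placeholders for illustration, so adjust them to match your own environment.

import socket
import ssl
from datetime import datetime, timezone

HOSTS = [("www.example.com", 443), ("mail.example.com", 443)]  # placeholder hosts
WARN_DAYS = 30  # flag anything expiring within 30 days

def cert_days_remaining(host, port):
    # Open a validated TLS connection and read the peer certificate's notAfter field.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for host, port in HOSTS:
    try:
        days = cert_days_remaining(host, port)
        status = "OK" if days > WARN_DAYS else "RENEW SOON"
        print(f"{host}:{port} expires in {days} days [{status}]")
    except (OSError, ssl.SSLError) as err:
        print(f"{host}:{port} check failed: {err}")

Run it from a machine that can reach the services in question and feed the “RENEW SOON” lines into whatever reminder or ticketing process your team already uses.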

Now is a very good time to do all of these things. It is also a good time to put together your plans for the rest of the year and make sure that the first quarter hasn’t broken your budget already. 😉

Are there other security spring cleaning items your team does every year? If so, drop us a comment and share your plans with others. More brains are better than one!

An Ouchie for “The Self Defending Network”

As we covered in an earlier post, there appears to be a security issue with CiscoWorks.

Now, more information has emerged about what appears to be a back door that allows anyone who can telnet to a port on the CiscoWorks box to execute OS commands with high levels of privilege, essentially turning the Cisco configuration and monitoring tool into a pretty powerful weapon for an attacker.
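While you wait for the patch, one sanity check that may be worth doing is simply confirming which of your management hosts are even reachable on the affected TCP service from the networks that matter. The sketch below is a generic connect test in Python; the port number and host list are placeholders of my own, so consult the Cisco advisory itself for the actual affected service before relying on it.

import socket

SUSPECT_PORT = 9090  # placeholder; substitute the port named in the Cisco advisory
HOSTS = ["192.0.2.10", "192.0.2.11"]  # placeholder management station addresses

for host in HOSTS:
    try:
        # A plain TCP connect is enough to tell whether the service is reachable.
        with socket.create_connection((host, SUSPECT_PORT), timeout=5):
            print(f"{host}:{SUSPECT_PORT} is accepting connections; investigate and patch")
    except OSError:
        print(f"{host}:{SUSPECT_PORT} appears closed or filtered")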

No word yet on how this back door got into the code, what steps have been taken to make sure this doesn’t happen again, or anything else beyond the “oops, here is a patch” statement. Cisco is hopefully improving their code management, security testing and QA processes to check for this and other application security problems before they release code to the public.

Once again, Cisco has shown, in my opinion, a serious lack of attention to detail in security. Given their mission-critical role in many enterprise networks and the global Internet, we should and do expect more from them than from an average software developer. Please, Cisco, invest in code testing and application security cycles in the SDLC before something really bad happens to a whole bunch of us…

Yet More SSH Fun – This Time With Humans!


OK, so last week we took an overview look at SSH scans and probes, and we dug a bit deeper by examining one of our HoneyPoints and the SSH traffic it received in a 24-hour period.

This weekend, we reconfigured that same SSH HoneyPoint to appear as a known vulnerable version. And, just in time for some Monday morning review activity and our blog posting, we got what appears to be an automated probe and then, about an hour later, a few attempts to access the vulnerable “service” by a real human attacker.

Here is some of the information we gathered:

The initial probe came from a 62.103.x.x IP address. It was the same as before: a simple connection and banner grab. The probe was repeated twice, as per the usual pattern, just a few seconds apart.

Then, roughly 40 minutes later, we received more connections from the same source IP. The address connected only to port 22; there was no port scanning, web probing or other activity from that address in that time frame.

The attacker made several connections using the Dropbear SSH client. According to the banner the client sent to the HoneyPoint, they appeared to be running version 0.47, which has a couple of known security issues.

The attacker performed various SSH handshake attempts and a couple more rounds of banner grabbing. Over the next ~20 minutes, the attacker connected to the HoneyPoint 5 times, each time probing the handshake mechanism and grabbing the banner.

Finally, the attacker decided to move on, and no further activity has been seen from that source IP range in the day and a half since.

The attacker source IP was from a Linux system in Athens, Greece that appears to belong to an ISP. That system has both OpenSSH 3.9p1 and regular telnet exposed to the Internet. The system advertises itself by hostname via the telnet prompt and the name matches its reverse DNS entry.
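For readers who want to do that sort of quick source verification themselves, the checks are simple: a reverse DNS lookup on the probing address and a banner read from its exposed telnet service, then a comparison of the two names. Here is a rough sketch; the IP address shown is a placeholder, not the actual source of these probes.

import socket

SOURCE_IP = "198.51.100.23"  # placeholder for the probing address

# Reverse DNS lookup on the source address.
try:
    rdns_name = socket.gethostbyaddr(SOURCE_IP)[0]
except OSError:
    rdns_name = "(no reverse DNS entry)"

# Grab whatever prompt the exposed telnet service offers.
try:
    with socket.create_connection((SOURCE_IP, 23), timeout=10) as sock:
        sock.settimeout(5)
        banner = sock.recv(1024)
except OSError:
    banner = b"(no telnet banner)"

print("Reverse DNS:  ", rdns_name)
print("Telnet banner:", banner.decode(errors="replace").strip())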

We contacted the abuse contact of the ISP about the probes, but have not received any comment as of yet.

The interesting thing about this specific set of probes was that the human connections originated from the same place as one of the banner-grabbing scans. This is unusual and not something we have observed in the recent past. Usually, the probes come from various IP addresses (likely some form of worm/botnet) and we rarely see any specifically identifiable human traffic. So, getting the attention of a human attacker is certainly a statistical anomaly.

The other interesting behavioral note here was that the attacker did not bother to perform even a basic port scan of the target. They focused specifically on SSH, and when it did not yield to their probes, they moved on. There were several common ports populated with interesting HoneyPoints, but this attacker did not even look beyond the initial approach. Perhaps they were suspicious of the SSH behavior, perhaps they were lazy, or perhaps they were simply concentrating on SSH-only attacks. Their field of targets may simply be so deep that they just moved on to easier, more usual targets. We will likely never know, but it is certainly interesting.

Thanks to the readers who dropped me emails about their specific history of SSH problems. I appreciate your interest in the topic and I very much appreciate the great feedback on the running commentary! I hope this helps security administrators out there as they learn more about understanding threats against their networks, incident handling and basic event research. If there are other topics you would like to see covered in the future, don’t hesitate to let me know.

Deeper Dive into Port 22 Scans

Today, I wanted to take a deeper dive into several port 22 (SSH) scans that a single HoneyPoint deployment received over the last 24 hours. SSH scanning is very common right now, and our HoneyPoints and firewalls continually experience scans from hosts around the world.

The particular HoneyPoint we are using for this look at the issue is located outside of the US on a business-class network in South America.

Over the last 24 hours this HoneyPoint received SSH probes from 4 specific hosts. These hosts are detailed below:

60.191.x.x – a Linux system located in China on a telecomm company’s network

83.16.x.x – an unknown system located on a consumer (DHCP) iDSL segment in Poland – we could go no further with this host since it is likely to have changed IP addresses since the probe…

218.108.x.x – another Chinese Linux system on yet another Chinese telecomm company’s network (is there anything else in China???)

216.63.x.x – a NAT device that is front-ending a business network and web server deployment for an optical company in El Paso, TX, USA

The pattern of the probes in each case was the same. Each host completed the 3-way TCP handshake and waited for the banner of the responding daemon. The system then disconnected and repeated the process in about 90-120 seconds. Basically, simple banner grabbing. The probing systems sent no payload data; they just grabbed the banner and moved on.
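To make that pattern concrete, here is roughly what such a banner-grabbing probe amounts to: complete the TCP handshake, read whatever the daemon announces, disconnect, wait, and repeat. This is only an illustrative sketch of the behavior we observed, written in Python, not the attackers’ actual tooling, and the target address is a placeholder.

import socket
import time

TARGET = ("203.0.113.5", 22)  # placeholder target address

for attempt in range(2):  # the probes we observed repeated the grab twice
    try:
        with socket.create_connection(TARGET, timeout=10) as sock:
            banner = sock.recv(256)  # e.g. b"SSH-2.0-OpenSSH_..."
            print(f"attempt {attempt + 1}: {banner.decode(errors='replace').strip()}")
    except OSError as err:
        print(f"attempt {attempt + 1} failed: {err}")
    time.sleep(90)  # the scans we saw paused roughly 90-120 seconds between grabs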

The HoneyPoint in question was configured to emulate the current version of OpenSSH, so the banner may not have been what the probing attack tool was looking for. It has since been reconfigured to emulate historic versions with known security vulnerabilities.
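For those curious what “emulating” an older version looks like at the network level, the core trick is just presenting a different version banner and logging who connects and what they send. The sketch below is a bare-bones approximation in Python, not the actual HoneyPoint code; the banner string and port are the only knobs that matter here, and binding to port 22 typically requires elevated privileges.

import socket
from datetime import datetime

FAKE_BANNER = b"SSH-2.0-OpenSSH_3.4p1\r\n"  # advertise a historic version string
LISTEN_PORT = 22  # binding here usually requires elevated privileges

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen(5)

while True:
    client, addr = server.accept()
    print(f"{datetime.now().isoformat()} connection from {addr[0]}:{addr[1]}")
    try:
        client.sendall(FAKE_BANNER)  # present the "vulnerable" banner
        client.settimeout(10)
        data = client.recv(1024)  # capture whatever the client sends back
        if data:
            print(f"  client sent: {data!r}")
    except OSError:
        pass
    finally:
        client.close()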

But, what of the hosts performing the scans? Well, all 3 of them that could be reliably analyzed were found to be running OpenSSH. Two were running 3.6.1p2 and the other was running 3.4p1. Both of these are older versions with known issues.

It is very likely that these are worm/bot-infected hosts and the malware is merely looking for new hosts to spread to. Interestingly, 2 of these hosts appeared to be used for regular commerce. Both were acting as a primary web server for their company, and one of them even had an e-commerce site running (it also had MySQL exposed to the Internet). No doubt, any commercial activity taking place on those devices is also compromised.

MSI has alerted the relevant owners of these systems, and at least one of them is moving to handle the security incident. Hopefully, their damage will be minimal and they can rebuild the system easily, since at this point it is likely to also be infected with a rootkit. We will advise them as they need help and assist them until they get their problem solved.

In the meantime, I hope this gives you a better look at some of the SSH scanning that goes on routinely. On average, this particular HoneyPoint deployment is scanned for SSH every 5.25 hours. This time varies from locale to locale, with US sites getting scanned more often, particularly on commercial networks. The majority of these scans come from China, with Eastern Europe a distant second. Some of our US HoneyPoint deployments get scanned for SSH every 1.5 hours on average, so it is a very common attack indeed.

Obviously, you should check your own network for SSH exposures. You should also take a look at your logs and see how your site stacks up against the average time between scans. Feel free to post comments with any insights or time averages you come up with. It could make for some interesting reading.
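If you want to compute your own average, all you need is a list of timestamps for the SSH scan events in your logs. Here is a small sketch that assumes you have already extracted those timestamps into a text file, one ISO-format timestamp per line; the filename is just a placeholder.

from datetime import datetime

# One ISO-format timestamp per line, e.g. 2008-04-07T13:22:05
with open("ssh_scan_times.txt") as handle:
    times = sorted(datetime.fromisoformat(line.strip()) for line in handle if line.strip())

if len(times) < 2:
    print("Not enough events to compute an average.")
else:
    gaps = [(later - earlier).total_seconds() for earlier, later in zip(times, times[1:])]
    average_hours = sum(gaps) / len(gaps) / 3600
    print(f"{len(times)} events, average {average_hours:.2f} hours between scans")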

Hardware Hacking Gets All Too Real

Hardware and wireless hacking have combined in a pretty scary way. This article talks about security researchers who have found ways to monitor, attack and exploit the most popular pacemakers in use today. According to the article, the researchers were able to gain remote access to the data and control systems of the device. Once they tapped into it, they were able to siphon off health-related information and even cause the pacemaker to apply voltage or shut down, essentially killing the human host of the device.


It really doesn’t get much scarier than that. While the odds of such an attack occurring in real life against a specific person are very slim, it is simply another side effect of the integration of technology into our daily lives. As I have written about many times before, that integration is a powerful thing. On one hand, it frees us up to do other work and makes our lives easier, healthier and perhaps even longer than they would have been otherwise. However, many vendors simply fail to realize the implications of the risks inherent in their products. They fail to comprehend the basic methodologies of attackers, and they certainly fail to grasp how the combination of technologies in many of their products can create new forms of risk for the consumer.

I am quite sure that the company that created the pacemaker was truly interested in advancing the art of healthcare and extending human life. They simply wanted to make things better and saw how adding remote management and monitoring to their device would allow patients to be diagnosed and the device’s operation modified without the need for surgery. That is quite an honorable thing, sure to make patients’ lives easier and even reduce deaths, since patients would no longer undergo the stressful and dangerous operations that used to be needed to make changes to implanted pacemakers. These are very noble ideas indeed.

Unfortunately, the creators of the heart system were so focused on saving lives and on medical technology that they seem to have missed the idea of securing their pacemaker against improper access. This is certainly understandable, given that they are a medical company and not an IT firm, where such risks have been discussed more publicly. The problem is that, in many cases today, there is essentially no difference between IT and other industries, since many of the same technologies are present in both.

Again, there is little to be immediately concerned about here. While the attack is possible, it requires technical knowledge, and the vendors will undoubtedly work on improving the product, though upgrading existing users is unlikely. Unless you happen to be a high-profile target, you are obviously much safer with the device than without it. The big lesson here, and the one I hope vendors, consumers and the public are learning, is that we must add risk management and security testing processes to any device with a critical role, regardless of industry. Today, there are simply too many technologies that can impact our daily lives to continue to ignore their risks.

Cisco Embraces the Scheduled Patch Cycle – Ummmm, Twice a Year???

Well, I think we all knew it was coming. More and more vendors are moving to a scheduled patch cycle instead of releasing patches as needed. This is both a boon and a disaster, depending on your point of view and level of risk tolerance.

In this article, Cisco announces that they will now release their patches every 6 months. I suppose they consider twice a year patching to be enough for the critical components of the network such as routers, switches and other devices. Heck, they are even going to move Linksys patching to every 6 months, so the home users of the product line can ignore them 2 times per year, on schedule, instead of ignoring the patch releases all “willy-nilly” like they presently do.

Why do all the vendors think scheduled patching is such a good idea? I suppose the only answer is that it helps them better schedule their own resources, since it CERTAINLY CAN’T BE ABOUT MINIMIZING THE RISK WINDOW BETWEEN VULNERABILITY DISCOVERY AND MITIGATION. Resource scheduling is also the most common reason I hear from IT folks who support this model of patch releases. I just hope that we can convince attackers to manage their resources a little better too, since it would be very nice if their vulnerability research, exploit development and wide-scale attacks could magically coincide with the appropriate patching processes. Then everything would be better for everyone and the world would be a very nice place indeed…

The problem is, the real world just doesn’t work like that. Exploits and vulnerabilities will continue to be discovered in real time, just as before, except now attackers will know the timeline for the value of their new attacks. In many ways, this serves to bolster the underground economy of attack development, since you don’t need a 0-day for Cisco products; a 179-day exploit will do just fine!

I get the desire of IT shops and vendors to stabilize their workforces and to better schedule and manage their resources. I really do. Police would like to be able to schedule crime as well, so that they could have weekends and nights off to spend with their families. But being a law enforcement officer comes with some requirements, and schedule flexibility is one of them. The same goes for IT folks. In my opinion, scheduled patching, especially patching every 6 months, is simply a reinforcement of traditional IT thought processes. If my readers know one thing about the MSI vision, it is that thinking differently is the key to information security, since what we have been doing to date does not seem to be working so well.

Cisco is a huge company. I know many consider them to be unresponsive to customer concerns, but I truly hope that IT professionals reach out to them on this and that they listen. Cisco devices truly do form the core of many, many, many networks. Their products literally power much of the Internet as we know it today. That gives them immense power, but also makes them a HUGE target. Given their critical role, six month patching just does not seem to be a reasonable solution to me. If you feel the same way, let them know!

0wned by Anti-Virus


A quick review of the vulnerability postings in the emerging threats content of this blog makes clear just how popular anti-virus software has become as an exploitation vector. A great deal of security research and exploit development continues to be aimed at the anti-virus vendors and their products. And why not? It stands to reason from the attacker’s viewpoint: for years, infosec folks have been staging education and awareness programs to make sure that nearly every PC on the planet has anti-virus software installed.

Given the near ubiquity of AV tools, it stands to reason that they would be a very easy, albeit traditional, way to compromise systems at large. Vulnerabilities in anti-virus tools are an insidious mechanism for attack: the tools often run with elevated privileges and carry enough “in your face” and “gotcha” temptation to make a very interesting target. No wonder they have become a favorite attack vector.

On the other hand, from the security standpoint, who else besides anti-virus vendors and purveyors of critical applications linked into the defensive infrastructure should be the poster children for secure development? Every piece of code has bugs, mine included. But shouldn’t anti-virus vendors be doing extensive code reviews, application assessments and testing? Isn’t this especially true of vendors with large corporate names, deep pockets and extensive practices in application security and testing?

Anti-virus tools are still needed for nearly every PC on the planet. Malware remains a large concern. AV has its value and is still a CRITICAL component of information security processes, initiatives and work. Vendors just have to understand that, now more than ever, they are also a target. They have to do a better job of testing their AV applications, and they have to embrace the same secure coding tools and processes that many of their own consultants are shouting about from the virtual hills to the cyber-valleys. We still need AV; we just need better, stronger, more secure AV.

What’s On Your Key?

As a follow-up to yesterday’s post about the Windows management tool, several people have asked me what Windows tools I use most often. I, like many technical folks, carry a simple USB key in my pocket, and it is packed with the core critical tools I use whenever I run into a support-type issue.

This led me to ask – what’s on your key?


Mine has some pretty interesting stuff. Here is a sample of the contents focused on Windows tools.

I keep an installs directory with some of the basic tools that I need, like to use and would want people to use. It has stuff like:

Cain and Abel – you never know when you may need to recover or crack a basic password

Comodo Firewall – I try never to leave a home system without a firewall installed and configured. This one is free and easy to manage, and with a quick 5-minute lesson even basic Windows users can keep it going safely…

FileZilla – a pretty great Win32 FTP GUI

Foxit Reader – a quick replacement for the bloated Adobe PDF reader

Genius – an old Swiss Army knife tool for Win32 that has a ton of Internet and network clients, plus some basic power tools for users

and of course the ubiquitous Firefox, WinZip, freeware anti-virus and Spybot Search & Destroy installers!

I also keep some basic tools for troubleshooting, security and analysis:

BinText – a GUI “strings” for Win32

FileAlyzer – a file analyzer, great for looking at unknown pieces of software and doing potential malware analysis on the fly

FPipe – Foundstone’s port redirector

ScanLine – a quick and dirty command-line port scanner for Win32 from Foundstone

Various Windows resource kit elements – kill, netdom, Sysinternals tools, shutdown, etc.

Of course, netcat, the do-it-all-with-sockets tool 😉

winvi – an easy-to-use text editor

whosip and whoiscl – two command-line whois tools for Windows

a tool simply called Startup – a really easy-to-use GUI for managing what starts up each time the system boots and the various users log in

Those are really the essentials… I carry a bunch of normal stuff around too, but the basics are here for those quick fix scenarios that invariably start with something like “My computer is acting kinda funny ever since I …”

So, I have shown you some of mine. Now you do the same: let us know what’s on the key you carry in your own pocket. Use the comment system to tell us all about your own set of indispensable tools!