About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

A Very Good Idea – Open Source SQL Application Firewall

A few weeks ago I ran across this project, called GreenSQL. It is an open source database firewall that helps organizations mitigate application vulnerabilities stemming from common SQL attacks like SQL injection.

It is a list-based heuristic proxy firewall that you can use to filter SQL traffic between the web server and the database server. This is a pretty powerful tool, even though it is list-based. As the project evolves, perhaps it will also include more powerful approaches such as anomaly-based analysis.

For now though, blacklisting, whitelisting and their approach to transaction risk weighting make for a powerful combination and are much better than nothing.
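
To illustrate the general idea only (this is not GreenSQL's actual implementation), here is a minimal sketch of how a list-based proxy might combine whitelists, blacklists and risk weights before passing a query along to the database. The patterns, weights and threshold below are invented for the example.

    import re

    # Invented risk weights for SQL constructs commonly abused in injection
    # attacks; the patterns and scores are illustrative, not GreenSQL's own.
    RISK_PATTERNS = [
        (r"\bunion\b.+\bselect\b", 40),   # UNION-based data extraction
        (r"\bor\b\s+1\s*=\s*1", 50),      # classic tautology
        (r";\s*drop\b", 80),              # stacked query dropping objects
        (r"\binto\s+outfile\b", 60),      # writing files from MySQL
        (r"--|#", 20),                    # trailing comment to cut the query short
    ]

    WHITELIST = set()       # queries an admin has explicitly approved
    BLACKLIST = set()       # queries an admin has explicitly denied
    BLOCK_THRESHOLD = 50    # arbitrary cut-off for this sketch

    def risk_score(query: str) -> int:
        """Sum the weights of every risky pattern found in the query."""
        q = query.lower()
        return sum(w for pattern, w in RISK_PATTERNS if re.search(pattern, q))

    def allow(query: str) -> bool:
        """Decide whether the proxy should pass the query to the database."""
        if query in WHITELIST:
            return True
        if query in BLACKLIST:
            return False
        return risk_score(query) < BLOCK_THRESHOLD

    print(allow("SELECT name FROM users WHERE id = 42"))             # True
    print(allow("SELECT name FROM users WHERE id = 1 OR 1=1 -- x"))  # False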

That said, MSI has not tested the application or performed any formal review; we just liked the idea they are working on. Perhaps in the future we will donate some lab cycles to a review and some testing, but for now we wanted to help them get the word out about their project.

If you are using MySQL for your web-based applications, it might be worth spending some time looking at this project and testing the capabilities of the tool in your environment. Eliminating SQL attacks from web applications removes a significant amount of the risk of deploying them. By some estimates, that risk could be as high as 25% of the aggregate risk an application creates. No matter the metrics, this project is certainly a step forward.

Playing with VoIP Hopper

I have spent just a little time playing with VoIP Hopper, which was updated in mid-February. Thus far, this seems like a pretty useful tool for doing penetration testing and enumeration of your VLAN segments and VoIP deployments.

The tool is very capable. It can easily help you scan your installations with CDP discovery and can be very useful in testing VLAN architectures for common security holes.
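
VoIP Hopper does the CDP work for you, but to see the raw material it feeds on, a rough sketch like the one below (using scapy, run as root on a network you are authorized to test) will show which devices are chattering CDP on a segment. The interface name and capture window are assumptions, and parsing the Voice VLAN TLV out of the payload is left to the real tool.

    from scapy.all import Ether, sniff   # requires scapy and root privileges

    CDP_MULTICAST = "01:00:0c:cc:cc:cc"   # destination MAC used by CDP frames

    def report(pkt):
        # Print the source MAC of each CDP frame we see; VoIP Hopper goes further
        # and parses the Voice VLAN TLV out of the CDP payload itself.
        print(f"CDP frame from {pkt[Ether].src}, {len(pkt)} bytes")

    # "eth0" and the 60-second capture window are assumptions for a lab setup.
    sniff(iface="eth0", timeout=60,
          lfilter=lambda p: p.haslayer(Ether) and p[Ether].dst == CDP_MULTICAST,
          prn=report)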

It is a command line tool written in C, but you should have no problem compiling it in your favorite Linux environment. It even works nicely on a default BackTrack install, so playing with it should be easy to fit into your lab schedule.

There has been a lot of attention paid to VoIP security over the last couple of years and this is certainly a nice quick and dirty tool for looking around your install. It also punctures the mistaken idea that some service providers like to present as gospel: VLANs alone really won’t keep your VoIP secure. You can use this tool to prove them wrong if they just won’t listen to reason…

Play nice with it and make sure you only use it in the lab or on authorized networks…

Be Careful Who You Trust…

This usually goes without saying, but trusting the wrong people, organizations or mechanisms can seriously bite you.

Take, for example, the current situation with ORDB.org. It is one of the older spam blacklists and had been around a while. So long, in fact, that when it shut down in 2006 few people took notice. But we should have.

It turns out that a few organizations and a few vendors used the blacklist provider as another source for spam prevention. Since the project was shut down, the list has not been updated since the end of 2006. Mostly, that is no harm, no foul – unless you happened to inherit one of the IP addresses on the list, in which case you might be a little mad…

But, as of this week, the ORDB list suddenly changed behavior for an as-yet-unknown reason. All of a sudden the blacklist started to block ALL IP addresses!

Now many folks would say, if the list shut down in 2006, why do we care? Well, it turns out that a lot of vendor products and a few careless admins had left the list in their systems. They were still trusting the contents of the blacklist as a spam prevention tool. As you might imagine, what has ensued is a TON of blocked e-mails, a few mad customers and some bewildered troubleshooting technicians…
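
For context on why a dead list can suddenly block everything, here is a rough sketch of how a DNS blacklist check works. The client reverses the IP’s octets, appends the blacklist zone and resolves the name; any answer means “listed” and no record means “clean”. If an abandoned zone ever starts answering for every name (a single wildcard record will do it), every IP on the Internet instantly looks listed. The zone below is a placeholder for illustration, not ORDB’s.

    import socket

    def is_listed(ip: str, zone: str) -> bool:
        """Return True if the IP appears in the given DNS blacklist zone."""
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname(f"{reversed_ip}.{zone}")
            return True      # the zone answered, so the IP is "listed"
        except socket.gaierror:
            return False     # no record, so the IP is not on the list

    # Placeholder zone and test address, for illustration only.
    print(is_listed("192.0.2.1", "dnsbl.example.org"))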

But, this is just that same old IT problem. Often, we build systems with trusts, configurations and dependencies that exist today. Maybe (most likely) they will not exist in the future. What happens when/if they don’t? Usually, things break. Maybe, if you are lucky, they break in big ways so that people notice. But, if they break in some small way, say in a subtle way that goes unnoticed, they could have dire effects on confidentiality, integrity and availability. As a quick example, what if you were scraping financial data from a website for use in a calculation – maybe an exchange rate. What happens if no one is checking and that website stops updating? Could your calculations be wrong? How would you know? If the exchange rate didn’t vary grossly, but only had small changes over time, what would the effect be? You see, even small issues like this could have HUGE impact. In this scenario, you could lose, or mis-bill, millions of dollars over time…
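
A cheap defense for that scenario is to treat the upstream source as untrusted and refuse stale data instead of silently reusing it. Here is a minimal sketch; the endpoint URL and field names are hypothetical, invented purely for illustration.

    import json
    import time
    import urllib.request

    # Hypothetical endpoint and field names, purely for illustration.
    RATE_URL = "https://example.com/api/exchange-rate"
    MAX_AGE_SECONDS = 24 * 60 * 60   # refuse data more than a day old

    def fetch_rate() -> float:
        with urllib.request.urlopen(RATE_URL, timeout=10) as resp:
            payload = json.load(resp)
        age = time.time() - payload["last_updated"]   # assumed Unix timestamp field
        if age > MAX_AGE_SECONDS:
            # Fail loudly rather than quietly feeding an old rate into calculations.
            raise RuntimeError(f"Exchange rate is {age / 3600:.1f} hours old; refusing to use it")
        return payload["rate"]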

Trust in abandoned projects also raises another security issue. It is pretty likely that projects, systems and applications that are abandoned will stop being patched or maintained. If that were to occur and you are still dependent on the data, what would happen if an attacker took control of the project or the system hosting it? I am not saying this happened at ORDB, but suppose it did. It seems to me that attacking and compromising old abandoned projects that people might still be dependent on is a pretty creative approach to causing some amount of chaos.

I guess the big question that the ORDB situation raises is: what other things like it are out there? What other abandoned projects or technologies are we dependent upon? How might this mechanism come to be used against us in the future?

3 Application Security Must Dos Presentation Now Available

We are pleased to announce the general availability of the slides and audio of our presentation from March 25, 2008.

The event was focused on three strategies for application security.

You can download the slides and audio MP3 from the links below.

PDF of the slides:

http://microsolved.com/files/3AppSecMustDo.pdf

MP3 URL:

http://microsolved.com/files/3AppSecMustDos032508.mp3

** Please Note: the audio MP3 did not come out as well as our others due to a mic issue. The problem has been resolved, but please remember to lower the volume on your MP3 player as the clip is overly loud and a bit “clipped”. We apologize for the issue.

Quick and Dirty Account Change Auditing in Windows – Maybe Even Monitoring???

OK gang, after a conversation last night helping a client keep track of changes in domain accounts, here is a quick and easy way to do so for domains or local machines.

First, use the command line “net user” while logged in as an admin, or “net user /domain” for the domain accounts. Once you see the output and get familiar with it, you can watch for changes pretty easily.

Use the “net user /domain >> output_date.txt” command to redirect the output to a file, replacing “date” with the current numeric date as a reference. You can create a new snapshot file as often as you like. Once you have two or more, simply drop them into your favorite text editor and use the file compare or diff functions to spot any changes between versions.

I suggest you use the editor Context for Windows, but there are a ton of freeware and open source tools that compare files – so choose one to your liking.

If you wanted to get clever with this approach, you could automate it with a batch file built from command line tools and run it routinely using Task Scheduler on your security monitoring system or workstation. Advanced users might even add email alerting using some command line mailer – why, the ideas are endless for automating often tedious user account monitoring with this approach.
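
As one illustration of that automation (not the only way to do it), here is a minimal Python sketch you could schedule with Task Scheduler instead of a batch file. It snapshots “net user /domain” to a dated file and prints a diff against the previous snapshot; the output directory is an assumption.

    import datetime
    import difflib
    import pathlib
    import subprocess

    SNAPSHOT_DIR = pathlib.Path(r"C:\audit\net_user")   # assumed output location

    def take_snapshot() -> pathlib.Path:
        """Capture today's 'net user /domain' output to a dated file."""
        SNAPSHOT_DIR.mkdir(parents=True, exist_ok=True)
        result = subprocess.run(["net", "user", "/domain"],
                                capture_output=True, text=True, check=True)
        path = SNAPSHOT_DIR / f"output_{datetime.date.today():%Y%m%d}.txt"
        path.write_text(result.stdout)
        return path

    def diff_against_previous(current: pathlib.Path) -> str:
        """Return a unified diff between the two most recent snapshots."""
        snapshots = sorted(SNAPSHOT_DIR.glob("output_*.txt"))
        if len(snapshots) < 2:
            return ""   # nothing to compare against yet
        previous = snapshots[-2]
        return "".join(difflib.unified_diff(
            previous.read_text().splitlines(keepends=True),
            current.read_text().splitlines(keepends=True),
            fromfile=previous.name, tofile=current.name))

    if __name__ == "__main__":
        changes = diff_against_previous(take_snapshot())
        if changes:
            print(changes)   # hook a command line mailer or other alerting in here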

If you haven’t played with the net commands in a while in Windows, now might be a good time for a quick refresher. You might even find some more quick and dirty things you could monitor in this manner. Who knows, you might just automate so many items that you get to actually take a vacation once a year again. That, truly, would be worthwhile… 😉

Drop us a comment if you have any other “quick and dirty” monitoring tricks that you use to keep an eye on your organization.

Random Thoughts on VM Security

Virtualization is really a hot topic. It is gaining in popularity and has moved well into the IT mainstream. Of course, it comes with its challenges.

Virtual network visibility was/is a big challenge. Typical network security and troubleshooting tools are essentially blind to traffic that occurs on virtual switches and between virtualized machines. Several vendors have emerged in this space and appliances and enhancements to the virtualization products are likely to minimize this issue in the next 12 months for most organizations. There are already several mechanisms available to observe virtual network traffic, repeat it or analyze it in place. As long as systems and network engineers take this into consideration during design phases, there should be little impact on security architecture. Of course, that may take a few gentle reminders – but overall this seems to be working for the majority of companies embracing virtualization while maintaining tight controls.

The second issue is ensuring that virtualized systems meet established baselines for configuration, security and patching. This is largely a process issue and as long as your policies and processes follow the same flows for virtual machines as real hardware-based systems then there should be few unusual issues. Here the big risk is that an attacker who gains access to one “guest” virtual machine may (MAY) be able to attack the hypervisor that is the “brain” of the virtualization software. If the attacker can break the hypervisor, they MAY be able to compromise the whole real machine and potentially ALL of the virtual systems that the real system hosts or manages. These are conditional statements because the risk exists, but to a large extent, the threats have been unrealized. Sure, some proof of concepts exist and attackers are hard at work on cracking huge holes in the virtualization tools we use – but far, wide and deep compromises of virtualization software and hypervisors have still not emerged (which is a good thing).

I have been asked on several occasions about hypervisor malware attacks and such. I still think these are very likely to be widely seen in the future. Malware can already easily detect VM installs through a variety of mechanisms and attackers have gotten much better at implementing rootkits and other malware technologies. In the meantime, more and more attack vectors have been identified by researchers that allow access to the hypervisor, underlying OS and other virtual guests. It is, in my opinion, quite likely that we will see virtualization focused malware in the near future.
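
To give a feel for how trivially code running in a guest can fingerprint a VM, here is a rough sketch of two of the more common checks. The MAC prefixes are the well-known VMware and VirtualBox OUIs, the DMI path is Linux-specific, and the whole thing is illustrative rather than a description of how any particular malware family works.

    import pathlib
    import uuid

    # Well-known MAC address prefixes assigned to VMware and VirtualBox NICs.
    VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:1c:14", "00:50:56", "08:00:27")

    def mac_looks_virtual() -> bool:
        raw = "{:012x}".format(uuid.getnode())
        mac = ":".join(raw[i:i + 2] for i in range(0, 12, 2))
        return mac.startswith(VM_MAC_PREFIXES)

    def dmi_looks_virtual() -> bool:
        # On Linux guests the firmware product name often gives the hypervisor away.
        dmi = pathlib.Path("/sys/class/dmi/id/product_name")
        if not dmi.exists():
            return False
        name = dmi.read_text().strip().lower()
        return any(v in name for v in ("vmware", "virtualbox", "kvm", "virtual machine"))

    if __name__ == "__main__":
        if mac_looks_virtual() or dmi_looks_virtual():
            print("Probably running inside a virtual machine")
        else:
            print("No obvious VM artifacts found")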

Another common question I get is about the possibilities of extending anti-virus and other existing tools to the hypervisor space for additional protection. I am usually against this – mostly due to the somewhat limited effectiveness of heuristic-based technologies and out of fear of creating yet another “universal attack vector”. Anti-virus exploits abound, so there is no reason to believe that hypervisor implementations wouldn’t be exploitable in some way too. If that were to be the case, then your silver bullet hypervisor AV software that protects the whole system and all of the guests, just turns into the vector for the “one sploit to rule them all”.

I truly believe that the options for protecting the hypervisor should NOT lie in adding more software, more complexity and more overhead to the computing environment. As usual, complexity increases come with risk increases. Instead, I think we have to look toward simplification and hardening of virtualization software. We have to implement detective mechanisms as well, but they should lie outside of the hypervisor somehow. I am not saying I have all of the answers, I am just saying that some of the current answers are better than some of the others…

What can you do? Get involved. Get up to speed on VM tools and your organization’s plans to deploy virtualization. Evangelize and work with your IT team to make sure they understand the security issues and that they have given security the thought it deserves. Share what works and what doesn’t with others. Together, we can all contribute to making sure that the revolution that virtualization represents does not come at the price of severe risk!

A Word About Site Takedown Vendors

I just talked with a client who had been using an unnamed “take down service provider” for some time now. These vendors specialize in removing sites used in phishing attacks and drive-by-download attacks from the Internet. Many claim to have elite connections at various hosting providers that they can call upon to quickly remove sites from production.

Using a take down vendor is basically a bet on outsourcing. You are betting your payment to them that they can get a site taken down with less time, damage and effort than you could on your own, and that working with them will reduce your time requirements during incident response, when cycles are at a premium. In the real world, however, these bets often may not pay off as well as you might think…

For example, take down companies that have a lot of clients may be working a number of cases and sites at any given time. If they don’t sufficiently staff their teams at all times, there may be long delays caused by resource constraints on their side. Getting them “into action” is also a common complaint about more than a few of these vendors in various infosec forums. Often, their customers claim that gathering the information the take down vendor needs before it will investigate and act is about the same amount of hassle as working with registrars and hosting providers to get sites taken down yourself.

Of course, not all take down vendors are difficult. There are a few of them out there who get glowing reviews by their customers, but a little quick Internet research showed there were a lot more that got bad reviews than good. In addition, the old adage of “you get what you pay for” seemed to apply to the quick checks we did. Many of the lower cost vendors did not have very good commentary about them and the bad references seemed to diminish as you went up the pay scale.

Another tip from a client of ours was to beware the take down vendors that want a retainer or monthly fee. You may only need their services a few times a year and you are likely to save money using a per-occurrence approach over the long run. Additionally, the monthly service fee vendors also appear to be some of the most commonly complained about – likely because they may have a tendency to oversell and understaff in the ebb-and-flow world of incident response.

The bottom line is that take down vendors may be of use to you or they may not be. Identifying your needs and internal capabilities is a good place to start. If you do choose to partner with a take down vendor, make sure you do your research, and that includes customer references, Internet searches and pricing comparisons. You can probably find a couple of vendors to fit your needs and your budget. It would probably not hurt to give their response line a call before the purchase as well and see just what level of service you can expect.

BTW – my original client that started this discussion found that simply opening a call and trouble ticket with the ISP was enough to get them to accept incoming take down requests, with lists of sites, in near real time via email or fax. The couple of folks I talked to who have been through this said that many of the largest ISPs and hosting providers have gotten a lot easier to work with and more responsive in the last couple of years. They suggested that if the attacks seem to revolve around large, common providers, you might want to take an initial stab at talking with them. If they seem to be responsive and engaged, save your incident response budget and work directly with the providers, and save your take down dollars for those obscure, hard to reach or unresponsive providers.

InfoSec Spring Cleaning

It’s that time of year again; spring is in the air in much of the US. That usually means it’s time to do a little clean-up work around your organization.

Now is a good time to:

  • Review policies, processes and exceptions and make sure they are current and all still apply.
  • Check for expired accounts or accounts that should have their passwords changed – especially service accounts.
  • Update your awareness program and plan for activities and areas of key focus for the rest of the year.
  • Review all cryptographic certificates and such to make sure none have expired or are close to expiration (see the sketch after this list).
  • Begin to plan your staff coverage for IT vacations, summer events and the periods when staffing is usually reduced.
  • Begin the process of hiring those summer interns.
  • Review the logs and archives and back them up or destroy them as needed.
  • Handle any other periodic or seasonal security planning activities.
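
For the certificate item above, a quick way to spot-check a handful of TLS endpoints is a short script like the one below. The host list is a placeholder and the 30-day warning window is an arbitrary choice; adjust both for your environment.

    import datetime
    import socket
    import ssl

    # Placeholder list of services to check; swap in your own hosts and ports.
    ENDPOINTS = [("www.example.com", 443)]
    WARN_DAYS = 30

    def cert_expiry(host: str, port: int) -> datetime.datetime:
        """Return the notAfter date of the certificate presented by host:port."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))

    for host, port in ENDPOINTS:
        expires = cert_expiry(host, port)
        days_left = (expires - datetime.datetime.utcnow()).days
        status = "RENEW SOON" if days_left < WARN_DAYS else "ok"
        print(f"{host}:{port} expires {expires:%Y-%m-%d} ({days_left} days) {status}")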

Now is a very good time to do all of these things. It is also a good time to put together your plans for the rest of year and make sure that first quarter hasn’t broken your budget already. 😉

Are there other security spring cleaning items your team does every year? If so, drop us a comment and share your plans with others. More brains are better than one!

An Ouchie for “The Self Defending Network”

As we covered in an earlier post, there appears to be a security issue with Cisco Works.

Now, more information has emerged about what appears to be a back door that allows anyone who can telnet to a port on the Cisco Works box to execute OS commands with high levels of privilege, essentially turning the Cisco configuration and monitoring tool into a pretty powerful weapon for an attacker.

No word yet on how this back door got into the code, what steps have been taken to make sure this doesn’t happen again or anything else beyond the “oops, here is a patch” statement. Cisco is hopefully strengthening their code management, security testing and QA processes to check for this and other forms of application security problems before they release code to the public.

Once again, Cisco has shown, in my opinion, a serious lack of attention to detail in security. Given their mission-critical role in many enterprise networks and the global Internet, we should and do expect more from them than from an average software developer. Please, Cisco, invest in code testing and application security cycles in the SDLC before something really bad happens to a whole bunch of us…

Yet More SSH Fun – This Time With Humans!

OK, so last week we took an overview of SSH scans and probes, and we dug a bit deeper by examining one of our HoneyPoints and the activity it received in a 24-hour period.

This weekend, we reconfigured that same SSH HoneyPoint to appear as a known vulnerable version. And, just in time for some Monday morning review activity and our blog posting, we got what appears to be an automated probe and then about an hour later, a few attempts to access the vulnerable “service” by a real human attacker.

Here is some of the information we gathered:

The initial probe occurred from a 62.103.x.x IP address. It was the same as before, a simple connection and banner grab. The probe was repeated twice, as per the usual activity, just a few seconds apart.
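
For readers following along, the “banner grab” these probes perform is about as simple as reconnaissance gets: open a TCP connection to port 22 and read the version string the server volunteers before any authentication happens. A minimal sketch follows; the address is a placeholder, and you should only point this at systems you are authorized to test.

    import socket

    def grab_ssh_banner(host: str, port: int = 22) -> str:
        """Read the version banner an SSH server volunteers on connect."""
        with socket.create_connection((host, port), timeout=5) as sock:
            return sock.recv(256).decode(errors="replace").strip()

    # Placeholder address; SSH servers announce themselves before any login,
    # e.g. "SSH-2.0-OpenSSH_3.9p1", which is exactly what these scanners harvest.
    print(grab_ssh_banner("203.0.113.10"))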

This time, ~40 minutes later, we received more connections from the same source IP. The attacker connected only to port 22; we saw no port scanning, web probes or other activity from that address in that time frame.

The attacker made several connections using the DropBear SSH client. According to the banner the client sent to the HoneyPoint, they appeared to be using version 0.47, which has a couple of known security issues.

The attacker performed various SSH handshake attempts and a couple more versions of banner grabbing tests. Over the next ~20 minutes, the attacker connected 5 times to the HoneyPoint, each time, probing the handshake mechanism and grabbing the banner.

Finally, the attacker decided to move on and no more activity has been seen from the source IP range for a day and a half.

The attacker source IP was from a Linux system in Athens, Greece that appears to belong to an ISP. That system has both OpenSSH 3.9p1 and regular telnet exposed to the Internet. The system advertises itself by hostname via the telnet prompt and the name matches its reverse DNS entry.

We contacted the abuse contact of the ISP about the probes, but have not received any comment as of yet.

The interesting thing about this specific set of probes was that the human connections originated from the same place as one of the banner grabbing scans. This is unusual and is not something that we have observed in the recent past. Usually, the probes come from various IP addresses (likely some form of worm/bot-net) and we rarely see any specifically identifiable human traffic. So, getting the attention of a human attacker is certainly a statistical anomaly.

The other interesting piece of behavior here was that the attacker did not bother to perform even a basic port scan of the target. They focused specifically on SSH and, when it did not yield to their probes, they moved on. There were several common ports populated with interesting HoneyPoints, but this attacker did not even look beyond the initial approach. Perhaps they were suspicious of the SSH behavior, perhaps they were lazy or simply concentrating on SSH-only attacks. Perhaps their field of targets is simply so deep that they just moved on to easier, more usual targets. It is likely we will never know, but it is certainly interesting, no doubt.

Thanks to the readers who dropped me emails about their specific history of SSH problems. I appreciate your interest in the topic and I very much appreciate the great feedback on the running commentary! I hope this helps some security administrators out there as they learn more about understanding threats against their networks, incident handling and basic event research. If there are other topics you would like to see covered in the future, don’t hesitate to let me know.