About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

Take Time to Check Your Remote Access Tools

Over the last several months we have worked a ton of incidents where compromise of systems and networks was accomplished via Internet-exposed terminal servers, VNC and other remote access applications. Often, these same administration-friendly tools are used in internal compromises as well. While terminal services and VNC certainly have value, they can be configured, and your implementations hardened, to minimize the chances of attack and compromise.

Careful consideration should be given before exposing any form of remote desktop access to the Internet. Attackers are very good at low-and-slow password grinds, social engineering and other techniques that make these exposures prime gateways into an environment. Unless you have a serious plan for managing the risk and excellent levels of controls, raw exposures of these tools to the Internet should be avoided. If you need them for remote access, consider some form of IP address restriction, authentication at a router for dynamic ACLs, or forcing a VPN connection to reach them. Neither terminal services nor VNC should be considered a replacement for a robust VPN, and with tools like OpenVPN offering free or low-cost alternatives, it is just silly not to leverage them over simple port exposures.
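As a minimal sketch of the IP address restriction idea, a gateway, wrapper script or firewall front-end could consult an allow list before ever letting a connection touch the terminal server or VNC port. The networks below are illustrative documentation addresses (RFC 5737), not recommendations:

```python
import ipaddress

# Hypothetical allow list: only these networks may reach the remote
# access service; everything else is dropped before it touches RDP/VNC.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # e.g., office VPN egress range
    ipaddress.ip_network("198.51.100.10/32"),  # e.g., an admin's static IP
]

def is_allowed(source_ip: str) -> bool:
    """Return True only if the source address falls inside an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("203.0.113.55"))   # inside the office range -> True
print(is_allowed("92.240.68.152"))  # unknown address -> False
```

In practice the same check usually lives in a router ACL or host firewall rather than application code, but the logic is identical.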

Even if you do not expose your terminal servers to the Internet, it is likely a great idea to make sure that they are hardened. Here is a great PowerPoint presentation that covers hardening both terminal server and Citrix deployments. You can also find more guidance in the CIS baseline tools and documents. There are several good documents around the net for hardening terminal services in line with various baselines.

VNC can also be configured to be more secure than a “base install”. Start with which VNC implementation you run: UltraVNC and TightVNC have some very powerful security configurations that can help you minimize your risks. Choosing stronger authentication mechanisms and implementing IP address controls, even internally, can really help keep an attacker from running “hog wild”, even if they do gain some sort of user access or compromise a workstation with a bot-net client. Consider the use of “jump boxes” dedicated to being the terminal server or VNC gateway to all other machines. If you implement these “choke points”, you can uber-harden them, monitor them closely for bad behaviors and be assured that, without access to them, an attacker can’t easily use your remote access servers against you.

Just take a few moments and think it through. Sure these tools make it easy for admins. It makes it convenient for them to do their work and admin remote machines, but it also makes it easy for an attacker. Hardening these tools and your architecture is a great way to achieve that balance between usability and security. You can get work done, but you can do so knowing that you have enough controls in place to make sure that it really is you who is doing the work.

Hackers Hate HoneyPoint


We have been getting so much great feedback and positive response to our HoneyPoint products that Mary Rose, our marketing person, crafted this logo and is putting together a small campaign based on the idea.

We are continuing to work on new capabilities and uses for HoneyPoint. We have several new tricks up our sleeve and several new ways to use our very own “security swiss army knife”. The capabilities, insights and knowledge that the product brings us are quickly and easily integrated into our core service offerings. Our assessments and penetration testing bring this “bleeding edge” attack knowledge, threat analysis and risk insight to our work. We routinely integrate the attack patterns and risk data from our deployed HoneyPoints back into the knowledge mix, adding new tools, techniques and risk rating adjustments based on the clear vision we obtain from HoneyPoint.

This is just one of the many ways that HoneyPoint and the experience, methodology and dedication of MSI separate us from our competitors. Clients continue to love our rapport, reporting formats, flexibility and deep knowledge – but now, thanks to HoneyPoint, they also enjoy our ability to work with them to create rational defenses to bleeding edge threats.

You can bet that you will see more about HoneyPoint in the future. After all, hackers hate HoneyPoint, and in this case, being hated is fine with us!

A Web Application Cheat Sheet & More

I got a lot of response from folks about my last cheat sheet post. Here is another one that many folks might find useful.

This one, from Microsoft, describes quick references for the Microsoft Web App Security Framework. This is a pretty useful sheet and one that our techs use a lot.

I also find this one for Nessus and Nmap to be pretty useful.

I found this one for you CISSP study folk.

This one for PMP study folk.

And, lastly, for all the new waxers of armchair economics, this one about sub-prime mortgages…

OK,OK, I could not resist this one, THE INTERACTIVE SIX DEGREES OF KEVIN BACON CHEAT SHEET!

Hope you enjoy these, and now back to your regularly scheduled infosec blogs… 🙂

Webcollage Agent Proxy Scans – Likely a Bot

Here is a quick example of a scan that we have been seeing a lot of lately, especially on our HoneyPoints deployed around consumer ISP networks. The example is about a month old, but proxy scans are a very common occurrence.

HoneyPoint shows the following alert (some data modified for privacy):

XXX received an alert from 92.240.68.152 at 2008-11-08 09:57:07 on port 80
Alert Data: GET http://www.morgangirl.com/pics/land/land1.jpg HTTP/1.0
User-Agent: webcollage/1.135a
Host: www.morgangirl.com

Now, the XXX replaces the HoneyPoint location, so it remains obscured from the public.

This is a HoneyPoint emulating a web server, listening on port 80.

The Alert Data: field shows the request received, which appears to be a proxy attempt to get a graphic.

The source of the request was 92.240.68.152 which the whois plugin shows to be (trimmed):

% Information related to '92.240.68.149 - 92.240.68.159'

inetnum:        92.240.68.149 - 92.240.68.159
netname:        ADDIO-LTD-20080414
descr:          ADDIO Ltd.
descr:          Server farm Daype.com
country:        LV
admin-c:        AS11278-RIPE
tech-c:         AS11278-RIPE
status:         ASSIGNED PA
org:            ORG-IOMA1-RIPE
mnt-by:         lumii-mnt
source:         RIPE # Filtered

organisation:   ORG-IoMa1-RIPE
org-name:       Institute of Mathematics and Computer Science of University of Latvia
org-type:       LIR

It is interesting that the user agent is likely faked as webcollage, a screen saver type application for displaying random graphics from the web. Another possibility is that a previous scanner took the bait of the 200 return code from the HoneyPoint and added it as an open proxy. If that is true, then we may be on a proxy list and will get to see many requests from people attempting to use open proxies. Getting a HoneyPoint added to these lists has given us great insight into web attacks, scams and phishing attacks in the past.

Now you have a variety of options: you could block the source IP address to kill further scans and probes from that host, or report the suspicious activity to the ISP in question. If a review of the target web site showed illicit activity, you could also analyze it and take action to alert its owners. Many times these quick investigations have identified compromised hosts on both ends, or compromised web hosts that are spreading malware. Plugins are available, or can be created, to automate many, if not all, of these activities.
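As a rough sketch of what such an automated plugin might look like, here is a hypothetical parser that pulls the source host out of an alert line (the format is copied from the sample above; the real HoneyPoint plugin interface may differ) and emits a firewall block rule:

```python
import re

# Sample alert text in the format shown earlier in this post.
ALERT = ("XXX received an alert from 92.240.68.152 at "
         "2008-11-08 09:57:07 on port 80")

def parse_alert(alert: str):
    """Extract the source IP and port from a HoneyPoint alert line."""
    m = re.search(r"from (\d{1,3}(?:\.\d{1,3}){3}) .* on port (\d+)", alert)
    if not m:
        return None
    return {"source_ip": m.group(1), "port": int(m.group(2))}

def block_command(source_ip: str) -> str:
    """Emit a firewall rule to drop further traffic from the host."""
    return f"iptables -A INPUT -s {source_ip} -j DROP"

event = parse_alert(ALERT)
print(event)                              # {'source_ip': '92.240.68.152', 'port': 80}
print(block_command(event["source_ip"]))  # iptables -A INPUT -s 92.240.68.152 -j DROP
```

A real plugin would hand the generated rule to the firewall (or an abuse-report template to the ISP) rather than just printing it.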

In this case, since this is simply a quick proxy attempt, and a cursory review of the target web site does not show any overt malicious activity, we will pass on this one and just use it as an example.

HoneyPoint can be used in a variety of ways. Internet-exposed HoneyPoints can give you deep insight into the types of targeting and exploit activity your networks are experiencing, without the need to troll through immense log files or dig through noisy NIDS event patterns. HoneyPoint is great at collecting black list hosts, scanners and bot patterns. The longer clients use HoneyPoint, the more they discover they can do with it. It becomes like a security swiss army knife to many clients.

Check out more information about HoneyPoint here. Follow me on twitter here to learn more about HoneyPoint, the threats we capture and other security and non-security info.

3 Improvements for Financial Applications

Our tech lab reviews several financial applications every year from a variety of vendors focused on the financial institution market space. The majority of these applications perform poorly to some extent in security and/or usability. Here are three key tips for vendors to keep in mind when they or their clients ask us to assess their application.

1. Make sure the application actually works as it would in a production environment. Make sure it is reasonable in terms of performance. The idea of performing our lab assessment is to model risks in a real world simulation. Thus, if the system is not configured and working as it would in a real deployment, then the validity of the test is poor. Many of the applications we test simply do not function as expected. Many times, their performance is so slow and horrible that it impacts the availability metric. Basically, by the time it is submitted for the complete application assessment or risk assessment, it should work and be installed in a QA environment just as it would be in production. If there are any variances, be prepared with a document that explains them and their anticipated effects. Be ready to discuss and defend your assertions with a team of deeply technical engineers.

2. Do the basics. Make sure you meet an established baseline like PCI, ISO or some other basic security measure. That means ensuring that controls are in use to provide for confidentiality, integrity and availability. That means that you are protecting the data properly during transit, storage and processing. That means that you and/or your client have an idea about how to provide preventative, detective and responsive capabilities around your product. Make sure your documentation clearly explains any security assumptions or add-on products required.

3. Be ready to handle issues. If and when we find a security issue, be it an overflow, input problem or best practice variance, be ready to mitigate the issue and submit a fix. Many times it takes months for vendors to handle the issues we find, and this is certainly NOT good for their relationship with the client. Almost every full assessment our lab conducts involves some kind of deployment timeline and crunch from the customer. Nothing seems to go worse for vendors whose products we test than when an issue is found and they become unresponsive to us and/or their client. Seriously, JUST DON’T DO THIS. Be prepared to apply resources to fix issues when we test the application. Very few applications (less than 2%) pass through the lab process without some sort of issue. This is NOT a basic process; it is a seriously deep, complex and heavily leveraged process for finding holes and measuring impact. Be prepared.

I hope this post helps both clients and vendors be better prepared for their testing. I think it gives the basic ideas for the approaches that we know do not work. We really want your applications to be secure, thus the level of detail we apply. Let us know if you have any questions. We are also about to open the lab registration window for 1Q09, so if you have applications you would like tested, let us know and we will try and get them on the schedule.

Security Cheat Sheets

One of the best tools that the technicians at MSI rave about is a series of information security “cheat sheets” that they keep around the lab. These small, easy to view posters make quick visual references for common commands, tool parameters, etc. They can be an excellent source for remembering those specific commands or settings that always seem to elude techs or that are just so convoluted that you have to look them up anyway.

The MSI techs suggest checking out this site for a whole library of these tools.

If there are other sites out there that your team uses to obtain these helpful posters, please reply with a comment.

If you have made your own cheat sheets, please send us a link if they are public and we will post the ones we compile at a later date. Thanks for reading!

Finding Reputable IT Firms

How do organizations, especially SMEs, find reputable, dependable IT support help?

For example, I have a client in Cleveland that really needs a strong network and system management company they can depend on. The problem is that they are a small to mid-size financial institution, so trust really matters. Of course, I am aware of all the vendor management mechanisms and such, but we need to know how to find reputable vendors to even approach.

The client is reaching out to their peers for references, but I was hoping that one of our readers might know of a mechanism or an “Angie’s List” style site for determining relevant capabilities and such for IT firms. If those pieces are not out there, then maybe this is a business idea for you budding entrepreneurs.

Please, let me know your thoughts and ideas!

RE: SANS Are We Doomed?

This kind of stuff is, in my opinion, exactly why management and consumers grow sick of hearing about information security and cyber-risk in general. For years now, security folks have been shouting to high heaven about the end of the world, cyber-terrorism, the cyber-jihad and all of the other creative phrasings for increased levels of risk and attacks.

SANS at least asks for good things too, items that represent hope, but that list is always small. It is always, as they point out, so much easier to create a list of threats and attack points than a list of what we have done, and are doing, right. That’s human nature: to point to the shortcomings.

My point is that just as many real world risk pundits have said, we have to look at things through a higher level lens. We have to create RATIONAL security. Yes, we have to protect against increases in risk, black swans, 0 day exploits, huge bot-nets and all of the other examples of “bleeding edge threats”, but we have to realize that we have only so many resources to bring to bear and that risk will NEVER approach ZERO!

Here is a real world example:

I recently worked an incident where a complete network compromise was likely to have occurred. In that event, the advice of another analyst was to completely shut down and destroy the entire network, rebuild each and every device from the ground up and come back online only when a state of security was created. The problem: the business of the organization would have been decimated by such a task. Removing the IT capability of the organization as a whole was simply not tenable.

Additionally, even if all systems were “turned and burned” and the architecture rebuilt from the ground up, security “Nirvana” would likely not have been reached anyway. Any misstep, misconfigured system or device or mobile system introduced into the network would immediately raise the level of risk again. So would connecting the newly built “secure” network to the Internet. If 1 minute after the network went live a user clicked on the “dancing gnome” from a malicious email, then the network is in a risk state again. Not to mention or even dive into the idea that an internal attacker or rogue admin could exist inside the environment, even as it was being rebuilt.

Thus, the decision was made to focus not on eliminating the risk, but on MINIMIZING it. Steps were taken to replace the known compromised systems. Scans and password changes became the order of the day, and entire segments of the network were removed from operation to minimize the risk during a particularly critical 12 hour cycle where critical data was being processed and services performed. Today, this IT environment remains in a semi-trusted state, but they are quickly implementing a phased approach to restore full trust to the environment and bring it into compliance with security best practices.

Has there been some downtime? Sure. Has there been some cost? Sure. How about user and business process pain? Of course! But the impact on their organization, business bottom line and reputation has been absolutely less than if they had taken the “turn and burn” approach. They still have risk. They still have threats. They still have vulnerabilities, BUT they are moving to deal with them in a RATIONAL fashion.

RATIONAL response to risk is what we need, NOT gloom, doom and FUD. Finding the holes in security will always be easy, but understanding what holes need to be prevented, wrapped in detection and protected by response is the key. Only when we can clearly communicate to management and consumers alike that we have RATIONAL approaches to solving the security problems are they likely to start listening again. After all, who does anything different when the Internet security level moves from “mochachino” to “dirty martini” or vice versa???

HoneyPoint Event Stats

I have gotten a few inquiries about the average number of events per day that HoneyPoint Security Server deployments catch on typical networks. While this question is pretty hard to answer in a general sense, since networks differ by size, deployment security, policies and processes, we can talk about averages across multiple client networks and our own HoneyPoint sensor networks.

On average, Internet-visible HoneyPoint deployments experience around 4 events per HoneyPoint per day. This can vary depending on the services emulated, but in general, averaging SMTP and web (by far the two largest receivers) against HoneyPoints deployed on rarely scanned ports yields this figure over time. Those are amazing statistics when you consider that each event is a genuine probe, scan or attack! Usually, clients use Internet-exposed HoneyPoints as a source of threat intelligence, trend tracking for frequency and source variations, and input for automated blocking configurations such as black hole lists and web application address filters. Less often, clients use this information for risk assessment and response, actively tracking the data and sources and taking manual action.
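A quick back-of-the-envelope calculation, using nothing but the average quoted above, shows why most clients automate rather than review these events by hand:

```python
# Volume estimate based on the average cited above: ~4 genuine
# probe/scan/attack events per Internet-visible HoneyPoint per day.
EVENTS_PER_HONEYPOINT_PER_DAY = 4

def yearly_events(honeypoints: int,
                  per_day: int = EVENTS_PER_HONEYPOINT_PER_DAY) -> int:
    """Rough estimate of events per year for a deployment of this size."""
    return honeypoints * per_day * 365

print(yearly_events(10))  # ten sensors -> 14600 events a year
```

At that volume, feeding the sources into black hole lists automatically makes far more sense than manual triage of each event.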

Internally, most clients experience 3-4 events per month on average. These events are usually treated very seriously, since any HoneyPoint traffic internally is suspicious at best and malicious at worst. Most security teams leveraging HoneyPoint use these events as triggers for true security incidents. They launch full investigations and either mitigate or minimize the discovered issues. They are able to do this and focus on these critical events due to the low number of them they experience, the lack of false positive events they see and the placement of the HoneyPoints close to the actual assets they are tasked with protecting. Many clients have moved away from using NIDS as any type of action item at all, and refer to their NIDS deployments only as forensic and correlation data for incidents triggered from HoneyPoints and log analysis/log management solutions.

While HoneyPoint Security Server is not a panacea for information security, it is a very strong addition to a security program. Clients are continually discovering new uses, new capabilities and new ways to leverage the system to further reduce their resource requirements. HPSS has proven to be a low noise, high signal, effective, traditional approach to providing threat management, security intelligence and detective capabilities for organizations of any size.

If you are interested in hearing more about the averages and what you can expect from a HoneyPoint deployment, just let us know. Give us a call or drop us a line and we will be happy to share the metrics we have with you!

Port Mining with HoneyPoints

A client and I have been playing around with a new technique that we are calling port mining. In this approach, we use HoneyPoint Security Server and HoneyPoints deployed in key locations to mess with worms, scans and tools.

The process is very basic. We configure a simple HoneyPoint so that, instead of sending the various text responses it usually sends down the connection, it sends a large binary file such as an MP3, ISO or other binary data. Then we deploy the HoneyPoint and have it listen on a port for incoming traffic.

When the HoneyPoint gets a completed TCP connection, it immediately shoves the binary content down the pipe. It then waits for a response and sends either the same file again or another file. Very basic, right? Yes, indeed. However, we have seen three effects from this process:

1. In many cases, the file transfer of the first huge file completes and the connection dies with a timeout. In our lab testing, this was due to the unexpected input size and content of the data sent from the HoneyPoint, which has caused multiple forms of tools and malware to simply crash.

2. In other cases, we have seen the file transfer complete and the tool or malware respond only to get the file again down the pipe. We have watched this process act like a LaBrea scenario where the tool, scan or malware is significantly slowed by the data (of course, we are also using a lot of our own bandwidth) and in some cases we were able to cause the MS08-067 scans we were seeing to wait up to 50 mins for each 8 MB MP3 we sent and do this hundreds of times! Effectively, we slowed down that system from further scans while it kept playing with our HoneyPoint.

3. In very few cases, we see the connection terminate upon partial sending of the binary data. In about half of these cases, the connection terminates properly (so likely we had no effect) but in the other half, we see odd disconnections (unknown, but possible crash of the malware). In the lab, we have seen this happen with a few tools due to unexpected inputs causing exceptions in the code.
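For the curious, the basic mechanics above can be sketched in a few lines of Python. This is a toy illustration of the port mining idea, not the actual HoneyPoint implementation:

```python
import socket

def make_payload(megabytes: int = 8) -> bytes:
    """Stand-in for the large binary file (MP3, ISO, etc.) served as bait."""
    return b"\x00" * (megabytes * 1024 * 1024)

def serve(port: int = 8080) -> None:
    """Shove the payload down every completed TCP connection, then again on any reply."""
    payload = make_payload()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, _addr = srv.accept()
        try:
            conn.sendall(payload)       # unexpected binary blob on connect
            conn.settimeout(60)
            if conn.recv(4096):         # any response? feed it the file again
                conn.sendall(payload)
        except (socket.timeout, OSError):
            pass                        # scanner timed out, gave up or crashed
        finally:
            conn.close()

# serve(8080)  # run on a rarely scanned port to tie up scanners and malware
```

Pointing this at a rarely used port reproduces the behavior described above: well-behaved clients time out, and fragile tools choke on the unexpected input size and content.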

Now, it should be said, that we are just “playing” with this approach. We are not sure how or if this will be beneficial to anyone, but it was a fun idea to mess with scanners and such in such an easy way. Give it a try and let us know what you think!

PS – Extra points (and fun) can be had for finding the worst MP3s of the most horrible songs that have the largest effective use as a port mine defensive component. So, bust out your one-hit-wonder MP3 collection and see how your mileage varies. 🙂