About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

Google Trends Look at Vulnerabilities

This morning I ran a quick Google Trends look at three types of vulnerabilities: buffer overflows, SQL injections and cross-site scripting (XSS). The results are interesting to me, though certainly no shock.

You can view the graphs and data here.

What we see are spikes of interest in injections, while both XSS and buffer overflow searches remain at about the same level as they have for the last year or so. This is, of course, no surprise, given the recent spate of injection compromises, defacements and malware attacks. What is interesting to me is the news graph. I did not think it would be quite so spiky. There are a number of places where mentions in the press of both injections and XSS spike heavily. That is really good news, because it means that the mainstream press is covering those topics to a larger extent. The more mainstream press coverage the issues get, theoretically, the more awareness there should be of the topic.

Also interesting to me is that Indonesia shows up as the largest source of searches for injection, and Malaysia is number 7. In XSS, Indonesia shows up at number 7, while Malaysia does not make the list. More than likely, these search patterns are good indicators of the research and activity on both sides of the two countries’ “hacking war”, a sort of online cyber-conflict that has been taking place for the last few years without much mainstream media attention.

South Korea shows up on all of the lists as a popular source of search activity around these vulns, and some of the other countries on the lists that should be drawing interest are Iran, Israel and India. Obviously, some groups in these countries are building cyber capabilities, as they are generating enough search volume to make the lists. This brings up some interesting questions.

With detailed analysis over long periods, perhaps this data would be useful for tracking the growth of capabilities in a given locale? Also, from a corporate security stance, is there any way that the data could be used in the short term to provide a focal lens for risk management? How could analysis of short-term data be used to “forecast” potential areas of trouble in the near future? Does increased research into a topic area correlate with near-future increases in attacks from that particular vulnerability family?
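That last question lends itself to a quick sanity check: line up a search-interest series against an attack-count series and see whether introducing a lag improves the correlation. A minimal sketch follows; the weekly numbers are invented purely for illustration, and a real study would need actual Trends exports and incident data.

```python
from math import sqrt

# Invented weekly series: search interest for a vuln class vs. observed attacks.
search_interest = [10, 12, 15, 40, 80, 75, 60, 30, 20, 15]
attack_counts   = [ 5,  6,  7, 10, 25, 60, 70, 55, 30, 18]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Same-week correlation vs. a one-week lag (interest leading attacks).
same_week = pearson(search_interest, attack_counts)
lagged = pearson(search_interest[:-1], attack_counts[1:])
print(f"same week: {same_week:.2f}, interest leading by one week: {lagged:.2f}")
```

In this made-up data the lagged correlation is much stronger than the same-week one, which is exactly the kind of signal the “forecasting” idea would look for in real data.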

Further study into gaining intelligence from Google Trends is needed. There just might be a way to tap it for insight into emerging patterns and a deeper understanding of attack type prevalence and other indicators of underground strategic changes. Only time will tell how much power we can wring from this juicy cache of information. In the meantime, play with it a bit and see what interesting stuff you can find.

A Taste of Our Own Medicine…Real Life DR/BC Experience for the Assessors

It has been a turbulent few days around MSI and in Central Ohio at large. On Sunday, we experienced the leftovers of Hurricane Ike. Being in the Midwest, we were not quite prepared for what came our way. Wide swaths of the state experienced winds as high as those associated with a minor hurricane or tropical storm. Hundreds of thousands were left without power, and damage to homes and property took a heavy toll. While the damage to our area was a minor issue compared to the beating that Houston and parts of Texas took – it was, nevertheless, a major event for us.

Today is 3 full days after the storm event here in Columbus and many remain without power and other “conveniences”. Grocery stores, gas stations and restaurants are just beginning to reopen in many parts of the city. Problems with various services and businesses abound. For example, many schools are still closed, several doctor and dentist offices still have no power and there are ongoing ups and downs for telephone services and ISPs.

Around MSI, we have been fighting the damage from the storm and Mr. Murphy. While we have been among the lucky ones to keep power on a stable basis, many of our team have been spending long hours in the conference room watching TV and playing video games after business hours. Many of them have no electricity at home, so this seems to be an easy way for them to spend some time. Our ISP has had two outages in the last two days. One was around 5 hours due to a power failure for some of the equipment that manages the “last mile” while the other was less than an hour this morning when their generator for their local data center developed an oil leak. Thankfully, both have been repaired within their SLA and none have interfered with our progress on engagements.

We have prepped our warm site for extended outages and just as we were about to activate it for these ISP outages, the connectivity returned. We have learned some lessons over the last couple of days about dealing with email outages, web presence outages and certainly gained some deeper insights into a few slight dependencies that had escaped us, even during our 2x per year DR testing. We still have some kinks to work out, but thankfully, our plans and practice paid off. We were prepared, the team knew our SLA windows for our vendors and our clients and our processes for ensuring continuation of engagements worked well!

We got to know first hand exactly how good prep and good processes for DR/BC pay off. We took our own medicine and the taste wasn’t all that bad.

The moral of the story, I guess, is that DR/BC is a very worthwhile process. So the next time we are doing an assessment for you and ask some tough questions about yours, don’t take it personally – as we said, we have learned first hand just how worthwhile the front-end investment can be.

Learn more about the storm:

News about the storm here.

American Electric Power outage map.

Local news.

PS – Special thanks to the folks who signed up for the State of the Threat presentation this morning. Sorry for the need to postpone it. We worked with Platform Labs throughout the day yesterday attempting to coordinate the event, but at the end of the day yesterday they still had no power. Thus, the postponement. Thanks for your patience and understanding on the issue. The good news is that Steve at Platform says they are back up and running as of this morning! Good news for Steve and everyone else!

“Secure Code” Will Save Us — Right????????

I know we have always preached that application security is much more cost effective when it is baked in. But, the reality of today’s application horizon is that security is an afterthought, at best, for a majority of web applications. A variety of reasons ranging from inexperienced developers to legacy technologies and from apathetic customers to security issues in core technologies have made this so. In fact, in our application security testing services we often encounter applications in production environments that fail to protect against attacks from 10 years ago!

The average development team we work with seems to be interested in fixing the problems, but often lacks a basic understanding of how common attacks like SQL injection and XSS work. Without a basic understanding of the threats, how on earth can they be expected to protect against them? Recently, we spent more than four hours explaining the white list vs. black list approaches to a certain development team who shall remain nameless. It took almost a half day of conference calls and email exchanges for them to understand how these basic approaches to filtering user input could be employed to protect their application against input validation attacks. It was not that they were not trying. The problem seemed to be that their application was developed by a small group of intern-level programmers, and the team members with real programming experience (the one(s) who had done the engineering and application design) were long since gone from the company or reassigned to other projects. Without experienced oversight and guidance, the interns had produced working code, for sure, but without any underlying controls for security, availability or reliability!
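For readers in the same boat as that development team, here is a minimal illustration of the two filtering approaches. The patterns and field format are just examples, not a complete filter; the point is that a black list tries to enumerate evil, while a white list only accepts what it expects.

```python
import re

# Black list approach: reject input containing known-bad patterns.
# Fragile -- attackers routinely find encodings and variants the list misses.
BLACKLIST = re.compile(r"('|--|;|<script|\bunion\b|\bselect\b)", re.IGNORECASE)

def blacklist_ok(value):
    return not BLACKLIST.search(value)

# White list approach: accept only input that matches the expected format
# (here, a simple username field). Anything not explicitly allowed is rejected.
USERNAME = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def whitelist_ok(value):
    return USERNAME.match(value) is not None

print(blacklist_ok("alice"))            # passes
print(blacklist_ok("' OR 1=1 --"))      # caught -- this time
print(whitelist_ok("alice"))            # passes
print(whitelist_ok("%27%20OR%201=1"))   # rejected without knowing the attack
```

Note how the white list rejects the URL-encoded injection attempt even though no one ever wrote a rule for it; that is why we push teams toward white listing wherever the input format is known.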

Today, if we look at the marketplace, there are a ton of solutions attempting to bolt on security after the fact. Everything from code scanners to web application firewalls (WAFs) is emerging as a control to help organizations deal with web application security. One of the big problems with these technologies is that they often require changes to the underlying source code, logic or web server environments. At the very least, a WAF acts as a filtering device at the protocol layer, and many applications simply perform unreliably when a WAF is “protecting” them. What we really need is a reliable way to add security to web applications without changes to protocols, environments or logic.

Of course, the ultimate argument is that what we really need is secure code. I have read a lot of security pundits lately talking about how “secure code” is the solution. “Secure code” has become the latest battle cry, silver bullet, smoke and mirror trick and marketing hype. However, “secure coding” is not easy. It is not immediately available for most organizations – there is no switch to flip on your developers to make them churn out “secure code” – there is no ONE class or seminar you can send them to that will make them write “secure code”. Instead, it takes ongoing education and updates to existing code, tools, frameworks and development environments. It will be a long, slow process. It will be a human process, and it will be expensive.
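To make the “secure code” discussion concrete, the single most common fix involved is replacing string-built SQL with parameterized queries. A small sketch, using Python’s bundled SQLite purely for illustration:

```python
import sqlite3

# Toy database with one privileged user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # classic injection string

# Vulnerable pattern: string concatenation lets the input rewrite the query.
vuln_rows = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input).fetchall()
print(vuln_rows)   # the admin row leaks even though no such user exists

# Secure pattern: a parameterized query treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(safe_rows)   # [] -- the injection string matches no user
```

One idiom, taught once and enforced in code review, eliminates an entire attack class – which is also why “secure code” is a process of education and tooling rather than a product you can buy.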

Even then, once we get all of our programmers making “secure code”, there will still be problems. New attack formats will arrive. Legacy applications will still have old issues – some may be rewritten, but it won’t be cost effective for all web applications. New technologies and web enhancements will certainly degrade the security posture of even today’s most hardened web application. In other words – as you have heard me say before – Security is a journey, not a destination. We won’t get all of our code secure. We will never cross the line when all web apps are safe. We will simply move closer to a time when a majority of web applications are secure enough to meet our level of risk tolerance. Even then, that moment is likely fleeting. The center will not hold. The universe breaks down upon itself. Entropy is a true constant in the universe, even the security universe. The center will just not hold……

Want More Random Content and Noise in Your Life?

For those of you who want more random noise and content but just can’t get enough of it on the web, I have decided to take up this “Twitter thing”. 😉

Everyone keeps telling me that I need to do it, so I am trying hard to embrace it.

If you are interested in “following me” you can do so here: http://twitter.com/lbhuston

Two caveats: 1. There is likely to be a random element to the content and frequency of the feed. 2. There is likely to be a mix of personal stuff, marketing, security thought, musings, rants, etc. The twitter-sphere seems to encourage this mixture of transparency and actual deep work thought.

Lastly, I have decided to only follow two people thus far: Guy Kawasaki and Timothy Ferriss – two of my favorite authors. That said, if you don’t see me following you directly or decide to get offended if I don’t follow you – please excuse me ahead of time. So far, keeping up with the twitters from those two, attempting to add my own content and doing the day to day work of leading MSI seems to be consuming all of the cycles I have, so please don’t take it personally if I don’t follow you in return….

OK that’s about it for this post. If the last thing you would want in your life is more Brent Huston, don’t feel bad or anything. Believe me, I understand. 🙂

BTW – If you want to check out books by those two authors, they are both excellent. Their books have helped me build MSI and have even helped me gain some level of illusion of control over my personal life.

Forget Solutions for a While, Let’s Think Differently About Security

As many of you may know, this has been my mantra for the last couple of years. It was the perspective that gave birth to HoneyPoint and many of our service offerings that we have launched in the last couple of years.

I was very pleased when SANS ran this article a few days ago and when they made their initial call for ideas.

Many of the ideas that they uncovered were excellent! I especially think that there might be a future in organized education of young people around cyber-ethics, security behaviors and deeper understandings of privacy in the physical and online world. I am an obvious believer in new technical frameworks and thought processes that dynamically change the nature of the game from responsive to proactive. Further, I am a stronger and stronger believer in Honey-based technologies and in adapting attacker techniques and strategies for use against attackers. The last two years have greatly strengthened my belief that a true key to future security is to manipulate threat agents’ ability to tell the real assets from the pseudo-assets and the true exposures from the ones that only lead to capture. I am a true evangelist of the idea that active manipulation of threat agents is both a productive mechanism for defense and an effective control for differentiating between real, dangerous risks and non-persistent “noise” risks. While these solutions do not apply to every situation, their leverage and power do apply to a number of them and provide both excellent feedback and education as well as an intense level of engagement.

The ideas of adopting principles of genetic engineering are excellent and should be a basis for research in the future. I think the cyber world could learn a lot about data analysis, correlation and visualization by looking at the physical and medical worlds as a baseline for exploration. The data sets of the cyber world are large, but not nearly as large, complex or dynamic as some of the human and physiological systems that scientists are tackling.

I think that if we step back from the day to day security problems we face and spend some time considering and researching “game changing” ideas, we might just find some amazing ways to change the very essence of what we do. I know attackers will always have a say in how the game is played. I know how history shines and enumerates the role of the defender. But, I also know that true evolutionary leaps are possible. True change is powerful, violent and often obvious once it has been discovered, branded and explained to us. Maybe what we need now is more discovery, more exploration and more application of free flowing thought.

As always, let me know what you think about it. You can send email responses to me or comment through the blog. The more brains thinking about the problem – the better!

Web Proxy Scanning – Attack or Desperate Search for Free Information Flow

I remember when I was coming up in the infosec world, there used to be a rallying cry among “hackers” that “information wants to be free”. Certainly, we know from history and the present that information freedom has a high value to democratic society. The fact that unrestrained communications can be used to cause social, economic and political change is a given.

I often encounter hundreds of web proxy probes against our HoneyPoints every day. As I look through the logs, research the various traffic and analyze any new events, I am in the habit of largely ignoring these simple probes. Today, however, it occurred to me that many, likely not all (but many), of these probes were folks in less open countries trying to find access mechanisms to get unrestricted access to the web. They may well be searching for an SSL wrapped pipe to retrieve current news, conversations, applications and other data from sources that the “powers that be” in their country would rather not have them see.

Of course, I know that not all proxy scans are for the purpose of escaping political oppression. I know that there are attackers, cyber-stalkers, pr0n fanatics and criminals all looking for proxies too. I also know, first hand, from our HoneyPoints that when they think they find them, many of these probes turn out to be less “CNN” and more attempts to break into the organization offering the proxy. I have seen more than my share of proxied, “internal” probes when attackers believe that their new “proxy” is real and useful.

But, even with the idea that some folks use these tools for illicit purposes, I think some folks must be dependent on them for free access to uncensored information. Of course, the big question is: how can we help the folks who would like to use the proxy for legitimate public access to free information while refusing illicit access through our system? This is very, very difficult without resorting to blacklisting, if we want to offer access to the net as a whole.

However, one of my engineer friends chimed in that perhaps access to the entire web is not really needed. What if you created a system that had proper controls in place to prevent most attacks, but proxied traffic only to a white list of sites? You would still be acting as a sort of “information moderator” in that you could control the sources, but the default page could list the allowed sites, perhaps the most common news sites or other commonly sought sources of information that had been vetted beforehand. Not a totally optimal situation, I understand, but better than the current scenario for some folks.
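A minimal sketch of that idea follows. The hostnames and behavior are purely illustrative, and a real deployment would need TLS, rate limiting and abuse controls on top, but the core decision is just a host check against the vetted list.

```python
from urllib.parse import urlparse

# Hypothetical vetted white list -- the hosts here are placeholders.
ALLOWED_HOSTS = {"www.bbc.co.uk", "edition.cnn.com"}

def is_allowed(url):
    """Is the request destined for a vetted host?"""
    return urlparse(url).hostname in ALLOWED_HOSTS

def handle_request(url):
    if not is_allowed(url):
        # Default behavior: refuse, and advertise the vetted sources instead.
        return "403: host not vetted. Allowed: " + ", ".join(sorted(ALLOWED_HOSTS))
    # A real proxy would fetch the URL and relay the response here.
    return "PROXY " + url

print(handle_request("http://edition.cnn.com/news"))
print(handle_request("http://intranet.example.org/admin"))  # typical illicit probe
```

The white list doubles as the defense against the “internal probe” abuse described above: a request aimed at an internal host simply never matches a vetted destination.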

The question is, how could such a solution be created? How could it be established and managed? How would sites get vetted and could existing software be used to create these mechanisms or would new tools require development cycles?

If you have thoughts on this idea, please drop us a line. I would be very interested in your feedback!

Changes to the State Of Security Blog

I just wanted to take a moment and update folks on some changes that we are making beginning next week on this blog.

We have decided, after much consideration, to discontinue the routine process of vulnerability announcements on the blog. The announcements were moved over to the blog platform when we shifted away from WatchDog, our vulnerability intelligence product. The time for those announcement services has passed. Today, thousands of sites give up-to-the-moment vulnerability announcements, and RSS feeds make them an all too easy source of information. As such, we feel that other folks do a fine job of that work and we can focus on other things.

Beginning Monday, the blog will transition to a more thoughtful platform and be used by our team of Security Mentors to add to the security conversation and education, instead of the flat process of announcing new significant vulnerabilities. Our team will blog several times per week, with each member contributing content – but the content will be more open, deeper in context and much more opinion-based than just parroting simple announcements of XSS in XYZ product.

Thanks to all of the readers who enjoy the blog and we hope you will continue to read and even learn to love it more. We look forward to less noise and much more content with context in the coming months. Please, feel free to join in the conversation. We love hearing from you.

Changes to the look and feel of the blog are coming soon and the entire blog process is in flux. Let us know what you like and what you want us to scrap. Spread the word about us and we look forward to a whole new set of eyes!

See you next week and have a GREAT weekend!

Broadband Caps Could Mean Consumers Pay for Bot-Net Traffic

The broadband caps proposed by Comcast and other home ISPs would mean that consumers would now be paying for excessive traffic from their networks, even when malware or bot-nets caused the traffic. Much media attention has been paid to the effect of traffic from spam and video ads used in normal web pages, but little has been said about the effect on consumers that malware infection could now have.

Imagine a simple malware infection that sends email. That infected machine could send millions of emails a month, easily breaching the modest bandwidth limits that some are proposing. How will the average consumer respond when they get warnings and then large bills from their network ISP for traffic that they did not cause? Imagine the help desk calls, irate customers and the increased costs of handling such incidents. How will the average help desk technician handle claims that infected systems caused the excess traffic? How will courts handle the cases when the consumer refuses to pay these charges and the ISP pursues their clients for the money?
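To put rough numbers on that scenario (all figures here are assumed for illustration; real bot output and cap sizes vary widely):

```python
# Rough, assumed figures: a single spam bot vs. a monthly bandwidth cap.
emails_per_day = 200_000   # hypothetical infected machine's output
avg_email_kb = 40          # message body plus SMTP overhead, assumed
days = 30
cap_gb = 250               # the scale of monthly cap being discussed

monthly_gb = emails_per_day * avg_email_kb * days / 1_000_000  # KB -> GB (decimal)
print(f"~{monthly_gb:.0f} GB of spam traffic against a {cap_gb} GB cap")
```

Under those assumptions, a single busy bot eats nearly the entire monthly allowance on its own, before the consumer’s legitimate traffic is counted at all.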

Attackers are the real winners here, at least those interested in causing chaos. Effective attacks to cause financial damages and ISP cutoff against a known/focused target become all that much easier to perform. If you hate your neighbor and her barking dog, then you get her machine infected with malware and cause her to get a huge bill from her cable company. Do this enough and you can damage her credit, get her cut off from the Internet and maybe even interfere with her ability to earn a living (especially if she is a web worker). Heck, malware isn’t the only way – break into her wireless network or find it open to start with – and you have the perfect entry point for making her “iLife” a true nightmare.

Sure, some folks say these risks already exist without the added pressures of ISP bandwidth caps. They are right, they do. Some folks also say that these threats may make average consumers pay more attention to security. I think they are wrong; this will be just another item on a long list of ignored and forgotten “bad things” that happen to “other people”. However, I do think that these attacks should be a serious concern for the ISPs implementing the caps. The ISPs’ primary claim seems to be that they are adding these caps due to bandwidth issues and the costs required to handle current and future traffic. Yet, I would suggest that bandwidth caps are very likely to raise their support and account management costs exponentially – which could mean that they are shooting themselves in the foot.

Bandwidth caps are a bad idea for a variety of reasons (including stifling innovation), but they play directly into attacker hands and lend attackers a new spin on how to cause damage and chaos. In the last few weeks, much has been made of the recent growth in bot-net infected systems. Experts point to a nearly 400% increase over the summer months alone. Imagine the chaos and issues that could stem from calculated campaigns that wrangle those bot-net infected machines into breaking the boundaries of their ISP. Maybe bot herders would even change from holding end users hostage to targeting ISPs with bandwidth cap breaking storms that would trigger massive client notifications, calls to technical support and account management systems. Maybe attackers could figure out a way to use bot-net infected systems to cause “human customer denial-of-service” attacks against cable companies. I am certainly not rooting for such a thing, but it seems plausible given the current state of infected systems.

I just don’t see a positive for anyone coming from these ideas. I don’t see how they aid the consumer. I see how they could be used to harm both the consumer and the ISP. I see how attackers could leverage the change in multiple ways – given that many are extensions of existing issues. Generally, I just fail to see an upside. I find it hard to believe that consumers will be thrilled about paying for illicit traffic that they will argue they did not create, and I can’t see the courts doing much to force them to pay for that traffic. I guess only time will tell – but it seems to me that in this game – everyone loses…

Inguma 0.0.9.1 Overview

I spent a few minutes this morning looking at the newest release of Inguma. If you aren’t familiar with it, it is another penetration testing framework, mostly focused on Oracle servers, but it has plenty of other capabilities and acts as a front end for a number of fuzzing and host discovery tools.

The tool is written in Python and has both command line and GUI interfaces, including a QT-based GUI and a more traditional “curses-based” GUI. The tool is pretty easy to get working and adapts itself pretty well to some easy scans, probes and fuzzing. In the hands of someone with skills in vuln dev, this could be a capable tool for finding some new vulnerabilities.

The tool is written to be extensible, and the Python code is easy to read. It is not overly well documented, but enough so that a proficient programmer could add new modules and extend its capabilities pretty easily.

The tool is still in heavy development, and it looks like it could be interesting over the next few months as it matures. Keep your eyes on it if you are interested in such things. You can find the latest version of Inguma here.

Patched DNS Servers Still Not Safe!?!

OK, now we have some more bad news on the DNS front. There have been new developments along the exploit front that raise the bar for protecting DNS servers against the cache poisoning attacks that became all the focus a few weeks ago.

A new set of exploits has emerged that allows successful cache poisoning attacks against BIND servers, even with the source port randomization patches applied!

The new exploits make the attack around 60% likely to succeed in a 12 hour time period and the attack is roughly equivalent in scope to a typical brute force attack against passwords, sessions or other credentials. The same techniques are likely to get applied to other DNS servers in the coming days and could reopen the entire DNS system to further security issues and exploitation. While the only published exploits we have seen so far are against BIND, we feel it is likely that additional targets will follow in the future.

It should be noted that attackers need high speed access and adequate systems to perform the current exploit, but a distributed version of the attack that could be performed via a coordinated mechanism such as a bot-net could dramatically change that model.

BTW – according to the exploit code, the target testing system used fully randomized source ports, using roughly 64,000 ports, and the attack was still successful. That means that if your server only implemented smaller port windows (as a few did), then the attack will be even easier against those systems.
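The arithmetic behind that observation is simple to sketch. The only figure below taken from the exploit notes above is the roughly 64,000-port pool; the other port counts are illustrative.

```python
# Back-of-the-envelope arithmetic (illustrative only) showing why the size of
# the randomized source-port pool matters so much to a brute-force attacker.
TXIDS = 2 ** 16                    # 16-bit DNS transaction ID space

for ports in (1, 2_048, 64_000):   # fixed port, small window, full randomization
    space = TXIDS * ports
    print(f"{ports:>6} source ports -> {space:,} combinations to brute force")
```

A server using a single fixed source port leaves only the 65,536 transaction IDs to guess, while full randomization multiplies the space by tens of thousands – which is why partially randomized servers are so much easier targets for this new, faster attack.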

Please note that this is NOT a new exploit, but a faster, more powerful way to exploit the attack that DK discovered. You can read about Dan’s view of the issue here (**Spoiler** He is all about risk acceptance in business. Alex Hutton, do you care to weigh in on this one?)

This brings to mind the reminder that ATTACKERS HAVE THE FINAL SAY IN THE EVOLUTION OF ATTACKS and that when they change the paradigm of the attack vector, bad things can and do happen.

PS – DNS Doberman, the tool we released a couple of days ago, will detect the cache poisoning if/when it occurs! You can get more info about our tool here.
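For the curious, the core detection idea can be sketched in a few lines. This is a hypothetical illustration of the approach, not DNS Doberman’s actual code, and the watch-list IP is an assumed value.

```python
import socket

# Hypothetical watch-list: names mapped to the IPs we expect them to resolve to.
WATCHLIST = {"www.example.com": {"93.184.216.34"}}

def unexpected_ips(resolved, expected):
    """Return any IPs in a DNS answer that fall outside the expected set."""
    return [ip for ip in resolved if ip not in expected]

def check(name, expected):
    # Live lookup; run this on a schedule and alert on any unexpected answer.
    _, _, addrs = socket.gethostbyname_ex(name)
    return unexpected_ips(addrs, expected)

# Simulated poisoned answer: the resolver suddenly returns a rogue address.
rogue = unexpected_ips(["93.184.216.34", "203.0.113.66"],
                       WATCHLIST["www.example.com"])
print(rogue)   # ['203.0.113.66'] -- would trigger an alert
```

The weakness of the approach is keeping the expected-IP sets current for sites that legitimately change addresses, which is why a real tool needs an easy way to update its watch-list.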