E-Voting Follow Up

I think the presentation at TechColumbus went well. The crowd seemed into it, and their questions, comments and feedback were good. Sorry to the person I had to shut down during the talk – but we had a time limit for the presentation and had to keep from going off on a tangent.

Overall, the e-voting summary was that yes, the systems are broken. Yes, they have vulnerabilities. But we know what many of them are, and we know what many of the exploits look like when performed. The Secretary of State has implemented process controls and new techniques for monitoring and detecting many of the attacks that EVEREST identified. Even though the system might be less than perfect – YOU SHOULD STILL GET OUT AND VOTE.

Thanks to Terry Dick, the Ohio Secretary of State’s Office, TechColumbus, Platform Labs, Mike Krippendorf and David Garcia for the help with the presentation. Special thanks to the rest of the EVEREST team; without everyone’s dedication to the cause, it would not have been as successful as it was. Extra special thanks to those who attended; without you, we are just strangers talking to ourselves in a dark room!

Here’s hoping everyone has a nice weekend.

Microsoft Patches Now Have an Exploitability Rating

Microsoft patches now include a new exploitability index. This rating attempts to quantify whether, and how soon, a working exploit is likely to become available for a given vulnerability. It also attempts to take into account how reliable such an exploit is likely to be.

Personally, I think this is a good idea, especially if Microsoft keeps its rating methods consistent and transparent. A number of vendors have already said that they will add support for the new index value in their tools and software. As might be expected, reaction from the community has been mixed, though I have yet to see any response that explains how such information could be truly harmful.

You can read Microsoft’s published information here.

I hope more vendors embrace this seemingly small detail. I think it is helpful for more than a few organizations overwhelmed by patch cycles. It may not be the “holy grail of patch risk”, but it is likely better than what we have now.
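
For teams that track advisories in a spreadsheet or database, a rating like this is easy to fold into patch triage. Below is a minimal Python sketch of one possible use: sorting a month’s bulletins so that the ones most likely to see reliable exploit code get deployed first. The record format, bulletin IDs and rating strings here are illustrative assumptions, not Microsoft’s actual feed format.

    # Hypothetical patch-triage sketch. Record format and rating strings are
    # illustrative only; they are not Microsoft's actual advisory feed format.

    # Lower number = exploit more likely / more reliable.
    EXPLOITABILITY_ORDER = {
        "consistent exploit code likely": 0,
        "inconsistent exploit code likely": 1,
        "functioning exploit code unlikely": 2,
    }

    SEVERITY_ORDER = {"Critical": 0, "Important": 1, "Moderate": 2, "Low": 3}

    bulletins = [
        {"id": "MS08-AAA", "severity": "Critical", "exploitability": "functioning exploit code unlikely"},
        {"id": "MS08-BBB", "severity": "Important", "exploitability": "consistent exploit code likely"},
        {"id": "MS08-CCC", "severity": "Critical", "exploitability": "consistent exploit code likely"},
    ]

    def triage_key(bulletin):
        # Sort by exploitability first, then severity as a tie-breaker.
        return (EXPLOITABILITY_ORDER.get(bulletin["exploitability"], 99),
                SEVERITY_ORDER.get(bulletin["severity"], 99))

    for b in sorted(bulletins, key=triage_key):
        print(b["id"], "-", b["severity"], "-", b["exploitability"])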

How does your organization plan to use this new information, if at all? Drop us a comment and let us know!

Why Replacing Internal NIDS with HoneyPoint is Critical to Your Organization

We are in a new age of information security. The primary threats to our critical data assets are already well inside the firewalls and layered architectures of the eroding “perimeter”. Attackers can and will leap your firewalls, tunnel through your DMZs and trick your users into becoming the gateway for attack. The idea of the walled castle as a form of defense is dead and no longer serves anyone well.

With 55% of all attacks that cause financial damage to organizations originating internally, it makes sense for organizations to shift their focus to internal prevention, detection and response. But using a “false positive generator” like Snort, Proventia or another NIDS approach is just madness. These mechanisms produce so much bad data when pointed at a typical internal network that paying any attention to them at all is a huge waste of resources. Of course, the vendors will respond with their magic phrases – “tuning” and “managed service” – both of which are just marketing speak for “spend more time and resources that you already don’t have on making our tool actually useful”. Don’t believe me? Just ask them about applying their tool to a complex internal environment. Our polls, interviews and questions to users of these technologies showed immense amounts of time, money and human resources being spent keeping signatures up to date, tweaking filters and rules to eliminate false positives, and burning HUGE amounts of security team time chasing ghosts and sorting useful events out of the noise.

Our initial metrics, as we discussed previously, showed that we could cut those resource requirements by 60-90% using a different approach. By leveraging the power of HoneyPoints, their deploy-and-forget architecture and their lack of false positives, your organization can reap the reward of better security with less time, money and work. By combining HoneyPoint Security Server with an appropriate log monitoring tool (like OSSEC), organizations have been able to greatly simplify their deployments, reduce their costs and increase their ability to focus on the security events that matter. Many have relegated their perimeter NIDS deployments to being another source of forensic data, used along with syslog server data, file system analysis and other data sources to provide evidence when a true incident occurs. NIDS at the perimeter have value there, and being part of the solution as a forensic tool makes them effective when needed, while avoiding the “attention overload” they demand when used as a day-to-day data source.

Detection of attackers in your environment IS CRITICAL. But the way you go about it has to make sense from both a security and a manageability standpoint. NIDS have proven to be an ineffective solution for organizations with average resources. There is a way forward, and that way is to change how we think about information security. HoneyPoint Security Server and MicroSolved can help your organization do just that!

Check out http://www.microsolved.com/honeypoint/ for more information, or give us a call and we will be happy to explain how it works!

Please note: Snort and Proventia are trademarks of their respective companies. They are great tools when applied to appropriate problems, but in the case of internal network security – we just have a better way! 🙂

3 Reasons Why Internet Voting is a Bad Idea

One of the questions I get asked the most when I speak on electronic voting is why voting is not done over the Internet. While I can clearly understand the idea of online voting being easy and efficient, I wanted to take a moment and give you the three biggest reasons why I think it is a bad idea, at least currently.

1. End Point Security. Voting online would mean allowing users to come into an online portal and cast their votes. The problem is that we have zero control over the security of the PC doing the voting. Your machine could be under the control of an attacker who could perform any of a myriad of attacks against you or the voting system. It would be trivial for an attacker who has gained control of your machine to both know how you voted and to modify your vote in real time. Everything from the simple to the sophisticated is within the realm of likely threats against home machines; for proof, just look at the number and rate of botnet infections. Imagine the chaos that could result from wide-scale voting on compromised systems. The number of variables in this part of the equation alone is enough to give you nightmares.

2. Anonymity. The very processes that would be required to secure and authenticate the voter to an online voting system would also greatly impact the voter’s ability to remain anonymous. Verifying the voter’s online identity, ensuring that they vote only once and securing the voting session would require the system to match the voter against a database before allowing them to vote. Such identification would involve a plethora of logged events and data records. Each of those log entries and data records could be compiled to help an attacker, especially an insider, identify particular voters and perhaps even isolate the votes they cast. This has been shown to be true with the time stamps on paper trails in current e-voting systems, and it would be even easier to accomplish with purely digital data.

3. Denial of Service Attacks. This is a severe issue. DoS attacks are trivial to perform these days, even against large-scale systems and those with advanced capabilities. The prevalence and ease of botnet attacks reduce the complexity of shutting down a site to the trivial level. If an entire nation’s networks can be knocked off the net, what chance would a voting portal have? Given the sensitivity, time requirements and public confidence needed in the electoral process, any successful denial of service attack against the voting system would be likely to cause chaos. In a worst case scenario, the entire electoral process could be disrupted or forced back onto alternative measures anyway.

In addition to these 3 reasons, many others exist. Sure, there are solutions for some of the problems – but they range in scale from small to immense. While some countries have worked on or even adopted online voting, it remains, in my opinion, a bad idea for the United States. The added complexity, cost and security issues push the idea well beyond current workability. Cost alone is a killer, in my opinion, given the current state of the economy.

So, the bottom line is that our current e-voting processes are not perfect. They do leave a lot to be desired, but work is being done in this area. Online voting, however, faces significant issues before it could even be considered as a relatively workable idea.

If you are interested in hearing more about e-voting, I will be presenting this Friday at TechColumbus on the issue, along with another member of the EVEREST team from the Ohio Secretary of State’s office. You can learn more and sign up at: http://www.techcolumbus.org/en/cev/314

HPSS And OSSEC

I’d like to go over some of the tools that we mention on the blog. The first one I’d like to take a look at is OSSEC. You may have heard us talk about it before; we mentioned it a few days ago in relation to HoneyPoints and using OSSEC as another layer of your “defense in depth” strategy. I’ll explain what it does and how it can help you.

First of all, what is OSSEC? OSSEC is an acronym for “Open Source Host-based Intrusion Detection System”. From the name you can see it’s a Host-based Intrusion Detection System (HIDS). As a HIDS it can perform log analysis, integrity checking, Windows registry and event log monitoring, rootkit detection, real-time alerting and active response against malicious hosts. It can be run locally or as a centralized system with agents running on the hosts.

So how does OSSEC relate to HoneyPoint? Well, they watch different things and complement each other. While HoneyPoints are pseudo-services that capture the traffic sent to them, OSSEC watches real services for probes and compromises. It does this largely through log analysis. I won’t go into it deeply, but the log analysis rules are very configurable, chainable, and fairly easy to write for anyone who knows regex and has some familiarity with basic scripting.
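
To give a feel for that style of rule logic without reproducing OSSEC’s actual XML rule format, here is a rough Python sketch of the general idea: a chain of regex-based rules applied to each log line, where a child rule only fires if its parent rule matched. The rule IDs, patterns and sample log line are made up for illustration.

    import re

    # Conceptual sketch of chained, regex-based log rules in the spirit of a
    # HIDS rule set. This is NOT OSSEC's real rule syntax (OSSEC rules are
    # XML); it only illustrates the matching and chaining idea.

    RULES = [
        # (rule_id, parent_id, pattern, description)
        (100, None, r"sshd\[\d+\]", "Any sshd log message"),
        (101, 100,  r"Failed password for", "SSH authentication failure"),
        (102, 100,  r"Accepted password for (\S+) from (\S+)", "Successful SSH login"),
    ]

    def evaluate(line):
        """Return descriptions of all matching rules, honoring parent chains."""
        fired = {}
        for rule_id, parent, pattern, description in RULES:
            if parent is not None and parent not in fired:
                continue  # child rules only apply once their parent has matched
            if re.search(pattern, line):
                fired[rule_id] = description
        return list(fired.values())

    print(evaluate("Oct 17 12:00:01 host sshd[4242]: Failed password for root from 10.1.2.3"))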

With OSSEC’s active response, it’s possible for the monitored host to dynamically write firewall rules to block an attacking host. This is similar to the HoneyPoint plugin interface, which you could also use to write a plugin that does the same thing. You could even use OSSEC to watch your HoneyPoint Console syslogs and tie HoneyPoint Console triggers into its active response rules, centralizing the blocking of hosts between HPSS and OSSEC.
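
As a rough illustration of the kind of glue being described (and only that), here is a hedged Python sketch that tails a syslog file for HoneyPoint-related entries and blocks the offending source address with iptables. The log path, message format and the choice to shell out to iptables are all assumptions for the example; in a real deployment you would let OSSEC’s active response or a HoneyPoint plugin do this work.

    import re
    import subprocess
    import time

    # Illustrative glue only: tail a syslog file, look for lines that mention
    # "HoneyPoint" and contain a source IP, and drop that source with iptables.
    # The log path and message format are assumptions for this sketch.

    LOG_FILE = "/var/log/messages"  # assumed syslog destination
    PATTERN = re.compile(r"HoneyPoint.*from (\d{1,3}(?:\.\d{1,3}){3})")
    blocked = set()

    def block(ip):
        if ip in blocked:
            return
        # "One strike and you're out" style block (requires root privileges).
        subprocess.call(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"])
        blocked.add(ip)
        print("Blocked", ip)

    with open(LOG_FILE) as log:
        log.seek(0, 2)  # start at the end of the file, like tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            match = PATTERN.search(line)
            if match:
                block(match.group(1))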

As you can see, OSSEC can work quite nicely with HoneyPoint Security Server as part of a “defense in depth” strategy. There’s no single tool to “rule them all”, so to speak, so it’s important to watch from multiple perspectives! If you want to check out OSSEC, you can visit www.ossec.net.

Port Knocking and SPA – Thoughts

A colleague of mine pointed me to an article on Port Knocking, more specifically Single Packet Authorization. I wasn’t too familiar with either, but once I started reading, some thoughts came to mind. Does this look far too cumbersome and too much of a “pain in the butt” to implement for such a small gain to anyone else? This is just another method of implementing the doomed “security by obscurity”.

First off, Port Knocking “is a method of externally opening ports on a firewall by generating a connection attempt on a set of prespecified closed ports.” [1] Single Packet Authorization is similar, but requires only one encrypted packet. While this may impress some people with its technical savvy, the approach should be thoroughly evaluated before implementing it. As far as enterprise usability goes – limited at best. Talking amongst ourselves here, we did think of one implementation that would actually be useful: preventing your ISP from knowing you’re hosting a service without having to create extensive black or white lists. You could host an FTP server, for example, without the port ever showing as open to an overly intrusive ISP. Of course, we do not condone the breaking of any agreements with an ISP.
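
For the curious, the client side of a basic knock is trivial. Here is a minimal Python sketch that sends connection attempts to a prespecified sequence of closed ports; a knock daemon on the server (not shown) would watch its firewall logs for that exact sequence and then open the real service port to the knocking address. The host, ports and timing are placeholders.

    import socket
    import time

    # Minimal port-knocking client sketch. The server side would watch for
    # connection attempts hitting these closed ports in this exact order and
    # then open the real service port for our source address. The host, port
    # sequence and delay below are placeholders for illustration.

    KNOCK_HOST = "192.0.2.10"            # example address (TEST-NET-1)
    KNOCK_SEQUENCE = [7000, 8000, 9000]  # prespecified "secret" sequence
    DELAY = 0.5                          # pause between knocks

    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            # The connect attempt is expected to fail; the SYN packet itself
            # is the "knock" that shows up in the server's firewall logs.
            s.connect((KNOCK_HOST, port))
        except OSError:
            pass
        finally:
            s.close()
        time.sleep(DELAY)

    print("Knock sequence sent; the service port should now be open to us.")
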
However, for enterprise environments, Port Knocking and Single Packet Authorization are, in my opinion, in no way a replacement for good security practices. These include keeping the service up to date with any patches and updates provided by the vendor, staying aware of any newly developed or developing threats to the service you’re hosting, and implementing proper ACLs at the firewall. Block all of Eastern Asia from accessing your SSH service if need be. Use VPN clients. This is critical; there’s no real reason to have remote access ports open without protection, and just about every enterprise firewall comes with some sort of VPN option. Last but not least, do not forget the importance of a strong password policy. Brute force attacks become a non-issue with a sufficiently complex password.

In conclusion, PK and SPA sound good in theory, and implemented as part of a greater defense in depth solution they could work; standing alone, however, PK and SPA are, in my opinion, less than ideal.

[1] http://en.wikipedia.org/wiki/Port_knocking

Save Time and Money with HoneyPoint Security Server

Well, the initial round of metrics is in. Organizations that have changed the way they think about information security can seriously benefit from changing the way they use NIDS (if they use them at all) and embracing the evolution in information security that HoneyPoint represents. Here are some pretty amazing metrics that have come back from our clients:

Strategy:

Customers who have continued to use NIDS have been able to cease daily monitoring of the alerts and relegate the NIDS to basically being a forensics tool when odd events occur.

Customers who combined our technology with a high signal, low noise log monitoring tool (such as OSSEC) have seen the largest return on investment and simplification.

Metrics:

Basically, when compared with a FREE NIDS (like Snort) using a registered rule set (with a 30-day delay), clients still achieved total cost of ownership savings of 50% by eliminating the human costs of signature updates, IDS/IPS tuning and management. The elimination of false positives and the drastic reduction in events to process (from 5,000-18,000 per day to fewer than 20 actual events per month, on average) also contributed to these savings. The time savings they reported averaged about 90% per full-time employee engaged in security monitoring!

Customers had nothing but praise for HoneyPoint’s strategy, performance and commitment to “deploy and forget” security.

That’s right! Let me recap: a TCO reduction of 50% and a time reduction of 90% – better security with less time and money…

The numbers increase significantly from there when compared with commercial (pay for play) software and managed services from a variety of companies. A couple of clients who were using commercial software and managed services to try to manage their internal security were able to save between $30K and $82K in their first year with HoneyPoint, and up to $95K per year in subsequent years!

The last key point we have taken away from this quick summary of the interviews is the amount of respect that the approach, strategy and implementation have earned from regulatory auditors. They have examined the product very thoroughly and done very deep reviews of both the strategy and the capabilities. The outcome of these regulatory reviews, to date, has been excellent. Regulators seem to appreciate the forward thinking and the payoff that customers are receiving. Feedback has been excellent and continues to make us very proud of the work that we and our clients have done to bring HoneyPoint to market.

We will be putting together a more formal way to demonstrate these numbers in the near future. Our use cases and the attack results that we have been able to capture continue to come in and some simply amaze us! Stay tuned for more details as we finish analyzing the interviews and the use cases.

If your organization is interested in trying HoneyPoint and is willing to be a use case or public reference, we would like to talk to you. Deep discounts are available to firms who are willing to engage in this manner with us and we are certainly looking for more verticals outside of our existing markets. Give us a call if you would like to discuss it!

BTW – customers using HoneyPoint Security Server and HornetPoints exposed to the Internet have achieved significant reductions in scans, probes and attacks by leveraging both “defense fuzzing” and our “one strike and you’re out black hole” approach. Let us know if you would like to hear more about how these strategies and tactics can reduce your Internet risk.

Yet More on SockStress…

OK gang, the story gets interesting again….

Check this out for some deeply technical details on the basics of the attack. Fyodor has done an excellent write-up of his best guess.

You can also check out the response from the relevant researchers here.

I do like and understand Fyodor’s point that this smells like marketing. Perhaps we are supposed to believe that the vendors will have their responses coordinated and completed before the talk and disclosure? If not, then what is the point of waiting to disclose, except to sell tickets to the conference?

This is a pretty HUGE can of worms that seems to have been opened by Kaminsky during the recent DNS issue. I guess it is just another nuance of this new age of attackers that we have entered. We will have to deal with more “huge holes” accompanied by media frenzy, hype, researcher infighting and security vendor blather until the public and the press grow tired of it.

My point yesterday was that one of these days we will reach a point where some of these major vulnerabilities cannot be easily repaired or patched. When that happens, we may have to find a way to teach everyday users how to plan for, and engineer for, acceptable failures. Until then, we should probably hone those skills and ideas, because where we are headed looks likely to be fraught with scenarios in which some level of ongoing vulnerability and compromise is a fact of life.

I believe strongly that we can engineer for failure. We can embrace data classification, appropriate controls and enclave computing in such a way that we can live with a fairly high level of compromise and still keep primary assets safe. I believe that because it seems to be the way we have dealt with other threats throughout history that we could not contain, eliminate or mitigate. We simply evolved our society and ourselves to the point where we could live with them as “accepted risks”. Someday, maybe even soon, we will be able to spend a lot less time worrying about whether users click on the “dancing gnome”, whether their workstations are patched or whether there is a vulnerability in some deep protocol…

The Protocol Vulnerability Game Continues…

First it was the quaking of the Earth under the weight of the DNS vulnerability that kept us awake at night. Experts predicted the demise of the Internet and cast doomsday shadows over the length of the web. Next came a laser focus on BGP and the potential for more damage to the global infrastructure. Following that came the financial crisis – which looks like it could kill the Internet from attrition when vendor, customer, banking and government dollars simply strangle it to death with a huge gasp!

Likely, we haven’t even seen the end of these other issues when a new evil rears its head. There has been a ton of attention on the emerging “sockstress” vulnerability. According to some sources, this manipulation of TCP state tables will impact every device that can plug into a network and allow an attacker to cause denial of service outages with small amounts of bandwidth. If this is truly a protocol issue across implementations, as the researchers claim, then the effects could be huge for businesses and consumers alike.

What happens when vulnerabilities are discovered in things that can’t be patched? What happens when everyday devices that depend on networking become vulnerable to trivial exploits without mitigation? These are huge issues that impact everything from blenders to refrigerators to set top cable boxes, modems, routers and other critical systems.

Imagine the costs if your broadband ISP had to replace every modem or router in their clients’ homes and businesses. What choice would they have if there were a serious vulnerability that couldn’t be fixed with a remote firmware upgrade? Even if the vulnerability could be minimized by some sort of network filtering, what else would those filters break?

It doesn’t take long to understand the potential gravity of attackers finding holes deep inside accepted and widely propagated protocols and applications. TCP is likely the most widely used protocol on the planet. A serious hole in it could impact risk in everything from power grid and nuclear control systems to the laundromat dryers that update a Twitter stream when they are free.

How will organizations that depend on huge industrial control systems handle these issues? What would the cost be to update/upgrade the robots that build cars at a factory to mitigate a serious hole? How many consumers would be able or willing to replace the network firewall or wireless router that they bought two years ago with new devices that were immune to a security issue?

Granted, there should always be a risk-versus-reward equation in use, and the sky is definitely NOT falling today. But, that said, we know researchers and attackers are digging deeper and deeper into the core protocols and applications that our networks depend on. Given that fact, it seems only reasonable to assume that someday we may have to face the idea of a hole being present in anything that plugs into a network – much of which has no mechanism to be patched, upgraded or protected beyond replacement. Beginning to consider this issue today just might give us some epiphanies or breakthroughs between now and the tomorrow that makes this problem real…

Book October 17th Now… E-Voting Issues with TechColumbus

We finally got everything arranged – all of the ducks are not just in a row, but quacking nicely together – and we are going forward with the TechColumbus presentation on e-voting. Come out on October 17th and hear about the EVEREST project, attacks against voting systems and the work that the entire EVEREST team has done to make sure that voting is more secure for Ohioans in 2008.

If you have an interest in voting security, elections or application/device security, then this should be a must-attend!

You can register here and get more information.