Why Use Public Key Encryption? A User’s Perspective…

In the last year and a half, we have all been affected by leaked information, or know someone who has. We have begun to hear the same message over and over: information stolen, personal information compromised, Social Security numbers lost, and so on.

We begin to ask ourselves: what can we do? How do we protect ourselves, and become more proactive, in both our professional and personal lives? There are several things you can do to protect yourself, and PGP/GnuPG (GPG) is one of them.

At MicroSolved, Inc. (MSI), our team uses a variety of tools and applications; PGP and GPG are just two of them. PGP and GPG are used to encrypt confidential data using public-key cryptography. For example, you might use them to protect e-mail and data files. They allow you to exchange files or messages with privacy, authentication and convenience.

So what is PGP?  “An abbreviation for Pretty Good Privacy; PGP is an electronic privacy program which helps you ensure privacy by letting you encrypt files and e-mail. The encryption technology employed by PGP is very strong. PGP was created by Phil Zimmermann, and depends on public key cryptography for its effectiveness. Public key cryptography is a procedure in which users exchange “keys” to send secure documents to each other. For more on PGP, go to http://www.pgp.com.”

Source:  www.redhat.com/docs/manuals/linux/RHL-7-Manual/getting-started-guide/ch-glossary.html

What is GnuPG?  “GPG is the GNU project’s complete and free implementation of the OpenPGP standard. GPG allows you to encrypt and sign your data and communication, features a versatile key management system as well as access modules for all kind of public key directories. GPG is a command line tool with features for easy integration with other applications. A wealth of front end applications and libraries are also available.”

Source: http://www.gnupg.org/

So what exactly do privacy, authentication and convenience mean?

- Privacy means that only those intended to receive a message can read it.

- Authentication means that messages that appear to be from a particular person can only have originated from that person.

- Convenience means that privacy and authentication are provided without the hassles of managing keys associated with conventional cryptographic software.

Whether we are protecting confidential data stored on our computers or communicating with clients and remote offices, these tools can help.
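As a small illustration of how simple this can be in practice, here is a rough sketch of driving GnuPG from a Python script. The file name and recipient address are placeholders, and it assumes gpg is installed and already holds the recipient's public key:

    import subprocess

    # Illustrative sketch only: encrypt and sign a file for one recipient using
    # the gpg command line tool. The file name and recipient address below are
    # placeholders; gpg must already have the recipient's public key imported.
    def encrypt_and_sign(path: str, recipient: str) -> None:
        subprocess.run(
            ["gpg", "--armor", "--sign", "--encrypt",
             "--recipient", recipient,
             "--output", path + ".asc", path],
            check=True,
        )

    encrypt_and_sign("quarterly_report.xls", "colleague@example.com")

The resulting .asc file is ASCII-armored, so it can safely be attached to an e-mail message or copied to removable media.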

Where should you use PGP/GPG?

- Email Security

- File & Disk Security

- Secure File Transfer

- Secure Storage

- Removable Media

- Instant Messaging Security

Costs for the different modules and toolsets from the two products vary, but range from FREE to a few hundred dollars. They likely make for an excellent investment, either personally or for companies of any size.

Stay tuned for additional tools and ways to achieve a more secure Internet experience. Remember, everyone has a responsibility to protect confidential data and be safe online!

MX Injection Testing Available

In reference to the previous post, our partner Syhunt has added MX injection testing capabilities to their Sandcat product. Of course, this is in addition to the thousands of other tests already being performed by the tool.

Sandcat is an excellent tool for checking web servers and web applications for both potential and known vulnerabilities.

MSI is proud to represent Syhunt in the United States, and we use Sandcat as a powerful addition to our toolkit. If you would like more information about Sandcat or MX Injection, please call your MSI account executive and schedule a time for a technical briefing with an engineer.

MX and other injection vulnerabilities are an emerging risk, and more information will be coming over the next several weeks and months as the security community applies its tools, techniques and products to the software and product lines common to most organizations. Stay tuned for more on this family of issues as it becomes available.

Injection Attacks – Not Just for SQL Anymore!

Over the last several months, security researchers have been identifying more and more scenarios for performing injection-style attacks against various applications.

What is interesting is that many of the new injection issues have little to do with SQL. In fact, technologies like LDAP and SSI, along with various forms of command injection, code injection and response spoofing, have proven to be targets for this family of input attacks.

In a recent article, a new variant called MX Injection was disclosed, along with techniques for attacking and compromising various web-based mail applications. These types of exploits could prove a serious danger to organizations, exposing their internal communications and data stores to attackers, or even allowing compromise of the underlying systems (depending on what the data stores contain).
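To make the idea concrete, here is a small, hypothetical sketch of the kind of flaw behind these attacks. It assumes a webmail handler that builds IMAP commands from a request parameter; the function names and the parameter are invented for illustration and are not taken from any real product:

    # Hypothetical webmail fragment, for illustration only.
    def build_select_command(tag: str, folder: str) -> str:
        # VULNERABLE: if 'folder' comes straight from the HTTP request, a value
        # like "INBOX\r\nA2 CREATE attacker-mailbox" smuggles a second IMAP
        # command to the mail server.
        return "{} SELECT {}\r\n".format(tag, folder)

    def build_select_command_safe(tag: str, folder: str) -> str:
        # Reject CR/LF outright and quote the mailbox name before it reaches IMAP.
        if "\r" in folder or "\n" in folder:
            raise ValueError("illegal characters in mailbox name")
        quoted = folder.replace("\\", "\\\\").replace('"', '\\"')
        return '{} SELECT "{}"\r\n'.format(tag, quoted)

The same pattern, user input concatenated into a protocol command without validation, is at the heart of LDAP, SSI and command injection as well.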

Given the focus of attackers on new application layer techniques such as these, every organization should quickly identify their existing exposed applications and ensure that those systems have been appropriately tested for various injection issues. Additionally, since these techniques are continually evolving, a system of ongoing application testing is likely to be the most effective tool for protecting against these emerging threats.

Bugs

Last month over two dozen kernel bugs were published on a security researcher's blog as part of the MOKB (Month of Kernel Bugs) project. Most of them were found using a file system fuzzer, which creates malformed file systems to try to crash each kernel. Not all of the MOKB bugs were file system related, though; some problems were found in Apple AirPort drivers, Netgear wireless drivers and Broadcom wireless drivers. Although more exploitable vulnerabilities are now public as a result, this fuzzing approach does improve the overall stability of the software available to consumers.
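For readers who have not seen one, here is a minimal sketch of what a mutation fuzzer does. The target program name and file names are placeholders invented for illustration, not the researcher's actual tool:

    import random
    import subprocess

    # Minimal mutation-fuzzing sketch: take a known-good seed image, flip some
    # random bytes, and watch whether the parser crashes. "./parse_fs" is a
    # hypothetical target program used only as an example.
    def mutate(data: bytes, flips: int = 16) -> bytes:
        buf = bytearray(data)
        for _ in range(flips):
            pos = random.randrange(len(buf))
            buf[pos] ^= random.randrange(1, 256)   # XOR a random byte with a random value
        return bytes(buf)

    def fuzz(seed_path: str, rounds: int = 1000) -> None:
        seed = open(seed_path, "rb").read()
        for i in range(rounds):
            with open("mutated.img", "wb") as out:
                out.write(mutate(seed))
            result = subprocess.run(["./parse_fs", "mutated.img"])
            if result.returncode < 0:   # negative return code means killed by a signal, i.e. a crash
                print("round %d: target crashed with signal %d" % (i, -result.returncode))

    fuzz("seed.img")

Real-world fuzzing of kernel file system code mounts the mutated image rather than running a user-space parser, but the loop is the same: mutate, feed, watch for crashes.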

What I wonder, though, is why these big companies' engineering teams don't have a process to find these bugs before the software is put into production. The same free fuzzing tools and techniques are available to the engineers as to the underground, so why aren't they using them as part of their development process at each step along the way? They actually have the source code… so it should be easier!

Big companies have been cutting corners in development, and especially in testing, in order to turn a bigger, quicker profit for their shareholders. Then the vulnerabilities come back to bite them, and the consumers who get exploited.

Eventually, maybe hundreds of years from now, all code will be open source and properly tested. People will realize that it is the only way to have secure software, and better processes will be put in place to ensure stable code. Until then, the MO_Bs (Month of ___ Bugs) will be one of the only checks and balances on the undertested software products being released today. Love them or hate them, the security researchers who find these flaws are doing the work that the engineering teams should have done pre-release.

Making Passwords Manageable

Recently, with the passing of the Thanksgiving holiday, many of us have paid closer attention to those things for which we are thankful. I, too, have just taken an assessment and realize I have a plethora of things for which I’m grateful, at home as well as at work.

I know this might sound trite, but in my work life, I am thankful for my password vault. I'm sure many of you know and use this simple software tool, but for those of you who do not, a password vault is an application that stores a list of all of your many passwords. What sets this type of tool apart from the plain text Word file where I used to store all my passwords is that it provides encryption. Now, I need only remember one password in order to access all of the rest!
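For the curious, the idea behind such a tool is simple: the one master password is stretched into an encryption key that protects the whole list at rest. Here is a rough sketch of that idea in Python, assuming the third-party cryptography package; the file name, salt size and iteration count are illustrative choices, not any particular product's format:

    import base64
    import json
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    # Rough password-vault sketch: the master password is stretched into a key
    # with PBKDF2, and that key encrypts the whole password list on disk.
    def _key_from_master(master: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600000)
        return base64.urlsafe_b64encode(kdf.derive(master.encode()))

    def save_vault(master: str, entries: dict, path: str = "vault.bin") -> None:
        salt = os.urandom(16)
        token = Fernet(_key_from_master(master, salt)).encrypt(json.dumps(entries).encode())
        with open(path, "wb") as out:
            out.write(salt + token)

    def load_vault(master: str, path: str = "vault.bin") -> dict:
        blob = open(path, "rb").read()
        salt, token = blob[:16], blob[16:]
        return json.loads(Fernet(_key_from_master(master, salt)).decrypt(token))

Lose the master password and the vault is gone, which is exactly the point: everything else is unreadable without it.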

This password vault has set me free! It has also enabled me to follow all of our corporate guidelines for password creation and updating. No longer do I simply change the number behind my bird's name! And I can easily change all my passwords every thirty days, whether a particular network requires it or not.

I know this has been a problem for more than just me. Often, as a part of security assessments, our staff will conduct a physical review of a client's workplace. During this "walk through", we often find post-it notes with passwords underneath mouse pads and on the computer monitors themselves! I always said I was more secure than that, since all my passwords were in a document on my hard drive. What I learned was that since my document was named "Passwords" and was in plain text, I was no safer than the person with the post-it note!

But the number of passwords I needed to remember, and the frequency with which they needed to be changed, was ever increasing. I wasn't sure what to do until someone suggested a password vault. There are many of them available now, both open source and off the shelf. All that I have seen are easy to install, sit as an icon in your taskbar or on your desktop, and are easy to use.

My message here is short and sweet. Get and use a password vault. You and your security team will both be glad you did!

Can Technology Alone Make Your Information Safe?

Have you ever thought to yourself: “If only they would build some kind of IDS or something that really works! A little box I could plug into my network that would tell me when someone was doing something they weren’t supposed to do. Then I could just kick back, and let technology secure my data. I wouldn’t have to worry at all!” Do you really think that is true?

During World War II, the Germans thought that their Enigma code machines couldn't possibly be compromised. After all, the Enigma was the epitome of high tech, years ahead of its time! They thought that their advanced technology would keep their data entirely safe. They were sure they didn't need to worry. Were they right? No! Not only was the Enigma compromised, it was compromised in short order by a combination of espionage, clever cognition and (yes) technology. If this instance of German reliance on high technology didn't cost them the war outright, it certainly made the war much shorter and cost the lives of thousands of German troops.

In the early 1960s, the United States military thought they no longer needed to mount guns on their new F-4 Phantom fighter. After all, the F-4 had new, high tech air-to-air missiles like the Sidewinder and Sparrow! The military thought no enemy would ever get close enough for guns to matter; aerial dogfights were supposed to be a thing of the past! Were they right? No! The enemy was able to exploit tactical errors and circumstance and get in too close for the vaunted high tech missiles to work! This over-reliance on high technology caused the death of American pilots and the loss of expensive aircraft!

In the 1980s and '90s, the CIA thought that there was very little need for human intelligence sources anymore. Why put agents on the ground when you can see what other countries are doing from space using high tech satellites, and hear what they are planning using high tech electronic surveillance and code-breaking equipment? The CIA thought they could save money and avoid putting their agents in danger by relying on these high tech solutions. Were they right? No! During the lead-up to the current war in Iraq, the CIA found that all the high resolution photographs and electronic intercepts they had told them next to nothing about the state of the Iraqi nuclear and biological programs. Without agents on the ground, the CIA was forced to rely on intelligence from such shaky sources as Saddam Hussein's own son-in-law and the few agents that other countries like Germany and Great Britain were able to recruit. The CIA concluded that Iraq had advanced weapons programs and that the U.S. and her allies were in imminent danger of attack. Were they right? No! The CIA's over-reliance on high technology and their failure to recruit human agents in the Gulf region helped lead to a full-scale war in Iraq that has cost thousands of lives!

Much the same thing is happening today with distributed computer information systems. Organizations think that better firewalls and intrusion detection systems are the answer. Are they right?

Twenty years ago the Internet was just starting to grow. Personal computers were getting more powerful, faster and more useful every day. Lots of software was appearing that made almost every business task easier to accomplish and keep track of. Businesses were able to streamline their operations and get a lot more work done with a lot fewer people. Everything was becoming more user friendly. Prices were down and profits were up!

Then the crackers started to appear. Information started to disappear! Computers suddenly stopped working! Data began getting corrupted and changed! Confidentiality was lost! Businesses and government agencies began to panic.

What was the problem? Why was this happening? Well, the main problem was that the Internet and transmission protocols that the Internet is based on were designed for the free and easy interchange of information; not security. And by the time people began to realize the importance of security, it was too late. The Internet was in place and being used by millions of people and thousands of businesses. People were unwilling to just scrap the whole thing and start over again from scratch! And there were other problems. The fact that the most widely used operating systems in the world are based on secret source code is a good example. Clever people can always reverse engineer operating code and expose its weaknesses.

So we are stuck with using an information technology system that cannot be reliably secured. And it cannot be reliably secured largely because of a technological flaw. So why would we think that technology alone could solve this problem?! It can't.

What government agencies and business organizations are coming to realize now is the need for a renewed emphasis on the application of operational and managerial security techniques to accompany their technology-based information security systems. A good example of this is the requirement by the FFIEC and the other financial agencies that financial institutions must use something more than single-factor authentication techniques (user name and password alone) to protect high risk transactions taking place over the Internet. Did they come right out and demand financial institutions use technology-based (and expensive!) solutions such as tokens or biometrics? No! The agencies happily, and I think wisely, left the particular solution up to each organization. They simply required that financial institutions protect their customer information adequately according to the findings of risk assessments, and they left plenty of room for financial institutions to apply layered operational and managerial security techniques to accomplish the task instead of once again relying solely on high tech.

And despite the insecurity and frustration this lack of clear guidance initially causes organizations, I think ultimately it will help them in establishing tighter, cheaper and more reliable information security programs. If financial institutions and businesses want to get off the merry-go-round of having to buy new IT equipment for security reasons seemingly every day, they are going to have to bite the bullet and do the security things that everyone hates to do. They are going to have to make sure that all personnel, not just the IT admins, know their security duties and apply them religiously. They are going to have to track the security of customer information through each step of their operations and ensure that security is applied at every juncture. They are going to have to classify and encrypt their data appropriately. They are going to have to lock up CDs and documents. They are going to have to apply oversight and double checks on seemingly everything! And everything will need to be written down.

At first this will all be a mess! Mistakes will be made! Time and money will be wasted! Tempers will flare! But the good thing is that once everyone in the organization gets the “security mind-set”, it will all get easier and better.

The fact is that once an information security program is fully developed and integrated, and all the bugs are worked out, it actually becomes easy to maintain. Personnel apply their security training without even thinking about it. Operating procedures and incident response plans are all written down and everyone knows how to get at them. And when personnel or equipment changes occur, they integrate smoothly into the system. Panic is virtually eliminated! And almost all of this is provided by the application of operational and managerial security techniques. In other words, policies and procedures.

So when your organization gets that required risk assessment done, and when you develop your required incident response and business continuity plans, don't just let them sit in the filing cabinet! Use them, and actually start applying them to your business. It will give your organization a head start on what is almost surely going to be a requirement in the future, and it could save you some money in the process!

The World Needs “Open Source Security Best Practices”

We continually receive client questions about best practices for a myriad of different ideas, technologies and strategies. Put four or five information security teams together and some of the basics shake out, but the higher-level best practices remain "under discussion".

We need a better way to make this happen. We need a Wikipedia-like, open source discussion mechanism for best practices that can bring people together, establish baselines and encourage discussion of the sticking points. I would have MSI attempt this, but as a vendor, we would likely be viewed as having a conflict of interest. That said, someone needs to support an interactive way to make this discussion feasible, free, open and accessible. SANS, OWASP, CISecurity and others are all good starts and highly powerful as organizations, but we need some open group to establish an open forum that creates, revises and reaches consensus on best practices for everything from system settings to physical access processes.

Perhaps this exists already and I just can’t seem to find it. But, neither can the other folks that ask for this type of information. If it is out there, we as infosec professionals need to do a better job of making it known.

If you have an organization willing to undertake such a project, or are willing to lead a group to undertake such a task – drop us a line. We would love to contribute.

Safe Travels For the Holidays

As we Americans depart for the Thanksgiving holiday, we often engage in a large amount of travel around the country. This year, I would like to have all of our readers pay special attention to the safety measures being used to protect you as you travel about.

On the roads, check out the numbers of police, their laser/radar guns and the automated systems they have been placing around the country for the last year or more. Do these deployments and tools really make you safer, or do they just make you feel safer?

At the airport, you will be asked to remove your shoes, place your laptop in a bin and put everything liquid into a clear plastic bag. Do any of these processes actually make you safer? Does having someone look at a clear liquid in a baggie make it more or less safe, or is this security theater?

Even trains, buses and other forms of public transportation have begun to deploy similar techniques and new technologies. What is the value of these mechanisms?

So, as you travel this year, please pay attention, ask questions and compare the implementations to the risks. Some of the steps out there certainly make sense and protect us. In my opinion, many others are simply a waste of time, money and resources, since they truly provide little more than a feeling of safety or security through theater.

You decide. Maybe together, enough of us can help those in charge of such things make better choices about solutions. Maybe we can get them to focus on real risks, real threats and effective mitigations…

Either way, have a safe and happy holiday!

A New Threat

A new threat in software has established itself in the last year: vulnerabilities in device drivers. Historically, security and drivers never had much in common, and it appears that this line of thinking is going to cause some severe headaches in the near future.

Just a few days ago it was announced that a severe vulnerability had been identified in Broadcom's wireless drivers: a buffer overflow condition in the SSID handler. Potentially, somebody driving around broadcasting a malicious SSID could compromise your machine while it simply sits there waiting to pick up a network. It is claimed that a reliable exploit for this already exists; fortunately, it hasn't been made public yet. If it does become public, it could be very dangerous. It's a kernel-level exploit, which means it's going to bypass any anti-virus measures on the computer. Broadcom was notified of the problem and updated their driver, but issued no security advisory. So far, it doesn't appear that any vendors that use Broadcom chipsets have updated their corresponding drivers.
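To see why an SSID can overflow anything at all, consider how the field is carried on the air. The following is only an illustrative sketch of the packet-format mismatch, not exploit code, and it assumes nothing about Broadcom's actual implementation:

    import struct

    # An 802.11 SSID information element is one byte of element ID (0), one byte
    # of length, then the SSID bytes. The standard caps an SSID at 32 bytes, but
    # the length byte itself can claim up to 255; a driver that copies the data
    # into a fixed 32-byte buffer without checking that length is exactly the
    # kind of flaw described above.
    def build_ssid_element(ssid: bytes) -> bytes:
        return struct.pack("BB", 0, len(ssid)) + ssid

    oversized = build_ssid_element(b"A" * 255)   # far longer than the 32-byte maximum
    print(len(oversized), "bytes in a single, malformed SSID element")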

This isn't the first occurrence of such a vulnerability. You may remember the Centrino vulnerabilities earlier this year; vulnerabilities were also identified in Apple's Wi-Fi drivers and, more recently, in Nvidia's video drivers for Linux, among others.

It’s time for hardware manufacturers to start thinking about security, and taking responsibility for any security issues just as every other software developer has to. It’s unfortunate this was not already the case, and it may be too late.

To Comply Or To Secure?

Yes, that is the question. Unfortunately, when it comes to information security, there is a difference between compliance and security. MSI was recently approached with a simple question concerning multi-factor authentication and what the regulations really are (or will be, for those bodies of legislation that are a little behind the power curve). A quick perusal of several different pieces of regulatory guidance (e.g., NCUA 748 and the FFIEC handbooks) indicates that, while they each call for the use of multi-factor authentication for high-risk transactions involving access to customer information or the movement of funds to other parties, there is very little guidance that dictates the level or complexity of the proposed authentication scheme. One "attempt" at guidance says that where a risk assessment has indicated that single-factor authentication is inadequate, financial institutions should implement multi-factor authentication, layered security, or other controls reasonably calculated to mitigate those risks. What in the world does that mean? To me, it means that a financial institution, provided a third-party risk assessment has been completed, can decide to use some implementation of authentication that may not be the most secure, as long as "they" believe it to be reasonable enough.

I currently bank with a financial institution that only requires a username and a password (at least 6 characters, one capital letter, and no special characters allowed) for me to log in to the online banking site and have unfettered access to my account. To me, this is an outrage!  Granted, I can change banks.  Unfortunately, I don’t believe there are very many options that offer a more secure authentication scheme.

At MSI, we set about to define our stance on multi-factor authentication and whether simply complying with the regulations goes far enough to secure that precious "member data". We were asked whether, instead of implementing a multi-factor authentication scheme, a solution that requires the use of a password and a security question (much like the age-old "mother's maiden name" question) would put a financial institution into compliance. The short answer: yes. The long answer: it depends on whether the financial institution "believes" it does. MSI's answer: not even close.

In these types of situations (where regulatory guidance is too "willy-nilly" to enforce a secure solution), organizations should look to industry best practices for guidance and implement a secure multi-factor authentication scheme that will go much further in protecting customer data.

Multi-factor authentication is meant to be difficult to circumvent. It requires the customer to be able to offer AT LEAST two of three possible forms of proof of identity (a short sketch of combining two of them follows the list below). Those forms are, in no particular order:

  1. Something you know (password, PIN)
  2. Something you have (ATM Card, Token, Smart Card)
  3. Something you are (Biometrics…fingerprint, hand print, retinal scan)
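As promised above, here is a minimal sketch of combining factor 1 and factor 2: a password check plus an RFC 4226/6238-style one-time password from a token or phone app. The secret handling and the surrounding login flow are simplified assumptions for illustration, not a recommendation of any specific product:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    # Minimal time-based one-time-password (TOTP) sketch. The "something you
    # have" is a token or phone app that shares a base32 secret with the server.
    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def authenticate(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
        # Both factors must check out: something you know AND something you have.
        return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))

Stealing the password alone is no longer enough; an attacker also needs the device that generates the codes.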

While ATMs have been using multi-factor authentication schemes since the beginning of time (at least for those Laguna Beach watchers in our audience), financial institutions continue to leave the most critical of vulnerabilities unchecked: an attacker exploiting the inability of a customer to keep their passwords to themselves. If those same financial institutions took the leap and offered a more secure authentication scheme, I believe the market would reward them handsomely. They'd get my money, as measly as the balance may be.

The moral of the story is that multi-factor authentication is meant to be difficult for all parties involved. Sure, all I hear is that security departments don't want to hinder their customers' or their employees' ability to get work done by requiring a difficult authentication scheme. That's the biggest complaint that surrounds multi-factor authentication. However, if it's easy for your customers to use, it's probably pretty darn easy for an attacker to use as well.

While the current regulations give many financial institutions a "cop-out" when deciding whether or not to implement a multi-factor authentication scheme, that should not mean the bottom line is always the deciding factor when protecting your customers' personal information. Industry best practices should drive this moral dilemma. A risk assessment, performed by a qualified third party, may indicate that the risk doesn't require a tough authentication scheme. I have to wonder whether that risk assessment bothered to contact any of the tens of thousands of people who have fallen victim to fraud or identity crimes because of poor authentication requirements.