Disagreement on Password Vault Software Findings

Recently, some researchers have been comparing password vault software products and have justifiably found some issues. However, many of the vendors are moving quickly to remediate the identified issues, most of which were simply improper use of proprietary cryptography schemes.

I agree that proprietary crypto is a bad thing, but I take issue with articles such as this one, in which the researchers suggest that using the built-in iOS functions is safer than using a password vault tool.

Regardless of OS, platform or device, I fail to see how depending on the OS’s embedded tools alone reduces risk to the user, compared to those same tools plus the additional layers of whatever mechanisms a password vault adds. It would seem that the additional layers of control (regardless of their specific vulnerability to nuanced attacks against each control surface) would still add overall security for the user and give the attacker more complexity to manage in a compromise.
 
I would love to see a model of a scenario where the additional controls reduce the overall security of the data. I could be wrong (it happens), but the models I have run all point to the same conclusion: even a flawed password vault wrapped in the OS controls is stronger and safer than the bare OS controls alone.
 
In the meantime, while the vendors work on patching their password vaults and embracing common crypto mechanisms, I’ll continue to use my password vault as is, wrapped in the additional layers of OS controls and added detection mechanisms my systems enjoy. I would suggest you and your organization’s users continue to do the same.

Information Security Is More Than Prevention

One of the biggest signs that an organization’s information security program is immature is an obsessive focus on prevention, with prevention treated as if it were the whole of security.

The big signs of this issue are knee-jerk reactions to vulnerabilities, a never-ending series of emergency patching situations, and a continual fire-fighting mode of reacting to “incidents”. The security team (or, usually, the IT team) is overworked, under-communicates, is highly stressed, and lacks both the resources and the tools to adequately mature the process. Sadly, the security folks often actually LIKE this environment, since it feeds their inner superhero complex.

However, time and time again, organizations that balance prevention efforts with rational detection and practiced, effective response programs perform better against today’s threats. Evidence from vendor reports such as the Verizon DBIR and Ponemon studies, law enforcement data, DHS studies, etc. has consistently supported the idea that balanced programs work much better. The current state of the threat easily demonstrates that you can’t prevent everything. Accidents and incidents do happen.
 
When bad things do come knocking, no matter how much you have patched and scanned, it’s the preparation you have done that matters. It’s whether or not you have additional controls like enclaving in place. Do you have visibility at various layers for detection in depth? Does your team know how to investigate, isolate and mitigate the threats? Will they do so in a timely manner that reduces the impact of the attacker or will they panic, knee-jerk their way through the process, often stumbling and leaving behind footholds of the attacker?
 
How you perform in the future is largely up to you and your team. Raise your vision, embrace a balanced approach to security and step back from fighting fires. It’s a much nicer view from here. 

Secure Networks: Remember the DMZ in 2012

Just a quick post to remind readers that everyone (and I mean everyone) who reads this blog should be using a DMZ, enclaved, network-segmentation approach for any and all Internet-exposed systems today. This has been true for several years, if not a decade. Just this week, I have talked to two companies who were hit by malicious activity that compromised a web application and gave the attacker complete control over a box sitting INSIDE their primary business network, with essentially unfettered access to the environment.

Folks, within IT network design, DMZ architectures are not just a best practice and a regulatory requirement, but an essential survival tool for IT systems. Punching a hole from the Internet to your primary IT environment is not smart, safe, or, in many cases, legal.
 
Today, enclaving the internal network is becoming best practice for securing networks, and enclaving/DMZ segmentation of Internet-exposed systems is simply assumed. So, take an hour, review your perimeter, and if you find Internet-exposed systems sitting inside your business network, make a plan and execute it. In the meantime, I’d investigate those systems as if they were compromised, regardless of what you have seen from them. At least check them over with a cursory review and get them out of the business network ASAP.
 
This should go without saying, but this especially applies to folks that have SCADA systems and critical infrastructure architectures.
 
If you have any questions regarding how you can maintain secure networks with enclaving and network segmentation, let us know. We’d love to help!

10 Ways to Handle Insider Threats

As the economic crisis continues, the possibility of an insider threat occurring within a company increases. Close to 50% of all companies have been hit by insider attacks, according to a recent study by Carnegie Mellon’s CERT Insider Threat Center. (Click here to access the page that has the PDF download, “Insider Threat Study.”)

It doesn’t help when companies are restructuring and handing out pink slips. Leaner departments often mean there are fewer employees to notice when someone is doing something wrong. Tough economic times may also tempt an employee to swap his ‘white hat’ for a black one for financial gain. Insider threats include employees, contractors, auditors, and anyone who has authorized access to an organization’s computers. How can you minimize the risk? Here are a few tips:

1. Monitor and enforce security policies. Update the controls and oversee implementation.

2. Initiate employee awareness programs. Educate the staff about security awareness and the possibility of them being coerced into malicious activities.

3. Start paying attention to new hires. Keep an eye out for repeated violations that may be laying the groundwork for more serious criminal activity.

4. Work with human resources to monitor negative employee issues. Most insider IT sabotage attacks occur following a termination.

5. Carefully distribute resources. Only give employees what they need to do their jobs.

6. If your organization develops software, monitor the process. Pay attention to the service providers and vendors.

7. Approach privileged users with extra care. Use the two-man rule for critical projects. Those who know technology are more likely to use technological means for revenge if they perceive they’ve been wronged.

8. Monitor employees’ online activity, especially around the time an employee is terminated. There is a good chance the employee isn’t satisfied and may be tempted to engage in an attack.

9. Go deep in your defense plan to counter remote attacks. If employees know they are being monitored, there is a good possibility an unhappy worker will use remote control to gain access.

10. Deactivate computer access once the employee is terminated. This will immediately end any malicious activity such as copying files or sabotaging the network.

Be vigilant with your security backup plan. There is no approach that will guarantee a complete defense against insider attacks, but if you continue to practice secure backup, you can decrease the damage. Stay safe!

4 Tips for Teaching Your Staff About Social Engineering

If there is one thing that is tough to prevent, it is a person whose curiosity overrides their better judgement. Human nature leans toward discovery. If someone believes a valuable piece of information is available, there’s a very good chance she will satisfy her curiosity.

Social engineering, the process of obtaining confidential information by tricking people into doing things they should not do, is on the rise. So how can you help your staff recognize social engineering before it’s too late?

Here are a few tips:

1. Create a process for validating outside inquiries.

Often, an attacker has done their homework, obtaining certain pieces of information, such as another employee’s name or calendar, to establish credibility. Create a process for inquiries, making someone the gatekeeper for such calls. Tell staff not to give out confidential information before checking with the gatekeeper.

2. Secure access into the organization.

Does your organization have guards? If not, it is the job of every employee to be alert to outsiders.

Name badges are another way to do this; require everyone to keep theirs visible. Explain to staff that it is perfectly legitimate to say, “I’m sorry, who did you say you were with again?” Teach awareness through fun exercises and safety posters.

3. Train staff to resist picking up strange USB keys.

This is difficult because it is where a person’s curiosity can get the best of them. However, a person has no idea what is on a found USB key. Would they eat food left on the floor of the kitchen? (Some, unfortunately, might!) Why would anyone take a found USB key and plug it into their computer? Curiosity. Create an incentive program for employees to return found keys to an IT administrator.

4. Fine tune a sense of good customer service.

Most people are helpful. This helpful nature is especially nurtured by organizations who want to provide good customer service to both internal staff and external contacts. Attackers take advantage of this by insisting that it would “be very helpful” if they could get someone’s confidential information in order to do their job. Train your staff to stick to the plan of verifying all inquiries by going through the proper channels. Help employees understand that this approach is truly the most “helpful” since they’ll be saving the company countless dollars if it’s an attack.

Consistent awareness is the key to resisting social engineering attacks. Use these tips and decrease your probability of an attack. Stay safe!

Malicious Exploits: Hitting the Internet Waves with CSRF, Part Two

If you’re the “average Web user,” running unmodified versions of the most popular browsers, you can do relatively little to prevent cross-site request forgery.

Logging out of sites and avoiding their “remember me” features can help mitigate CSRF risk. In addition, not displaying external images and not clicking links in spam or untrusted e-mails may also help. Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests, though this can significantly interfere with the normal operation of many websites.

The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests.

Web developers, however, have a better fighting chance to protect their users by implementing countermeasures such as:

  • Requiring a secret, user-specific token in all form submissions and side-effect URLs; the attacker’s site cannot put the right token in its submissions
  • Requiring the client to provide authentication data in the same HTTP Request used to perform any operation with security implications (money transfer, etc.)
  • Limiting the lifetime of session cookies
  • Checking the HTTP Referer header
  • Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls
  • Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies
  • Verifying that the request’s headers contain an X-Requested-With header, as used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure under a combination of browser plugins and redirects, which can allow an attacker to provide custom HTTP headers on a request to any website, allowing a forged request

One simple method to mitigate this vector is to use a CSRF filter such as OWASP’s CSRFGuard. The filter intercepts responses, detects whether each one is an HTML document, and inserts a token into the forms (and, optionally, script that inserts tokens into AJAX functions). The filter also intercepts requests to check that the token is present. One evolution of this approach is the double-submit cookie, for users who run JavaScript. If an authentication cookie is read using JavaScript before the POST is made, JavaScript’s stricter (and more correct) cross-domain rules will be applied. If the server requires requests to contain the value of the authentication cookie in the body of POST requests or in the URL of dangerous GET requests, then the request must have come from a trusted domain, since other domains are unable to read cookies from the trusting domain.
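For illustration, the per-user token idea can be sketched in a few lines of Python. This is a minimal sketch of the synchronizer-token pattern, not CSRFGuard itself; the session dictionary and function names are hypothetical stand-ins for whatever your framework provides:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Mint a random per-session token and stash a copy server-side."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token  # the same value is embedded in each rendered form
    return token

def is_valid_csrf(session, submitted_token):
    """Compare the form-submitted token to the session copy in constant time."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    return hmac.compare_digest(expected, submitted_token)
```

The attacker’s page can force the victim’s browser to submit a form, but it cannot read the token out of the victim’s session or page, so the comparison fails.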

Checking the HTTP Referer header to see if the request is coming from an “authorized” page is a common tactic employed by embedded network devices because of its low memory requirements. However, a request that omits the Referer header must be treated as unauthorized, because an attacker can suppress the Referer header by issuing requests from FTP or HTTPS URLs. This strict Referer validation may cause issues with browsers or proxies that omit the Referer header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash to generate GET or POST requests with arbitrary HTTP request headers using CRLF injection, and similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. To prevent forgery of login requests, sites can apply these CSRF countermeasures in the login process, even before the user is logged in. Finally, sites with especially strict security needs, like banks, often log users off after (for example) 15 minutes of inactivity.
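The strict Referer policy described above can be sketched as follows. The trusted hostname is an invented example, and the plain header dictionary stands in for a real request object:

```python
from urllib.parse import urlparse

TRUSTED_HOST = "bank.example.com"  # hypothetical site being protected

def referer_permits(headers):
    """Strict policy: a missing or cross-origin Referer rejects the request."""
    referer = headers.get("Referer")
    if not referer:
        return False  # a suppressed Referer is treated as unauthorized
    return urlparse(referer).hostname == TRUSTED_HOST
```

Note that this sketch inherits the drawback discussed above: legitimate users behind privacy-stripping proxies will be rejected too.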

Following the HTTP specification’s intended usage for GET and POST, in which GET requests never have a permanent effect, is good practice but is not sufficient to prevent CSRF. Attackers can write JavaScript or ActionScript that invisibly submits a POST form to the target domain. However, filtering out unexpected GETs prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through <script> elements (JavaScript hijacking); it also prevents (non-security-related) problems with some web crawlers, as well as with link prefetching.
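One way to enforce that GET/POST discipline server-side is a small dispatch guard that refuses to run state-changing handlers on “safe” methods. This is an illustrative sketch, not any particular framework’s API:

```python
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def dispatch(method, handler, mutates_state):
    """Run handler only if the HTTP method matches its side-effect profile."""
    if mutates_state and method.upper() in SAFE_METHODS:
        # e.g. a malicious <img src=...> firing a GET at a money-transfer URL
        return (405, "state-changing operations require POST")
    return (200, handler())
```

This stops the image-URL class of attack, but as noted above, a forged POST still sails through unless it also carries a valid token.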

I hope this helps when dealing with this malicious exploit. Let me know how it works out for you. Meanwhile, stay safe out there!

Audio Interview with a CIO: Dual Control of Computers for Security

Recently, Brent Huston, CEO and Security Evangelist for MicroSolved, had the opportunity to sit down with Dave, a CIO who has been working with dual control for network security. 

Brent and Dave talk about intrusion detection, dual control, and a few other information security topics, including these questions:

  • What is collusion and how can it pay off?
  • How does it work with dual control?
  • What are some dual control failures?

Click here to listen in and let us know what you think. Are you using dual control?

Brute-Force Attacks Reveal Band Tour Dates Before Official Announcement

As many of my friends know, I have a slightly unhealthy obsession with the band Phish.  Yes, that Phish from Vermont. The band whose reputation rides the coattails of Jerry Garcia & Co., traveling from city to city and playing wanked-out, 30-minute versions of songs to a bunch of patchouli-and-Birkenstock-wearing hippies.

While that is only partially true (or, some would argue, mostly true), many “phans” or “glides” are actually quite resourceful and technically cunning.

Since the band’s inception (and taking a cue from The Grateful Dead), they’ve encouraged concert-goers to audio tape performances and trade those tapes, thus spreading their music far and wide.

More recently, the band has included a free MP3 download code on each ticket, and fans can listen to a “crispy board” recording literally hours after the show has ended. A cooperative Google spreadsheet was established to document and source every known performance of the band’s storied career, with links to digital and audience recordings. For those who aren’t interested in downloading every show in the band’s archive, one can actually obtain an external hard drive with the music and a handful of videotaped performances already loaded.

If owning the entire catalog is too much music to sift through, fans have put together a number of compilations, including “Machine Gun Trey,” “The Covers Project,” and a chronological “Best Of” version of the band’s songs — all labeled, sourced and ready to be downloaded into iTunes, with album art, of course.

While access to the band’s previous shows does quell the senses of their rabid fans, it does nothing but amplify the anticipation of upcoming tours.  For a band that has a reputation of traveling from town to town, fans have come to expect Fall/Holiday/Spring/Summer Tour announcements to come about the same time each year.

Rumors began to circulate weeks ago about where and when the band might be playing this summer. Recon missions for tour dates, along with some good old-fashioned social engineering, confirmed a date here and there. Unsuspecting venue employees’ mentions of a “2 day hold on a venue” were fitted together like puzzle pieces.

On February 28th, 2012, anticipation reached a near-fever pitch as the expected “Tour Dates at Noon” came and went without an official announcement from the band or its management. With only one official date announced on www.phish.com, for Bonnaroo (a four-day, multi-stage camping festival held on a 700-acre farm in Manchester, Tennessee), Internet-savvy fans began a brute-force attack on the website, with surprisingly accurate results.

By changing the URL of the band’s website using the intelligence and rumored concert dates gathered during the social engineering exercise, a more accurate touring calendar began to reveal itself. A correctly guessed URL didn’t return a “404 – Not Found” page, but rather the message “You don’t have access rights to this page.” Fans knew they were on to something, and my Facebook friends began to make travel plans for a tour that hadn’t been officially announced. This “leg up” could make the difference between a hotel bed close to the venue and car-camping on a hot July evening in a field somewhere nearby. It could also mean a difference in airfare, days off work, or even rental car availability.
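The signal the fans were reading boils down to the difference between HTTP status codes. A hypothetical probe classifier (no real URLs involved) might look like this:

```python
from http import HTTPStatus

def classify_probe(status_code):
    """Interpret a guessed-URL response: 403/401 leaks that a page exists."""
    if status_code == HTTPStatus.NOT_FOUND:
        return "absent"   # no such tour-date page was ever created
    if status_code in (HTTPStatus.FORBIDDEN, HTTPStatus.UNAUTHORIZED):
        return "hidden"   # page exists but is access-controlled: a leak
    if 200 <= status_code < 300:
        return "public"
    return "unknown"
```

Had the site returned a uniform 404 for both missing and unpublished pages, the tour dates would have stayed secret until the announcement.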

The official announcement came shortly after 12:00 PM on Leap Day 2012 (perfectly fitting for Red, Henrietta, Leo, and Cactus), complete with a professionally produced video of Phish drummer Jon “Greasy Physique” Fishman preparing like Rocky for an upcoming bout. At the bottom of the video, the band’s summer tour plans streamed by, confirming with surprising accuracy what many fans had known 24 hours beforehand.

Not bad for a bunch of stinky hippies!

Reflections on a Past Vulnerability, Kind Of…

 Recently, someone asked me about a vulnerability I had found in a product 15 years ago. The details of the vulnerability itself are in CVE-1999-1141 which you can read for yourself here.

Apparently, some of these devices are still around in special use cases and some of them may not have been updated, even now, 15 years after this issue came to light and more than 13 years after Mitre assigned it a 7.5 out of 10 risk rating and an associated CVE id. That, in itself, is simply shocking, but is not what this post is about.

This post is about the past 15 years since I first made the issue public. At that time, both the world of infosec and I were different. I still believed in open disclosure, for example. However, shortly after this vulnerability research experience, I started to choke back on that belief. Today, I still research and discover vulnerabilities routinely, but I handle them differently.
 
I work with the vendor directly, consult with their developers and project teams as much as they let me, and then allow them to work through fixing their products. Some of these fixes take a very, very long time and some of them are relatively short. Sometimes the vendors/projects give me or MicroSolved public credit, but often they do not. Both are OK under the right circumstances, and I am much happier when the vendors ask us if we want to be credited publicly, but I am content if they fix the problems we find in many cases. We do our very best to be non-combative and rational with all of them in our discussions. I think it is one of the reasons why application and device testing in our lab is so popular — better service and kindness go a long way toward creating working relationships with everyone.
 
Now, I don’t want to dig into the debate about open disclosure and non-disclosure. You may have different opinions about it than I do, and I am perfectly fine with that and willing to let you have them. I choose this path in vulnerability handling because in the end, it makes the world a safer place for all of us. And make no mistake, that’s why I do what I do nearly every day and have done what I have done for more than 20 years now in information security.
 
That’s really what this post is about. It’s about change and commitment. I’m not proud of releasing vulnerability data in 1997, but I’m not ashamed of it either. Times have changed and so have I. So has my understanding of the world, crime and security. But at the bottom of all of that change, what remains rock solid is my commitment to infosec. I remain focused, as does MicroSolved, on working hard every day to make the world a safer place for you and your family.
 
In November of 2012, MSI will enter its 20th year in business. Twenty years of laser focus on this goal, on the work of data protection, and on our customers. It’s an honor. There is plenty of tradition, and plenty of change to reflect on. Thanks to all of you for giving me the opportunity to do so.
 
Now that I have nostalgia out of the way, if you are still using those old routers (you know who you are), replace those things! 
 
As always, thanks for reading and stay safe out there! 

Credit Unions and Small Banks Need Strong Security Relationships

With all of the attention in the press these days on the large banks, hacking, and a variety of social pressures against financial institutions, it’s a good time to remember that credit unions and small banks abound around the world, too. They may offer an alternative to the traditional big banking you might be seeking, but they often lack the large, well-staffed information security teams that big banks can bring to bear against attackers and cyber-criminals.
 
While this shouldn’t be a worry for you as a consumer (in that your money is secure in a properly licensed and insured institution), it should be a concern for those tasked with protecting the data assets and systems of these organizations.
 
That’s where strong vendor relationships come in. Partnerships with good solution providers, security partners, virtual security teams and monitoring providers can be very helpful when there are a small number of technical resources at the bank or credit union. Ongoing training with organizations like SANS, CUISPA and our State of the Threat series is also very likely to assist the resources they do have in being focused against the current techniques used by attackers. Whether with peers or vendors, relationships are a powerful tool that help security admins in the field.
 
Smaller organizations need to leverage simple, effective and scalable solutions to achieve success. They simply won’t have the manpower to manage overwhelming alerts, too many log entries or some of the other basic mechanisms of infosec. They must either invest in automation or strategically outsource some of those high-resource functions to get them done. If your bank has a single IT person who installs systems, manages software, secures the network, helps users, and never goes on vacation, you have one overwhelmed technician. Unfortunately, this is all too common. Even worse, the things that can’t be easily done sometimes end up forgotten, pushed off or simply ignored.
 
In some cases, where some of the security balls may have been dropped, attackers take advantage. They use malware, bots, social engineering and other techniques to scout out a foothold and go to work on committing fraud. That’s a bad way to learn the lessons of creating better security solutions.
 
So, the bottom line is if you are one of these smaller organizations, or one of the single technicians in question, you need to find some relationships. I suggest you start with your peers, work with some groups in your area (ISSA, ISACA, ISC2, etc.) and get together with some trusted vendors who can help you. Better to get your ducks in a row ahead of time than to have your ducks in the fire when attackers come looking for trouble.