About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

Some Potential DNS Poisoning Scenarios

We have been breaking down the DNS cache poisoning exploit scenarios and sorting them into three different “piles”.

1) Massive poisoning attacks – these would be used as a denial-of-service-style attack to “cut an organization off from the Internet,” or at least from key sites. The damage from this one could be low to medium, and the attack is likely to be discovered fairly quickly, though tracking down the issue could be difficult for organizations without adequate technical support or on-site IT teams.

2) Large-scale attacks with malware intent – these would be executed largely in an attempt to introduce malware into the organization. Browser exploits, client-side exploits or forms of social engineering could be used to trick users into activating the malware. These attempts would likely introduce bot-net agents into the organization, giving attackers remote control of part or all of the environment.

3) Surgical poisoning attacks – these would be more focused and much more difficult to identify. In this case, the attackers would poison the cache entries for sites that they knew to be critical. The targets could be as obvious as the Windows Update sites or as focused as the banking or stock-trading sites used by executives. This attack platform is likely to be aimed at specific effects and will likely be combined with social engineering to gain insight into the specifics of the target.

There certainly may be a myriad of additional scenarios or specific focus points for the attacks, but we wanted to give some examples so that folks can be aware of where attackers may go with their new toys and techniques.

Doing incident response and forensics on these attacks could be difficult, depending on the cache time-to-live settings and the logging done on the DNS systems. Now might be a good time to review both of these variables to make sure they will be adequate to examine any attack patterns, should they be discovered now or in the future, from this or any other poisoning attack vector.

As we stated earlier, please do not rely on the idea that recursion is only available from internal systems as a defense. That might help protect you from the “click and drool” exploits, but WILL NOT PROTECT YOU from determined, capable attackers!

Myriad of Ways to Trigger Internal DNS Recursion – Please Patch Now!

For those organizations that have decided not to patch their DNS servers because they feel protected by controls that only allow recursion from internal systems, we just wanted to point out that there are a number of ways an attacker can cause a recursive query to be performed by an “internal” host.

Here is just a short list of things that an attacker could do to cause internal DNS recursion to occur:

Send an email with an embedded graphic from the site that they want to poison your cache for, which will cause your DNS to do a lookup for that domain if it is not already known by your DNS

Send an email to a mail server that does reverse lookups on the sender domain (would moving your reverse lookup rule down in the rule stack of email filters help minimize this possibility???)

Embed web content on pages that your users visit that would trigger a lookup

Trick users through social engineering into visiting a web site or the like

Use a bot-net (or other malware) controlled system in your environment to do the lookup themselves (they could also use this mechanism to perform “internal” cache poisoning attacks)
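The first vector in the list above can be made concrete with a short sketch. The snippet below builds an HTML email carrying a remote image; when a mail client renders it, fetching the image forces the recipient's resolver to look up the attacker-chosen domain. The domain name used here is a hypothetical placeholder, not a real attack target.

```python
# Sketch of the "embedded graphic" recursion trigger: an HTML email whose
# rendering forces a DNS lookup of an attacker-chosen domain.
from email.mime.text import MIMEText

def build_lookup_trigger_email(domain: str) -> MIMEText:
    """Build an HTML email whose rendering triggers a DNS lookup of `domain`."""
    html = (
        "<html><body>"
        "<p>Quarterly report attached.</p>"
        # A 1x1 remote image is enough to cause the resolver to query `domain`.
        f'<img src="http://{domain}/logo.png" width="1" height="1">'
        "</body></html>"
    )
    msg = MIMEText(html, "html")
    msg["Subject"] = "Quarterly report"
    return msg

message = build_lookup_trigger_email("victim-bank.example.com")
```

Note that the user never has to click anything; simply previewing the message in a client that loads remote images is enough, which is why mail-client image blocking is a useful (if partial) control here.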

The key point here is that many organizations believe that the fact that they don’t allow recursion from external hosts makes them invulnerable to the exploits now circulating in the wild for the DNS issue at hand. While they may be resilient to the “click and drool” hacks, they are far more vulnerable than they believe to a knowledgeable, focused, resourced attacker who might be focused on their environment.

The bottom line solution, in case you are not aware, is to PATCH YOUR DNS SYSTEMS NOW IF THEY ARE NOT PATCHED ALREADY.

Please, do not wait, active and wide scale exploitation is very likely in the very near future, if it is not underway right now!

DNS Exploit is in the Wild – Patch NOW!!!

Unfortunately, the blackout period for the DNS issues has been broken. The exploit details have been made public and have been in the wild for a number of hours. While the security researchers involved have tried to remove the details and analysis, Google had already cached the site and the details are now widely known.

Please patch IMMEDIATELY if you have not already done so!

If you cannot patch your existing DNS product, please switch to a patched public DNS (for Internet resolution) or deploy OpenDNS as soon as possible.

Here is a quick and dirty plan of action:

1. Catalog the DNS servers you use on the Internet and internally. Be sure you check all branch locations, firewalls and DHCP servers to ensure that you have a complete picture. If you find any Internet-facing DNS server with recursion enabled, disable it ASAP!

2. Verify that each of these DNS implementations is patched or not vulnerable. You can check for the vulnerability by using the “Check DNS” tool at Mr. Kaminsky’s page, here.

3. Test the patch and get it implemented as quickly as possible.

4. Note that you may have to upgrade firmware and software for firewalls, packet filters and other security controls to enable them to understand the new DNS operations and keep them from interfering with the new way that DNS “acts”.

Please note that the exploit for this cache poisoning attack is now public and exploitation on a wide scale could already be underway. PATCH AS SOON AS POSSIBLE!

Symptoms to look for include:

Vulnerability: unpatched systems and non-random source ports on DNS queries and responses.

Exploit: if you are an authoritative DNS server for a domain, check for a large number of non-existent subdomains in your DNS records (or subdomain requests in your logs). According to some researchers, attackers will be poisoning caches using requests for random subdomains.
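The source-port symptom above can be roughly checked from a packet capture of outgoing DNS queries. The heuristic below is a minimal sketch, not a substitute for the “Check DNS” tool: a patched resolver should show many distinct, well-spread source ports, while an unpatched one often reuses a single fixed port or a tiny range. The sample port lists and thresholds are illustrative assumptions.

```python
# Rough heuristic for the "non-random source ports" symptom: given the UDP
# source ports seen on a resolver's outgoing DNS queries, flag behavior that
# looks like the old fixed-port (unpatched) pattern.
from collections import Counter

def looks_unpatched(source_ports, min_distinct=10):
    """Flag a resolver whose query source ports show little variety."""
    counts = Counter(source_ports)
    distinct = len(counts)
    top_share = max(counts.values()) / len(source_ports)
    # Few distinct ports, or one port dominating, suggests fixed-port behavior.
    return distinct < min_distinct or top_share > 0.9

fixed_ports = [53] * 50                       # classic unpatched behavior
randomized = list(range(20000, 20050))        # 50 distinct high ports
```

A real check should look at far more than 50 queries, and port randomness alone does not prove the transaction IDs are also random; treat this only as a quick triage aid.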

If you have questions or concerns, please contact MSI for more information or assistance.
Updates to our DNS paper and other details will be released soon, so check back with stateofsecurity.com for updates.

MicroSolved is Hiring!

We are seeking a new member for our team of security analysts, engineers and consultants. This is a junior level, full time, salary position. We are seeking technicians with the following skills and interests. You do NOT need security experience, as we will teach the successful applicant our award-winning methodologies and approaches to information security.

What you bring:

Technical Skills:

Knowledge of Perl, PHP and/or Python or other programming language(s)

Knowledge of Windows and/or Linux/OS X/BSD

Understanding of basic IP networking, TCP protocols and network troubleshooting, etc.

Personal Skills:

Ability to work as a member of an elite team

Personal diligence, attention to detail and a dedication to learning and exploring infosec topics

Self reliance, initiative and the ability to pass a full background check

An already existing capability to work in the United States

Flexibility and great customer service skills

This position is located in Columbus, Ohio and physical presence is required. Some occasional business travel will be required, usually in 3-5 day increments.

What we bring:

A unique business casual atmosphere with the most dedicated, enthusiastic and technically capable team that you can find.

A full benefits package including health, life and disability insurance, 401(K) with match, performance-based bonuses, paid vacations and personal time and much more.

Ongoing training programs and involvement in the information security community.

How to apply:

To apply to join our team, please send your resume, a technical writing sample and salary requirements to “jobs [at] microsolved [dot] com”.

Be sure to include the writing sample and salary requirements as incomplete submissions will not be reviewed.

Please, no phone calls, headhunters or third parties.

We are only interested in talking directly to folks who want to join our team and are willing to make the personal commitment to be the best at what they do. If this does not describe you, then please, ignore this posting. 😉

Content Management System Research Project – Some Results

As I mentioned earlier, our team has been doing some research on popular content management systems and potential security vulnerabilities in them. We were doing this as part of a review of the Syhunt Sandcat4PHP product that our partner has released.

As a part of that project, we have identified significant vulnerabilities in each of the popular content managers we reviewed. Several of the products were found to have various types of injection vulnerabilities (SQL/command/etc.), arbitrary file disclosure and access issues and tons of cross-site scripting (XSS) problems. We are now in the process of notifying each of the product teams about the vulnerabilities we identified.

How bad were things? One word, abysmal…

Here is an inside glimpse of the raw math of the scanning tool’s findings:

CMS           Injections & File Issues     XSS     “Risk Rating”
================================================================
Bitweaver                37                  7          42.25
Drupal                   97                  2          98.50
Joomla                    4                 15          15.25
Mambo                    45                207         200.25
WordPress                 5                166         129.50

** The “risk rating” was based upon each injection and file issue being given a score of 1.0 and each XSS being given a score of .75, then adding them together. It should be noted that this was an arbitrarily chosen mechanism created to give a simple basis for comparison and is NOT reflective of any specific risk rating system or the like. Also, no general weighting or anything is included, so I use the term “risk” loosely…
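The rating formula described above is simple enough to reproduce directly. The snippet below recomputes the table's “risk rating” column from the raw counts (1.0 per injection/file issue, 0.75 per XSS); as noted, the formula carries no weighting, so the scores are only a rough basis for comparison.

```python
# Recompute the "risk rating" column: injections/file issues score 1.0 each,
# XSS findings score 0.75 each, summed per product.
FINDINGS = {
    "Bitweaver": (37, 7),
    "Drupal":    (97, 2),
    "Joomla":    (4, 15),
    "Mambo":     (45, 207),
    "WordPress": (5, 166),
}

def risk_rating(injections: int, xss: int) -> float:
    return injections * 1.0 + xss * 0.75

ratings = {cms: risk_rating(inj, xss) for cms, (inj, xss) in FINDINGS.items()}
```

Running this reproduces the table's final column exactly (e.g. Bitweaver: 37 + 7 × 0.75 = 42.25).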

I also dropped the data into InspireData, a quick and dirty visualization tool I like to play with. It produced these quick images (Note that you can download them for a clearer view):

CMSRiskScore.jpg

This graph shows a plot of the “risk score” by the product tested.

CMSByVulnMap.jpg

This graph shows a matrix of the products plotted across an axis for Injections and File Leaks and an axis for XSS. The red lines show the mean values of the plot, for quick reference.

As I said before, our team is in the process of contacting each of the CMS projects that we tested and will be disclosing the vulnerability information to them for their mitigation. Our team did some basic testing and analysis on the data that the Syhunt tool found and determined it to be pretty good at finding the issues. We found very few false positives, and the ones we did find were areas where other functions are involved in testing inputs beyond the initial layer of the source code.

The Syhunt tool did very well. It is a great tool for a 1.0 release and very much worth the cost. If you have PHP and JavaScript applications in your environment, I would suggest grabbing your team a copy. If you have applications that you would like tested by a third party, please feel free to contact us for a quote. Let us know if we can be of any assistance or if you have questions about what we did or the like.

Please note that we will NOT be making disclosures of the identified vulnerabilities at this time, so don’t ask. We will be working with the project teams to mitigate any vulnerabilities identified.

Note that all products were downloaded from public sources and are “open” projects. Versions were current as of the download date. We only scanned the source of core products, no plugins/add ons/expansions or modules outside of the core products were tested in this project. Your paranoia may vary and you should not take any of the results of these tests as advice or endorsement of any of these projects or products. Use the results at your own risk…… 😉

DNS Patches May Break Some Things…

I just had a quick conversation with an IT technician who alluded to the idea that more than Zone Alarm may be broken by the new port randomization behaviors of “patched DNS”. These fundamental changes to the ports allocated for DNS traffic may confuse existing firewalls and other filtering devices that are unaware of the changes to DNS behaviors.

For example, if you have filtering devices that have specific port ranges defined for egress or ingress of DNS traffic, especially if you are using a non-stateful device, the configuration may need to be changed to allow for the greater port range used by the “patched DNS” setup. Systems that are “DNS aware” might not expect the randomization of ports that the patching introduces. As such, filtering devices, especially at the perimeter, may well need to be reconfigured or upgraded to allow for continued operation of unimpeded DNS traffic.
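The filtering problem can be seen with a small sketch. A non-stateful rule that only permits a narrow source-port range will silently drop queries from a patched server, which now sends from across the ephemeral range. The ranges below are illustrative assumptions, not recommended firewall settings.

```python
# Illustration of the filter/patch mismatch: a legacy rule allowing only a
# narrow DNS source-port range drops traffic from a patched server that now
# randomizes across the full ephemeral range.
OLD_RULE = range(1024, 5000)     # narrow range a legacy filter might allow
EPHEMERAL = range(1024, 65536)   # roughly what patched DNS can now use

def dropped_by_filter(source_port: int, allowed=OLD_RULE) -> bool:
    """True if the filter would drop a DNS query with this source port."""
    return source_port not in allowed

# A randomized port like 54321 is fine under a full ephemeral-range rule,
# but the legacy rule drops it, breaking resolution after the patch.
```

This is why stateful inspection, or simply widening the permitted range, is part of the patch rollout rather than an optional cleanup step.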

There may be SEVERAL other nuances that become evident in some environments as the patch process for the DNS issue continues to evolve. Stay tuned to stateofsecurity.com and other security venues for information and guidance as it becomes available.

More on DNS Security Issue Management – Know & Control DNS + SOHO Issues

Just added this to Revision 2 of the whitepaper:

Attack Vector Management

Part of mitigating the risk of this security issue is also managing the availability of the attack vector. In this case, it is essential that security teams understand how DNS resolution operates in their environment. DNS resolution must be controlled to the greatest extent possible. That means that all servers and workstations MUST be configured to use a set of known, trusted and approved DNS servers whenever possible. In addition, proper egress filtering should be implemented to prevent external DNS resolution and contact with port 53 on unknown systems. Without control over desktop and server DNS use, the attack vector available for exploitation becomes unmanageably large. Upper management must support the adoption of these controls in order to prevent compromise as this and other DNS vulnerabilities evolve.
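The “known, trusted and approved DNS servers” control above can be audited with a short script. The sketch below compares the resolvers a host is actually configured to use against an approved list; the approved addresses and the sample config are placeholders for your own internal servers.

```python
# Audit sketch for the attack-vector management control above: flag any
# configured resolver that is not on the approved internal list.
APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}  # placeholder internal servers

def parse_resolv_conf(text: str):
    """Pull nameserver IPs out of a resolv.conf-style configuration."""
    return [line.split()[1] for line in text.splitlines()
            if line.startswith("nameserver") and len(line.split()) > 1]

def unapproved_resolvers(configured):
    """Return configured resolver IPs that are not on the approved list."""
    return sorted(set(configured) - APPROVED_RESOLVERS)

sample = "nameserver 10.0.0.53\nnameserver 192.0.2.44\n"
rogue = unapproved_resolvers(parse_resolv_conf(sample))
```

Pairing a check like this with egress filtering on port 53 catches both misconfigured hosts and malware that tries to use its own resolver.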

Home User and Small Office Vulnerability

Home users and small offices (or enclaves within larger organizations) should pay careful attention to how their DNS resolution takes place. Many home and small business firewall devices such as Linksys, D-Link, Netgear, etc. are likely to be vulnerable to these attacks and are quite UNLIKELY to be patched to current firmware levels. Efforts must be made to educate home and small office users about this issue and to update all of these devices as the patches and upgrades to their firmware become available.

DNS Security Issue Overview & Mitigation Whitepaper

Our engineering team has analyzed the available data on this emerging security issue and the fixes identified. As such, we have prepared the following white paper for our clients and readers.

Please review the paper and feel free to distribute it to your management team, co-workers and others who need to be involved in understanding and remediating the problems emerging with DNS.

You can obtain the white paper here.

If your organization needs any assistance in understanding or managing this vulnerability, please do not hesitate to contact us. We would be happy to assist in any way possible.

HoneyPoint Security Server Console Upgrade and New Deployment Worksheet Available

A new version of the HoneyPoint Security Server Console was released today. Version 2.51 includes two bug fixes and several library upgrades. The new release seems to be a bit faster on Windows systems, likely due to upgrades in the back-end libraries.

The new version fixes a bug in the math of the email alerts to system administrators, where the wrong event counts would be included. It also repairs a bug that caused a crash on some systems when changing the status of multiple events. While neither of these bugs is critical, we thought the speed changes were worth a release.

The new version also includes the recently updated User Guide that now includes full instructions for installing the HPoints as a service or daemon using common tools or the tools from the resource kit.

We are also pretty happy to announce the availability of a deployment worksheet that guides new users through the deployment of the console and HPoints and helps them gather and define the information needed to do a full roll out.

We are hard at work on new HPoints and we have several that are finishing the testing process, so stay tuned for more releases soon. Updates are also underway to the Personal Edition (including a whole new GUI) and we are just starting to plan for version 3 of the console, so if you have suggestions, send them in.

Both the updates and the deployment guide are now available on the FTP server. Please use your credentials assigned when you made your product purchase to download them. If you need assistance, simply give us a call!

Corporate Data Classification

One of the most urgent steps that many organizations are facing in their information security program is that of data classification. While this, and role-based access controls, are two of the most critical processes in the changing security landscape, they are also two of the most painful. Many organizations do not even know where their data is located, stored, processed or used to a full extent and are spending a great deal of resources just understanding “what they have” and “how it is used”.

While knowing where the data is and how it is used is essential, organizations must also embrace some type of mechanism for classifying data. In some cases this can be as easy as creating a standard set of data definitions such as Private Identity Data, Internal Use Only, Customer Confidential, etc. and then building a policy around how data of each type is to be created, managed, stored, processed, handled and destroyed. For many small businesses, this can be a relatively small undertaking and when done right can provide a real improvement in security – IF EVERYONE FOLLOWS THE RULES.

In larger organizations, classifications may be more diverse. There may be Private Employee Identity Data, Private Employee Healthcare Data, Customer Private Identity Data, Internal Use Only, Customer Confidential or others. Many organizations even go a little wild with this and build small acronyms and/or a legend into their policy, so that you can label a Word document of a client contract something like “CCC” for “Customer Confidential – Contracts”, or even worse, they will add a department code followed by some acronym that the department heads have made up. This is where the pain gets excruciating!

At MSI, we are big supporters of keeping the classifications as simple as possible. In most cases we are able to stick with “PII” for personal identity information, “Internal Use Only” for sensitive data not to be released outside of the company, “Confidential” for data that must be protected from all eyes except the intended participants and maybe a small set of divisions for other data outside of these such as HR, Finance, M&A, HIPAA, GLBA, etc. depending on what groups need to access the data or what regulations apply to the data. Of course, these can then be added to folder names, document headers, meta-tags and the myriad of other places used to quickly identify data.
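A simple scheme like the one above maps naturally onto a small lookup table of labels and baseline handling rules. The sketch below is illustrative only; the rules shown are example assumptions, not MSI's actual control set.

```python
# Sketch of a minimal classification scheme: a small, fixed set of labels
# mapped to baseline handling rules.  Rules are illustrative examples.
HANDLING = {
    "PII":               {"encrypt_at_rest": True,  "external_share": False},
    "Internal Use Only": {"encrypt_at_rest": False, "external_share": False},
    "Confidential":      {"encrypt_at_rest": True,  "external_share": False},
    "Public":            {"encrypt_at_rest": False, "external_share": True},
}

def controls_for(label: str) -> dict:
    """Look up baseline rules; unknown labels fail closed to Confidential."""
    return HANDLING.get(label, HANDLING["Confidential"])
```

The fail-closed default is the important design choice: an unlabeled or misspelled classification should get the strictest treatment, not the loosest.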

Once you get your head around a working group of classifications, then comes the next task – identifying the appropriate controls for each type of data. That process takes experience, insight into specific business processes and a lot of patience. Start with data classification, though, and then build from there. As security evolves and becomes more nuanced, those with data classification schemes in place will be ahead of the coming curve. In the future, not all data will be treated or regulated the same, so make it easy on yourself and get started with data classification as soon as you can!