About Phil Grimes

Phil Grimes was a Security Analyst for MicroSolved, Inc.

Malicious Exploits: Hitting the Internet Waves with CSRF, Part Two

If you’re the “average Web user” running an unmodified version of one of the most popular browsers, there is relatively little you can do to prevent cross-site request forgery.

Logging out of sites and avoiding their “remember me” features can help mitigate CSRF risk; in addition, not displaying external images and not clicking links in spam or untrusted e-mails may also help. Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests, although this can significantly interfere with the normal operation of many websites.

The CsFire extension (also for Firefox) can mitigate CSRF with less disruption to normal browsing by removing authentication information from cross-site requests.

Web developers, however, have a better fighting chance to protect their users by implementing countermeasures such as:

  • Requiring a secret, user-specific token in all form submissions and side-effect URLs; the attacker’s site cannot put the right token in its submissions (see the sketch after this list)
  • Requiring the client to provide authentication data in the same HTTP Request used to perform any operation with security implications (money transfer, etc.)
  • Limiting the lifetime of session cookies
  • Checking the HTTP Referer header
  • Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls
  • Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies
  • Verifying that the request’s headers contain X-Requested-With (used by Ruby on Rails before v2.0 and Django before v1.2.5). This protection has been proven insecure under a combination of browser plugins and redirects, which can allow an attacker to provide custom HTTP headers on a request to any website, hence allowing a forged request.
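
To make the first item concrete, here is a minimal sketch of the synchronizer-token pattern in Python. The function names and the session-binding scheme are illustrative assumptions, not any particular framework’s API:

    import hmac
    import hashlib
    import secrets

    # Hypothetical per-application secret, generated once and kept server-side.
    SECRET_KEY = secrets.token_bytes(32)

    def generate_csrf_token(session_id):
        # Bind the token to the user's session so an attacker's page
        # cannot guess it or reuse one harvested elsewhere.
        return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

    def validate_csrf_token(session_id, submitted_token):
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(generate_csrf_token(session_id), submitted_token)

The server embeds generate_csrf_token(session_id) in every form it renders and rejects any state-changing request whose token fails validate_csrf_token().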

One simple method to mitigate this vector is to use a CSRF filter such as OWASP’s CSRFGuard. The filter intercepts responses, detects whether the response is an HTML document, and inserts a token into its forms (optionally inserting script to add tokens to Ajax requests as well). The filter also intercepts requests to check that the token is present. One evolution of this approach is to “double submit” cookies for users who use JavaScript. If an authentication cookie is read using JavaScript before the POST is made, JavaScript’s stricter (and more correct) cross-domain rules will be applied. If the server requires requests to contain the value of the authentication cookie in the body of POST requests or the URL of dangerous GET requests, then the request must have come from a trusted domain, since other domains are unable to read cookies from the trusting domain.
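
A hedged sketch of the server side of that double-submit check, assuming a hypothetical csrf_token cookie name; this illustrates the idea only and is not CSRFGuard’s actual code:

    from http.cookies import SimpleCookie
    import hmac

    def is_double_submit_valid(cookie_header, posted_token):
        cookies = SimpleCookie(cookie_header)
        csrf_cookie = cookies.get("csrf_token")
        if csrf_cookie is None:
            return False
        # Only pages on the trusting domain can read this cookie via
        # JavaScript, so a matching posted value implies a same-domain origin.
        return hmac.compare_digest(csrf_cookie.value, posted_token)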

Checking the HTTP Referer header to see whether the request is coming from an “authorized” page is a common tactic employed by embedded network devices because of its low memory requirements. However, a request that omits the Referer header must be treated as unauthorized, because an attacker can suppress the Referer header by issuing requests from FTP or HTTPS URLs. This strict Referer validation may cause issues with browsers or proxies that omit the Referer header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash to generate GET or POST requests with arbitrary HTTP request headers using CRLF injection; similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. To prevent forgery of login requests, sites can apply these CSRF countermeasures in the login process, even before the user is logged in. One further consideration: sites with especially strict security needs, like banks, often log users off after (for example) 15 minutes of inactivity.
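
For illustration, the strict variant of that Referer check might look like the following Python sketch; the host name and function name are assumptions for the example:

    from urllib.parse import urlparse

    TRUSTED_HOST = "www.example.com"  # hypothetical application host

    def referer_is_authorized(referer_header):
        # Strict policy: treat a missing Referer as unauthorized, since
        # attackers can cause the header to be suppressed entirely.
        if not referer_header:
            return False
        return urlparse(referer_header).hostname == TRUSTED_HOST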

Following the HTTP-specified usage for GET and POST, in which GET requests never have a permanent effect, is good practice but is not sufficient on its own to prevent CSRF. Attackers can write JavaScript or ActionScript that invisibly submits a POST form to the target domain. However, filtering out unexpected GETs prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through <script> elements (JavaScript hijacking); it also prevents (non-security-related) problems with some web crawlers as well as link prefetching.
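
Filtering unexpected GETs can be as simple as refusing any method other than POST on state-changing endpoints. A framework-agnostic Python sketch, with a hypothetical handler name:

    def handle_transfer(method, form):
        # State-changing operations accept POST only; a GET forged via an
        # image URL or link address is rejected outright.
        if method != "POST":
            return "405 Method Not Allowed"
        return "200 OK"  # the real transfer logic would run here

    print(handle_transfer("GET", {}))   # -> 405 Method Not Allowed
    print(handle_transfer("POST", {}))  # -> 200 OK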

I hope this helps when dealing with this malicious exploit. Let me know how it works out for you. Meanwhile, stay safe out there!

Malicious Exploits: Hitting the Internet Waves with CSRF, Part One

Cross-site request forgery, also known as a “one-click attack,” “session riding,” or “confused deputy attack,” and abbreviated as CSRF (sometimes pronounced “sea-surf”) or XSRF, is a type of malicious exploit of a website in which unauthorized commands are transmitted from a user that the website trusts.

Unlike cross-site scripting (XSS), which exploits the trust a user has in a particular site, CSRF exploits the trust that a site has in a user’s browser. Because it is carried out in the browser (from the user’s IP address), this attack method is quite difficult to log. A successful CSRF attack is carried out when an attacker entices a user to “click the dancing gnome,” which does some dirty gnome-ish v00d00 magic (no offense to any gnomes in the readership) on another site where the user is, or has recently been, authenticated.

As we’ll see in our video example, by tricking a user into clicking on a link, we are able to create a new administrator user, which allows us to log in at will and further our attack.

According to the United States Department of Homeland Security, the most dangerous CSRF vulnerability ranks as the 909th most dangerous software bug ever found, making this vulnerability more dangerous than most buffer overflows. Other severity metrics have been issued for CSRF vulnerabilities that result in remote code execution with root privileges, as well as for a vulnerability that can compromise a root certificate, which would completely undermine a public key infrastructure.

If that’s not enough, while typically described as a static type of attack, CSRF can also be dynamically constructed as part of a payload for a cross-site scripting attack, a method used by the Samy worm. These attacks can also be constructed on the fly from session information leaked via offsite content and sent to a target as a malicious URL, or leveraged via session fixation or other vulnerabilities, just to name a few of the creative ways to launch this attack.

Some other extremely useful and creative approaches to this attack have evolved in recent history. In 2009, Nathan Hamiel and Shawn Moyer discussed “Dynamic CSRF,” or using a per-client payload for session-specific forgery, at the BlackHat Briefings, and in January 2012 Oren Ofer presented a new vector called “AJAX Hammer” for composing dynamic CSRF attacks at a local OWASP chapter meeting.

So we know this type of attack is alive and well. What can you do about it? Stay tuned — I’ll give you the solutions tomorrow in Part Two!

Control Valuable Data By Using Maps

As the battle rages, attackers look for every angle they can leverage in order to access your data. Our team has spent countless hours discussing the importance of identifying what “valuable data” means (it is NOT the same for everyone), learning where that data lives, and understanding how it is accessed. Data flow mapping provides a useful tool that helps illustrate how data moves through any given process or system. When approaching this project in the field, we often see how compartmentalized our business processes are, as each person, department, and/or unit knows a little about the target system/process. But when we take an in-depth look, rarely does anyone understand it thoroughly! While this philosophy presents a challenge to any organization, the payoff can be priceless, especially in the case of a breach!

These maps are not only helpful to a new employee, but can also explain the system/process to an auditor or regulatory authority in a fraction of the time, and more thoroughly than most employees can. Realizing how our data is handled is vital to the next stage in protecting the data as the battlefield continually changes!

We have to focus on wrapping better controls around our valuable data. Don’t be discouraged by the challenge ahead. Instead, embrace the opportunity to help change the way the world thinks about Information Security! Nothing worth doing is ever easy, and applying this strategy to your environment won’t be either. But as we repeat the process over each facet of our organizations we become more efficient. After all, practice makes perfect!

The graphic below is what the finished product looks like. Yours will look entirely different, no doubt! Don’t focus on this map or this process, but on the underlying principle instead. By combining this with a network map, trust map, and surface map, we can create a comprehensive mechanism to provide useful, accurate intelligence that is easily parsed and processed on demand.

All Your Data Are Belong To Us!

My last post discussed some tactics for realizing what’s happening under the hood of our browsers when we’re surfing the web, and hopefully generated some thoughts for novice and intermediate users who want to browse the Internet safely. This week, we’re going to look a step beyond that and focus on steps to protect our passwords and data from unwanted visitors.

Passwords are the bane of every system administrator’s existence. Policies are created to secure organizations, but when enforced they cause people to have trouble coming up with (and keeping track of) the multitude of passwords necessary. As a result, people commonly use the same passwords in multiple places. This makes it easier on us as users because we can remember puppy123 a lot easier than we can those passwords that attackers can’t or don’t guess. Doing so also makes it easier on attackers to find a foothold, and what’s worse is that if they are able to brute force your Yahoo! email account, they now have the password to your online banking, PayPal, or insurance company login as well.

Hopefully some of you are thinking to yourselves, “Is this guy telling me I shouldn’t be using the same password for everything?” If you are, you get a gold star and you’re halfway toward a solution. For those of you who are not, either you have mastered the password problem or you still don’t care, in which case I’ll see you when our Incident Response Team is called to clean up the mess.

To solve this problem, find your favorite password manager (Google will help with this), or use what our team uses: KeePass. This is a fast, light, secure password manager that allows users to sort and store all their passwords under one master password. This enables you to use puppy123 to access your other passwords, which can be copied and pasted so you have no need to memorize those long, complex passwords. KeePass also includes a password generator. This tool lets users decide how long and what characters will make up their passwords, so you’re able to tailor passwords to meet any policy needs (whitespace, special characters, caps, etc.) and not have to think about creating something different than the last password created; the tool handles this for you.
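
As a rough illustration of what such a generator does under the hood, here is a minimal policy-aware generator in Python; this is illustrative code, not KeePass’s implementation:

    import secrets
    import string

    def generate_password(length=24, use_special=True):
        alphabet = string.ascii_letters + string.digits
        if use_special:
            alphabet += string.punctuation
        # secrets (rather than random) provides cryptographically
        # strong randomness suitable for passwords.
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # e.g. a 24-character mixed-class password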

In addition to password composition, this tool lets you decide when and if the password should expire, so you can force yourself to change it on a regular basis. This is an invaluable feature that helps minimize damage if and when a breach DOES occur. Once passwords are created, they are saved into an encrypted database file, so if your computer is lost, stolen, or breached in some other manner, the attacker will have a harder time getting to your protected password data. There are many of these solutions available in varying price ranges, but I highly recommend KeePass as a free solution that has worked really well for me for quite some time. It’s amazing how nice it is to not have to remember passwords any longer!

Okay, so our passwords are now safe, but what about the rest of our files? Local hard drive storage is a great convenience that allows us to save files to our hard drive at will. The downside is that upon breaking into our PC, an attacker has access to any file within their permission scope, which means a root user can access ALL files on a compromised file system! While full disk encryption is still gaining popularity, “on the fly” encryption products are making their mark by offering strong and flexible encryption tools that create encrypted containers for data, which can be accessed when given the appropriate password.

I have used the tool TrueCrypt for years, and it has proven to be invaluable in this arena! TrueCrypt allows users to create containers of any size, which become encrypted drives that can be accessed once unlocked. After being locked, it is highly unlikely that an attacker will successfully break the encryption to decipher the data, so if you’re using a strong password, your data is as “safe” as it can be. This tool is one of the best out there in that it offers on-the-fly and total disk encryption, as well as allowing for encryption of individual disk partitions, including the partition where Windows is installed (along with pre-boot authentication), and even allows these containers to be hidden at will.
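
To show the general shape of password-based container encryption, here is a hedged Python sketch using the third-party cryptography package; it demonstrates the concept only and is in no way TrueCrypt’s actual design:

    import os
    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password, salt):
        # Stretch the password into a key; a high iteration count slows
        # down brute-force attempts against the container.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    def encrypt_file(path, password):
        salt = os.urandom(16)
        with open(path, "rb") as f:
            token = Fernet(key_from_password(password, salt)).encrypt(f.read())
        with open(path + ".enc", "wb") as out:
            out.write(salt + token)  # keep the salt with the ciphertext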

Wow, we’ve gone through a lot together! You’re managing passwords, protecting stored data, learning what’s going on when you’re browsing the web, and becoming a human intrusion detection/prevention system by recognizing anomalies that occur in regular online activities! Visit next time as I explore updates with you to round out this series on basic user guidelines.

How to Safeguard Your Data From Hackers, Phishing Scams, and Nasty Intruders

In my last article, we discussed shedding the fears we have of the technologies we interact with by learning more about them. Building on that philosophy, we’ll venture down a rabbit hole now that we’re online and looking to browse, shop, bank, and interact safely. As society becomes increasingly reliant on the conveniences of the Internet, it will be important to know basic safety and how to identify possibly dangerous activity.

Somehow people have come to feel less and less worried about email as an attack vector in the modern arena. Unfortunately, this complacency has done us an injustice, as email attacks are still a dominant method by which attackers compromise their targets. Our penetration testing team uses email attacks on almost every engagement, and we see through our work with HoneyPoint, as well as other intelligence, that this continues to be a staple of the modern attacker’s arsenal. But what does that mean to you?

Hopefully, the average user has gotten into the habit of filtering spam, only opening email from known senders, and only opening attachments when they are known and/or expected. But are we seeing the possible danger in an email from support@mycompany.com or human.resources@mycompany.com when we have only ever received email from techsupport@mycompany.com or humanresources@mycompany.com? Attackers spend a lot of time doing their homework and finding trust relationships to exploit in obscure ways such as these. If in doubt about the source of an email, send a separate email to the sender to verify it.
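
As a toy illustration of spotting those near-miss addresses, difflib in Python can flag senders that are close to, but not exactly, a known address; the addresses and similarity threshold are assumptions for the example:

    import difflib

    KNOWN_SENDERS = {"techsupport@mycompany.com",
                     "humanresources@mycompany.com"}

    def looks_like_spoof(sender):
        if sender in KNOWN_SENDERS:
            return False
        # Close-but-inexact matches are exactly the trust relationships
        # attackers try to exploit.
        return bool(difflib.get_close_matches(sender, KNOWN_SENDERS, cutoff=0.8))

    print(looks_like_spoof("support@mycompany.com"))  # True: suspicious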

Browsing the Internet is fun, entertaining, and often necessary. Web browsers are also a ripe playground for nefarious activity, which means the more risky places you visit, the bigger the chance that you’ll face some sort of danger. First, as with all software, we need to be running a fully patched deployment of the latest stable version of the browser. Here is one of many statistical breakdowns of browser security for review, which should make a user consider which web browser they want to use. Internet Explorer controls a majority of the market simply because it is packaged with Windows, but the other options are stable, smooth, and less of a target, making a successful attack less likely.

In addition to being compromised simply by using a weak browser, we must also be aware of where we browse and look for oddities when we surf. Look at the URLs in the browser’s address bar; hover over links to see where they direct, and then ensure that’s where you end up; and realize that the pop-up browser window (telling you the machine is infected with a crazy number of infections and must be dealt with NOW) is a browser window, not a legitimate warning from your antivirus solution (you ARE running AV, right?). After all, modern browsers still struggle with BROWSING properly; we can’t expect them to properly provide AV coverage too!

While browsing safely is a much deeper topic than we have space to cover in this post, one last activity we’ll discuss is online banking. Banks, for the most part, do a good job protecting us while providing online service. Individual users must still run a tight ship to keep their attack surfaces as small as possible. First off, change your banking passwords regularly. I know this sounds like a pain in the backside, but it’s worth it; I promise my next post will discuss how to manage this with ease. Secondly, review your account often, looking for discrepancies (if you want details on the plethora of fraud I’ve encountered doing this, contact me on Twitter). And finally, log off. Most banking web applications are designed to properly terminate your session upon logging off, which prevents any issues with stale sessions that might be hijacked by an attacker.

Embrace the conveniences that technology provides, but do so with a sharp mind and open eyes. Following these few basic tips will help build the skills that become second nature to a wise and seasoned traveler on the Information Super Highway!

How to Safely Use a PC and the Internet: Fear Them No More!

As the MicroSolved team strives to bring quality service to our clients, we also make every effort to educate the masses and try to contribute not only to the Info Sec community, but to the “average Joe” out there trying to bank online, check email, or use Facebook without sacrificing their digital security or personal identity.

It’s human nature to fear the unknown. We don’t like to deal with things we don’t understand. Once upon a time, it might have been OK to just avoid what we didn’t know, but today’s world is becoming more and more reliant on machines, computers, and the Internet. A person used to be able to go through life without knowing how to work with technology; today this is becoming more difficult. People use computers at work, at home, and at the store. Children are required to do papers, reports, and projects on a computer. It’s not something that can be easily circumvented any longer.

This being said, it is time to STOP fearing these things. The only way to do it is to face the fear. Realize that machines only do what they’re told; you just need to know how to give the proper orders. Computers are dumb. They’re basically a digital filing cabinet that holds files with digital instructions on them. They can be manipulated to the will of the user and can be helpful tools once the apprehension subsides. Take a basic course on how to use a PC and the Internet; they’re not costly and should be readily available. If you have trouble finding one, ask around. Many libraries and community centers offer basic introduction courses for free or at low cost. You don’t need to be a Windows Jedi or a Linux Guru to operate these machines.

The Internet is a staggering creation of man. Nearly everything in the world can be accessed in some form online. Learn what a web browser is, what it does, how to operate it, and how it should behave. Learn to pay attention to how your browser acts when surfing and how commonly visited pages act. When something changes, don’t dismiss it! These changes can indicate unsafe conditions and should not be ignored. Using the Internet is a responsibility, and users need to be aware when they’re online.

Over the coming weeks, the MicroSolved team will be working to create blog and video content focused on educating end users to keep them safe while surfing the web. If you have a topic you’d like to see covered, contact us! We’re always excited to hear from you.

Review of darkjumper v5.7

In continuing our research and experimentation with PHP and the threat of Remote File Inclusion (RFI), our team has been seeking out and testing various tools that have been made available to help identify websites that are vulnerable to RFI during our penetration tests. Because we’re constantly finding more tools to add to the list, we’ve started the evaluation this week with the release of darkjumper v5.7. This python tool prides itself on being cross-platform and, at first glance, seems rather easy to use. After downloading the tarball and extracting the files, simply calling the script from the command line brings it to life.

Running it again with the --help or -h switch will print the options menu. This tool has several helpful options that could expedite the discovery of various attack vectors against a website. The injection switch incorporates a full barrage of SQLi and blind SQLi attempts against every website identified on the target server. We did not use this option for this evaluation but intend to test it thoroughly in the future.

Using the inclusion switch will test for both local file inclusion (LFI) and RFI, again on every website identified on the target. This is our main focus for the evaluation, since we’ve seen an incredible number of RFI attacks in the recent HITME data from around the globe. Selecting the full switch will attack the target server with the previously mentioned checks, in addition to scanning cgi directories, user enumeration, port scanning, header snatching, and several other possibly useful options. While a full review of this tool will be written eventually, we’re focusing on the RFI capabilities this time, so we’re running this test only against our test target. The test appears quite comprehensive. Another seemingly useful function of this tool is its ability to discover virtual hosts that live on the target server. After a short wait, darkjumper works its magic and spits out several files with various information for us to review. After poring through these files, our team was disappointed to realize that there were URLs pointing to this server that seem to have been missed by the tool’s scans. Even more disappointing is the fact that of the 12 target sites identified by the tool, none was the target we had suspected of being vulnerable to RFI.

File inclusion is a real threat in the wild today. We are seeing newly vulnerable and compromised hosts on a regular basis in the HITME data, and the facts that Apache ships with a default configuration that is vulnerable to these attacks and that PHP is inherently insecure make the battle even more intense. It is absolutely critical in this environment that we harden our servers before bringing them online, and that those of us developing web applications validate every bit of information submitted to us by our users! Allowing our servers to execute code from an unknown source is one of the most popular attack vectors today, from SQL injection to XSS and XSRF to RFI. The Internet continues to be a digital equivalent of the wild, wild west, where outlaws abound. There is no guarantee that the users who interface with our sites are who they say they are or that they have the best of intentions. It is up to us to control how our applications and servers handle this data.
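
One concrete way to apply that validation advice is to resolve user input against an allowlist instead of passing it to an include. A minimal Python sketch (the page names are hypothetical; the same pattern applies to PHP’s include()):

    # Map of acceptable page names to the files they load; anything
    # else is rejected rather than fetched or executed.
    TEMPLATES = {"home": "home.html", "contact": "contact.html"}

    def resolve_template(user_value):
        if user_value not in TEMPLATES:
            raise ValueError("unknown page requested")
        return TEMPLATES[user_value]

    print(resolve_template("home"))                      # home.html
    # resolve_template("http://evil.example/shell.txt")  # raises ValueError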

SQL Injection Tools in the Field

As the Internet continues to morph, common attack vectors change. Info Sec professionals once had the ease of scanning a network and leveraging available vulnerabilities to gain a foothold, but now we’re seeing a paradigm shift toward web applications and the security that protects them. I’m sure this is nothing new to our readers! We all see the application as an emerging favorite for gaining access to the network, just as we’re seeing the web browser gaining popularity in targeting the end user and workstation.

As our Team continues to provide top-notch application assessment services, we’re seeing SQL injection (SQLi) as one major vector to take advantage of. Unfortunately, this attack is quite time-consuming, considering the various ways developers code their queries, the architecture involved, and how the application handles interactions with the database. In an effort to be more efficient in the quest for vulnerable query strings, we have aggressively tested the plethora of SQLi tools that have been publicly released. Initially, the Team hoped to evaluate these tools and provide an extensive review of the performance of each. This tech is sad to report that of the three tools tested recently, not one was successful in the endeavor.

After some discussion, the Team concluded there are simply too many variables in play for one tool to serve as “the silver bullet.” The language and structure of the queries are just a few of the challenges these tools face when sniffing out vulnerable SQL strings. With so many variables for attackers and penetration testers to overcome, SQL injection testing has become extremely difficult to automate reliably! That being said, it appears as if these tools are created for use in such specific circumstances that they’re rendered useless for anything but that one specialized scenario. So we’re continuing to find this to be a long, drawn-out, manual process. This is not a complaint; our Team loves the challenge! It’s just difficult to find a SQLi tool that can adapt to uses other than that for which it was specifically created, which is a common source of frustration when trying to expedite the process and finding little success.
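
A toy example of that variability, using Python’s built-in sqlite3: the first query concatenates input and is injectable, while the second binds it as a parameter, and a scanner has to tell such constructions apart from the outside:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

    payload = "nobody' OR '1'='1"

    # Vulnerable: the payload rewrites the query and returns every row.
    rows = conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % payload).fetchall()
    print(rows)  # [('alice',), ('bob',)]

    # Safe: parameter binding keeps the payload as literal data.
    rows = conn.execute(
        "SELECT name FROM users WHERE name = ?", (payload,)).fetchall()
    print(rows)  # []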

SKIPFISH Review

This week, our team had the opportunity to test Google’s recently released web application scanner known as SKIPFISH. Touted as an active reconnaissance tool, SKIPFISH claims to present an interactive site map for a targeted site by performing a myriad of recursive crawls and discretionary-based probes. The map is then annotated with the output of several active security checks which are designed to be non-disruptive. SKIPFISH isn’t a replacement for Nessus, Nikto, or any other vulnerability scanner which might own your allegiance. Instead, this tool hopes to supplement your current arsenal.

SKIPFISH boasts high performance: “500+ requests per second against responsive Internet targets, 2000+ requests per second on LAN / MAN networks, and 7000+ requests against local instances have been observed, with a very modest CPU, network, and memory footprint.” To that end, the test used for our evaluation saw a total of more than 9 million HTTP requests over 3 days using the default dictionary included with the tool. While this test was conducted, there was no interruption of the target site, although response times did increase dramatically.

The scan’s result is a huge directory of files that are fed into index.html. When opened in a web browser, this report turns out to be easily readable and comes with pointy-clicky goodness, thanks to a plethora of JavaScript (so be sure you’re allowing it to run). The report lists each page that was interrogated during the scan and documents server responses (including errors and successful replies), identifies possible attack vectors (such as password entry fields to brute force), and notes other useful tidbits for each. Following the breakdown by page, SKIPFISH provides a list of document types (HTML, JS, PDF, images, and various text formats) and their URLs. The report closes with an overview of various issues discovered during the scan, complete with severity ratings and the URL of each finding.

All in all, this tool has potential. It’s certainly not going to replace any of the other tools in our Web Application Assessment toolkit, but it is a good supplement and will most likely be added to give more information going forward. It is very user friendly, despite the time it took to scan the target site with the default dictionary. This in itself tells our team more testing is necessary, not to mention the fact that there are several options that can enhance the tool’s functionality. With the sheer number of exploits and attack vectors available in web applications today, it can never hurt to get a different look at an application using a number of tools. And in this tech’s opinion, redundancy is good in that it shows the stability of our findings across the board.