SSL Certificate High-Level Best Practices

SSL/TLS certificates are an essential part of online security. They enable the encryption that protects information such as credit card numbers and passwords from interception, and they help assure customers that they can trust the site and its content.

Almost 50% of the top one million websites use HTTPS by default (they redirect requests for HTTP pages to HTTPS URLs) (comodosslstore.com). As such, even pages that don’t deal with confidential data are being deployed over SSL. The certificates that power the encryption are available from a variety of commercial providers, as well as the free Let’s Encrypt project (https://letsencrypt.org). No matter where you get your certificate, here are a few high-level best practices.

Trust Your Certificate Provider

Since certificates provide the basis for your site’s cryptography, their source matters. You can find a list of trusted certificate providers here: https://www.techradar.com/news/best-ssl-certificate-provider. Beware of commercial providers not found on such lists, as some of them may be sketchy at best or dangerous at worst. Remember, the Let’s Encrypt project mentioned above is also highly trusted, even though it is not a commercial firm.

Manage Versions and Algorithms

Make sure you disable SSL (all versions) and TLS 1.0 on the server; these protocol versions have known vulnerabilities. If possible, and if there is no impact on your users, consider disabling TLS 1.1 and 1.2 as well. TLS 1.3 fixes many of the known issues with the protocol and supports only known-secure algorithms.
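
As a minimal sketch of this idea in Python (assuming you terminate TLS in application code; if you use a web server such as Apache or nginx, the equivalent is a protocol directive in the server configuration), the standard library’s ssl module can enforce a minimum protocol version. The certificate and key file names are placeholders.

```python
import ssl

# Server-side TLS context that refuses SSLv3, TLS 1.0 and TLS 1.1.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # raise to TLSv1_3 if your clients allow it

# Placeholder certificate and key paths; substitute your own files.
context.load_cert_chain(certfile="example.crt", keyfile="example.key")
```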

In cryptography, cipher suites play an important part in securing connections: they determine which algorithms are used for key exchange, bulk encryption and message authentication. You shouldn’t be using an old version of a cryptographic protocol if a newer one is available; otherwise, you may put your site’s security at risk. Using secure cipher suites that support 128-bit (or stronger) encryption is crucial for protecting sensitive client communications.

Diffie-Hellman key exchange has been shown to be vulnerable when weak key sizes are used; however, there is no known practical attack against stronger keys such as 2048 bits. Make sure you use the strongest settings your server and clients will support.
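
Continuing the hedged Python sketch above, the same ssl context can be restricted to strong cipher suites; the OpenSSL-style cipher string below is one illustrative choice, not an authoritative recommendation.

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Prefer ephemeral (ECDHE) key exchange with AES-GCM or ChaCha20,
# and explicitly exclude legacy/weak suites. Illustrative string only.
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5:!3DES:!RC4")
# Note: TLS 1.3 suites are configured separately by OpenSSL and are already strong.
```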

Manage and Maintain Certificate Expiration

As of Sept. 1, 2020, Apple’s Safari browser will no longer trust certificates with validity periods longer than 398 days, and other browsers are likely to follow suit. Shorter validity periods shrink the window in which compromised or bogus certificates can be exploited. They also mean that certificates using retired encryption algorithms or protocols will be replaced sooner. (searchsecurity.techtarget.com)

Maintain a spreadsheet or database of certificate expiration dates for each relevant site, and check it frequently so expiring certificates don’t surprise your users with browser error messages. Better still, use an application or certificate management platform that alerts you well ahead of upcoming expirations so you can plan accordingly. Best of all, where possible, embrace tools and frameworks that automate certificate management and rotation, which makes expiration issues far less likely. Most popular web servers and frameworks now have tools and plugins available to handle this for you.
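
If you don’t yet have a management platform, even a small script can serve as an early warning. The sketch below, assuming Python and the standard library only, connects to a host, reads the certificate’s expiration date and flags anything expiring within 30 days; the host names are placeholders for your own inventory.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days until the server's certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for site in ["example.com", "www.example.org"]:  # placeholder inventory
        remaining = days_until_expiry(site)
        status = "WARNING" if remaining <= 30 else "OK"
        print(f"{status}: {site} certificate expires in {remaining} days")
```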

Protect Your Certificates and Private Keys

Remember that your certificate is not only a basis for cryptography, but also a source of identification and reputation. As such, make sure that all certificates are stored securely in trusted locations, that web users cannot access the private certificate files, and that you have adequate backup and restore processes in place.
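
As one small, related check (a sketch assuming a Unix-like host and Python; the path is a placeholder), you can verify that a key or certificate file is not readable by anyone other than its owner.

```python
import os
import stat

KEY_PATH = "/etc/ssl/private/example.key"  # placeholder path

mode = os.stat(KEY_PATH).st_mode
if mode & (stat.S_IRWXG | stat.S_IRWXO):
    print(f"WARNING: {KEY_PATH} is accessible to group/other users; tighten its permissions")
else:
    print(f"OK: {KEY_PATH} is restricted to its owner")
```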

Make sure that you also protect the private keys used in certificate generation. Generate them offline if possible, protect them with strong passphrases, and store them in a secure location. Generate a new private key for each certificate and each renewal cycle.
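
As a hedged illustration using the third-party cryptography package (the key size, file name and passphrase handling are assumptions to adapt to your own policy), here is one way to generate a new RSA private key and store it encrypted; the openssl command-line tool can do the same job if you prefer.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a fresh 2048-bit RSA key; do this on a trusted, ideally offline, machine.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Write the key PEM-encoded and encrypted with a strong passphrase.
passphrase = b"use-a-long-unique-passphrase"  # placeholder: load from a secrets store instead
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.BestAvailableEncryption(passphrase),
)
with open("example.key", "wb") as handle:
    handle.write(pem)
```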

Revoke your certificate or keys as quickly as possible if you believe they have been compromised.

Following these best practices will go a long way toward making your SSL certificate processes safer and more effective. Doing so protects your users, your reputation and your websites. Make sure you check back with your certificate provider often, and follow any additional practices they suggest.

Drupal Security Best Practices Document

This is just a quick post to point to a great guide on Drupal security best practices that we found recently. 

It was written for the Canadian government and is released under the Open Government Licence. 

The content is great and it is available free of charge. 

If your organization uses Drupal, you should definitely check it out and apply the guidance as a baseline! 

Operation Lockdown Update ~ Xojo Web App Security

Just a quick note today to bring you up to date on Operation Lockdown. As many of you may know, MSI began working with Xojo, Inc. a year or so ago, focusing on increasing the security of the web applications coded in the language and produced by their compiler. We gave a talk about the project and the progress we had made last year at XDC in Orlando. 

Today, I wanted to mention that we have again begun working on OpLockdown, and we remain focused on the stand-alone web applications generated by Xojo. 

Last week, Xojo released Xojo 2014R3, which contains a great many fixes resulting from the project and our work.

The stand-alone web apps now use industry-standard HTTP headers (this has been true for the last couple of releases) and have the ability to do connection logging that will meet the compliance requirements of most regulatory guidelines.

Additionally, several denial-of-service conditions and non-RFC standard behaviors have been fixed since the project began.

My team will begin doing regression testing of the security issues we previously identified and will continue to seek out new vulnerabilities and other misbehaviors in the framework. We would like to extend our thanks to the folks at BKeeney Software who have been helping with the project, and to Xojo for their attention to the security issues, particularly to Greg O’Lone, who has been our attentive liaison and tech support. Together, we are focused on bringing you a better, safer and more powerful web application development platform so that you can keep making the killer apps of your dreams!

Three Tough Questions with Aaron Bedra

This time I interviewed Aaron Bedra about his newest creation, Repsheet. Check out his answers below:


Aaron’s Bio:

Aaron is the Application Security Lead at Braintree Payments. He is the co-author of Programming Clojure, 2nd Edition, as well as a frequent contributor to the Clojure language. He is also the creator of Repsheet, a reputation-based intelligence and security tool for web applications.


Question #1:  You created a tool called Repsheet that takes a reputational approach to web application security. How does it work and why is it important to approach the problem differently than traditional web application firewalling?

I built Repsheet after finding lots of gaps in traditional web application security. Simply put, it is a web server module that records data about requests, and either blocks traffic or notifies downstream applications of what is going on. It also has a backend to process information over time and outside the request cycle, and a visualization component that lets you see the current state of the world. If you break down the different critical pieces that are involved in protecting a web application, you will find several parts:

* Solid and secure programming practices

* Identity and access management

* Visibility (what’s happening right now)

* Response (make the bad actors go away)

* HELP!!!! (DDoS and other upstream based ideas)

* A way to manage all of the information in a usable way

This is a pretty big list. There are certainly some things on this list that I haven’t mentioned as well (crypto management, etc), but this covers the high level. Coordinating all of this can be difficult. There are a lot of tools out there that help with pieces of this, but don’t really help solve the problem at large.

The other problem I have is that although I think having a WAF is important, I don’t necessarily believe in using it to block traffic. There are just too many false positives and things that can go wrong. I want to be certain about a situation before I act aggressively towards it. This being the case, I decided to start by simply making a system that records activity and listens to ModSecurity. It stores what has happened and provides an interface that lets the user manually act based on the information. You can think of it as a half baked SIEM.

That alone actually proved to be useful, but there are many more things I wanted to do with it. The issue was doing so in a manner that didn’t add overhead to the request. This is when I created the Repsheet backend. It takes in the recorded information and acts on it based on additional observation. This can be done in any form and it is completely pluggable. If you have other systems that detect bad behavior, you can plug them into Repsheet to help manage bad actors.  

The visualization component gives you the detailed and granular view of offenses in progress, and gives you the power to blacklist with the click of a button. There is also a global view that lets you see patterns of data based on GeoIP information. This has proven to be extremely useful in detecting localized botnet behavior.

So, with all of this, I am now able to manage the bottom part of my list. One of the pieces that was recently added was upstream integration with Cloudflare, where the backend will automatically blacklist via the Cloudflare API, so any actors that trigger blacklisting will be dealt with by upstream resources. This helps shed attack traffic in a meaningful way.

The piece that was left unanswered is the top part of my list. I don’t want to automate good programming practices. That is a culture thing. You can, of course, use automated tools to help make it better, but you need to buy in. The identity and access management piece was still interesting to me, though. Once I realized that I already had data on bad actors, I saw a way to start to integrate this data that I was using in a defensive manner all the way down to the application layer itself. It became obvious that with a little more effort, I could start to create situations where security controls were dynamic based on what I know or don’t know about an actor. This is where the idea of increased security and decreased friction really set in, and I saw Repsheet become more than just a tool for defending web applications.

All of Repsheet is open sourced with a friendly license. You can find it on Github at:

https://github.com/repsheet

There are multiple projects that represent the different layers that Repsheet offers. There is also a brochureware site at http://getrepsheet.com that will soon include tutorial information and additional implementation examples.

Question #2: What is the future of reputational interactions with users? How far do you see reputational interaction going in an enterprise environment?

For me, the future of reputation based tooling is not strictly bound to defending against attacks. I think once the tooling matures and we start to understand how to derive intent from behavior, we can start to create much more dynamic security for our applications. If we compare web security maturity to the state of web application techniques, we would be sitting right around the late 90s. I’m not strictly talking about our approach to preventing breaches (although we haven’t progressed much there either), I’m talking about the static nature of security and the impact it has on the users of our systems. For me the holy grail is an increase in security and a decrease in friction.

A very common example is the captcha. Why do we always show it? Shouldn’t we be able to conditionally show it based on what we know or don’t know about an actor? Going deeper, why do we force users to log in? Why can’t we provide a more seamless experience if we have enough information about devices, IP address history, behavior, etc? There has to be a way to have our security be as dynamic as our applications have become. I don’t think this is an easy problem to solve, but I do think that the companies that do this will be the ones that succeed in the future.

Tools like Repsheet aim to provide this information so that we can help defend against attacks, but also build up the knowledge needed to move toward this kind of dynamic security. Repsheet is by no means there yet, but I am focusing a lot of attention on trying to derive intent through behavior and make these types of ideas easier to accomplish.

Question #3: What are the challenges of using something like Repsheet? Do you think it’s a fit for all web sites or only specific content?

I would like to say yes, but realistically I would say no. The first group this doesn’t make sense for is sites without a lot of exposure or potential loss. If you have nothing to protect, there is no reason to go through the trouble of setting up these kinds of systems. They basically become a part of your application infrastructure, and it takes dedicated time to make them work properly. Along those lines, static sites with no users and no real security restrictions don’t necessarily see the full benefit. That being said, there is still a benefit from visibility into what is going on from a security standpoint, and it can help spot events in progress or even pending attacks. I have seen lots of interesting things since I started deploying Repsheet, even botnets sizing up a site before launching an attack. Now that I have seen that, I have started to turn it into an early warning system of sorts to help prepare.

The target audience for Repsheet is companies that have already done the web security basics and want to take the next step forward. A full Repsheet deployment involves WAF and GeoIP-based tools as well as changes to the application under the hood. All of this requires time and people to make it work properly, so it is a significant investment. That being said, the benefits of visibility, response to attacks and dynamic security are a huge advantage. Like every good investment in infrastructure, it can set a company apart from others if done properly.

Thanks to Aaron for his work and for spending time with us! Check him out on Twitter, @abedra, for more great insights!

Quick PHP Malware vs AV Update

It’s been a while since I checked on the status of PHP malware versus anti-virus. So, here is a quick catch up post. (I’ve been talking about this for a while now. Here is an old example.)

I took a randomly selected piece of PHP malware from the HITME and checked it out this afternoon. Much to my surprise, the malware detection via AV has gotten better.

The malware I grabbed for the test turned out to be a multi-stage PHP backdoor. The scanner thought it was exploiting a vulnerable WordPress installation. 

I unpacked the malware into plain text and submitted both the original packed version from the log and the unpacked version to VirusTotal for detection testing. As you may know, in the past, detection of PHP malware was often in the low single digits. That, at least to some extent, has changed. For those interested, here are the links to see what was tripped.

Decoded to plain text vs Encoded, as received

As you can see, the version decoded to plain text scored a detection rate of 44% (19/43), which is significantly improved from a year or so ago. Even more encouraging, the undecoded attack in its raw form triggered a detection rate of 30% (13/44)! The undecoded result is HUGE, given that the same test a year or so ago often yielded 0-2% detection rates. So, it’s getting better, just SLOWLY.

Sadly, though, even with the improvements, we are still well below a 50% detection rate, and many of the AV solutions that fail to catch the PHP malware are big-name vendors with commercial products that organizations running PHP in commercial environments are likely depending on. Is your AV in the missing zone? If so, you might want to consider other, more nuanced forms of detection.

Now, obviously, organizations aren’t just depending on AV alone for detection of web malware. But, many may be. In fact, a quick search for the dropped backdoor file on Google showed 58,800 systems with the dropped page name (a semi-unique indicator of compromise). With that many targets already victim to this single variant of PHP backdoors, it might be worth checking into if you are a corporate PHP user.
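
If you want to do a quick local check of your own servers, a sketch like this one (Python; the web root and file name are hypothetical stand-ins, since the actual indicator isn’t reproduced here) walks a web root looking for a suspicious dropped file.

```python
from pathlib import Path

WEB_ROOT = Path("/var/www")                   # adjust to your environment
SUSPICIOUS_NAMES = {"dropped-backdoor.php"}   # hypothetical stand-in for the real indicator

for path in WEB_ROOT.rglob("*.php"):
    if path.name in SUSPICIOUS_NAMES:
        print(f"Possible indicator of compromise: {path}")
```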

Until next time, take a look around for PHP in your organization. It is a commonly missed item in patch and update cycles. It also presents a pretty wide attack surface, with a long list of known attack tools and common vulnerabilities in the coding patterns used by many popular products. Give any PHP servers you have a deeper inspection and consider adding more detection capability around them. As always, thanks for reading and stay safe out there! 

Quick Thought on CSRF Attacks

Yesterday, I listened to @Grap3_Ap3 present at the Columbus OWASP local chapter on Cross Site Request Forgery (CSRF). While this attack has been around since 2001, it continues to show a strong presence in web applications across a range of platforms. Phil spent a lot of his time talking about content management systems on the public Internet, but we have seen CSRF very widely exploitable on embedded devices.

Embedded devices, often equipped with rather rudimentary web servers and applications for management, have proven to be a searing hot pain point for CSRF in our research. While that isn’t shocking or new, I definitely see an interesting and potentially dangerous collision between the growth of the “Internet of Things” and web vulnerabilities. Today, some of these platforms are toys, or novelty tools built into home appliances. BUT, the future internetworking of our devices and our physical lives means that these web controls will eventually have larger impacts on our day-to-day lives.

What happens when a CSRF attack can be used to trick your teenager into clicking on a picture on the web that, while they view it, also executes a command to raise the temperature of your refrigerator to unsafe levels? Or when an embedded link in an email tricks you into a click that turns your oven on to its super-hot cleaning mode without your knowledge? Sound like a prank? Maybe. Extend it to thermostats, home automation and consumer control over alternative energy systems like solar panels, and it might take a new form.
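
Attacks like these work because the browser happily sends a state-changing request on the victim’s behalf. The classic defense is for the application, even an embedded one, to require a secret per-session token on every state-changing request that a forged image or link cannot supply. Here is a minimal, framework-agnostic Python sketch of that check; the session dictionary and function names are hypothetical.

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Create a per-session CSRF token and remember it server-side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in forms or headers served to the browser

def is_request_authorized(session: dict, submitted_token: str) -> bool:
    """Reject any state-changing request that lacks the expected token."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(expected, submitted_token)
```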

We are on a course of collision. Our inattention to information security and the exploding complexity and technology dependencies will soon come together in ways that may surprise us. Ignore the hyperbole, but think about it rationally. Isn’t it time we worked with organizations who make products to demand an increase in protection from some of these basic known attacks? In the future, consumers and organizations alike will vote with their dollars. How will you spend yours?

Ask The Experts: Getting Started with Web App Security

Question from a reader: What should I be paying attention to the most with regard to web applications? My organization has a number of Internet-facing web applications, but I don’t even know where to start to understand what the risks and exposures might be.

Adam Hostetler responds:

The first thing I would do is identify what the applications are. Are they applications developed in house, or are they something like WordPress or another framework? What kind of information do they store (email addresses, PII, etc.)? If they are in-house or vendor applications, have they been assessed before? With a little knowledge of the applications, you can start building an understanding of what the risks might be. A great resource for web application risks is the OWASP project: https://www.owasp.org

Phil Grimes adds:

When it comes to web applications, I always promote a philosophy that I was raised on and continue to pound into my kids’ heads today: trust, but verify. When an organization launches an Internet-facing application, there is an immediate loss of control on some level. The organization doesn’t know that the users accessing the application are who they say they are, or that their intentions are “normal”. Sure, most people who encounter the app will either use it as intended, or, if they access it inadvertently, simply mosey on about their merry way. But when a user starts poking around the application, we have to rely on the development team to have secured it. Making sure identity management is handled properly helps ensure our users are who they say they are, and validating all data that a user might pass to the application becomes an integral part of security, so that possible attacks are recognized and thwarted.
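
As a small, hedged illustration of the “validate everything” point, an allow-list approach in Python might look like the sketch below; the field rules are hypothetical examples, not a complete validation framework.

```python
import re

# Hypothetical allow-list rules: each field must match its pattern exactly.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_.-]{3,32}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "quantity": re.compile(r"^[0-9]{1,4}$"),
}

def validate(form: dict) -> dict:
    """Return only the fields that pass validation; reject anything unexpected."""
    clean = {}
    for name, value in form.items():
        rule = FIELD_RULES.get(name)
        if rule is None:
            raise ValueError(f"Unexpected field: {name}")
        if not rule.fullmatch(value):
            raise ValueError(f"Invalid value for field: {name}")
        clean[name] = value
    return clean
```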

John Davis comments:

I would say that the most important thing is to ensure that your Internet-facing web applications are coded securely. For some time now, exploiting coding weaknesses in web applications has been one of the leading attack vectors cyber criminals use to compromise computer networks. For example, poor coding can allow attackers to perform code injection and cross-site scripting attacks against your applications. The Open Web Application Security Project (OWASP), which is accessible on the Internet, is a good place to learn more about secure web application coding techniques. Their website contains lots of free tools and information that will help your organization in this process. There are also professional information security organizations (such as MicroSolved) that can provide your organization with comprehensive application security assessments.

As always, thanks for reading and let us know if you have questions for the experts.

Columbus OWASP Meeting Presentation

Last week, I presented at the Columbus OWASP meeting on defensive fuzzing, tampering with production web applications as a defensive tactic and some of the other odd stuff we have done in that arena. 

The presentation was called “Hey, You Broke My Web Thingee :: Adventures in Tampering with Production” and I had a lot of fun giving the talk. The crowd interaction was excellent, and a lot of folks have asked for the slide deck from the talk, so I wanted to post it here.

If you missed the talk in person, feel free to reach out on Twitter (@lbhuston) and engage with me about the topics. I’d love to discuss them some more. Please support OWASP by joining it as a member. These folks do a lot of great work for the community and the local chapter is quite active these days! 

OWASP Talk Scheduled for Sept 13 in Columbus

I have finally announced my Columbus OWASP topic for the 13th of September (Thursday). I hope it turns out to be one of the most fun talks I have given in a long while. I am really excited about the chance to discuss some of this in public. Here’s the abstract:

Hey, You Broke My Web Thingee! :: Adventures in Tampering with Production

Abstract:
The speaker will tell a few real world stories about practical uses of his defensive fuzzing techniques in production web applications. Examples of fighting with things that go bump in the web to lower deployment costs, unexpected application errors and illicit behavior will be explained in some detail. Not for the “play by the book” web team, these techniques touch on unconventional approaches to defending web applications against common (and not so common) forms of waste, fraud and abuse. If the “new Web” is a thinking admin’s game, unconventional wisdom from the trenches might just be the game changer you need.

You can find out more about attending here. Hope to see you in the crowd!

PS – I’ll be sharing the stage with Jim Manico from White Hat Security, who is always completely awesome. So, come out and engage with us!

Yandex.ru Indexing Crawler Issues

The yandex.ru crawler is an indexing application that spiders hosts and feeds the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine’s database. Usually, these activities cause few issues for the sites being indexed, and in fact, over the years an etiquette system based on rules placed in a site’s robots.txt file has emerged.

Robots.txt files provide a rule set for search engine behavior. They indicate which areas of a site a crawler may index and which sections are to be avoided. Usually this is used to protect overly dynamic areas of the site, where a crawler could encounter a variety of problems or inputs that cause bandwidth or application issues for the crawler, the web host or both. 
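
For example, a well-behaved crawler fetches and honors those rules before requesting pages; Python’s standard urllib.robotparser module does exactly that. The site and user-agent below are placeholders.

```python
from urllib import robotparser

# A polite crawler fetches robots.txt first and honors it before requesting pages.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder site
rp.read()

if rp.can_fetch("ExampleBot", "https://www.example.com/private/report.cgi"):
    print("robots.txt allows this URL")
else:
    print("robots.txt disallows this URL; a polite crawler skips it")
```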

Sadly, many web crawlers and indexing bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack purposes. Given the impact these indexing tools can have on bandwidth, CPU use and database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about yandex.ru and its aggressive parsing, application interaction and deep site inspection techniques. It has clearly been identified as a search engine that does not seem to respect the honor system of robots.txt. A Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, the crawler either never requested the robots.txt file at all or simply ignored its contents. In fact, some of our HITME web applications have experienced the same high traffic cost concerns that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting yandex.ru’s scans represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook, and a pain in our attention span, since we continually have to parse their alert traffic out of our metrics.

Techniques for blocking yandex.ru more forcibly than robots.txt have emerged. You can learn about some of them by searching for “blocking yandex.ru”. The easiest approach, and one that has proven effective, is to use .htaccess rules. We’ve also had some modest success with forcibly returning redirects to requests containing known URL parameters associated with yandex.ru, along with some success blocking specific IPs associated with the crawler via an ignore rule in HoneyPoint.
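
The .htaccess approach happens in the web server itself. As a hedged, application-level alternative, the same denial logic can be sketched as a small WSGI middleware in Python; the user-agent substring and IP addresses below are illustrative assumptions, not a vetted block list.

```python
BLOCKED_UA_SUBSTRINGS = ("yandex",)          # illustrative substring match
BLOCKED_IPS = {"192.0.2.10", "192.0.2.11"}   # illustrative TEST-NET addresses

class CrawlerBlocker:
    """WSGI middleware that refuses requests from unwanted crawlers."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "").lower()
        remote_ip = environ.get("REMOTE_ADDR", "")
        if remote_ip in BLOCKED_IPS or any(s in user_agent for s in BLOCKED_UA_SUBSTRINGS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return self.app(environ, start_response)
```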

If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!