Operation Lockdown Update ~ Xojo Web App Security

Just a quick note today to bring you up to date on Operation Lockdown. As many of you may know, MSI began working with Xojo, Inc. a year or so ago, focusing on increasing the security of the web applications coded in the language and produced by their compiler. As part of that work, we gave a talk last year at XDC in Orlando about the project and the progress we had made.

Today, I wanted to mention that we have resumed work on OpLockdown, and that we remain focused on the stand-alone web applications generated by Xojo.

Last week, Xojo released Xojo 2014R3, which contains a great many fixes stemming from the project and our work.

The stand-alone web apps now use industry-standard HTTP headers (as they have for the last couple of releases) and can perform connection logging that meets the compliance requirements of most regulatory guidelines.
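
If you want to eyeball the headers your own deployment returns, a short script like the sketch below will do it (a generic example using Python and the requests library; the URL and the specific headers checked are illustrative assumptions on my part, not a list confirmed by Xojo):

```python
# Quick sketch: dump and sanity-check the HTTP response headers of a web app.
# The headers listed are common industry-standard security headers; adjust
# the list to match your own policy.
import requests

COMMON_SECURITY_HEADERS = [
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Content-Security-Policy",
    "Strict-Transport-Security",
]

def check_headers(url):
    resp = requests.get(url, timeout=10)
    for name, value in resp.headers.items():
        print(f"{name}: {value}")
    missing = [h for h in COMMON_SECURITY_HEADERS if h not in resp.headers]
    if missing:
        print("Missing security headers:", ", ".join(missing))

if __name__ == "__main__":
    check_headers("https://example.com/")  # point at your own app
```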

Additionally, several denial-of-service conditions and non-RFC standard behaviors have been fixed since the project began.

My team will begin regression testing the security issues we previously identified and will continue to seek out new vulnerabilities and other misbehaviors in the framework. We would like to extend our thanks to the folks at BKeeney Software, who have been helping with the project, and to Xojo for their attention to the security issues, particularly Greg O’Lone, who has been our attentive liaison and tech support. Together, we are focused on bringing you a better, safer and more powerful web application development platform so that you can keep making the killer apps of your dreams!

Three Tough Questions with Aaron Bedra

This time I interviewed Aaron Bedra about his newest creation ~ Repsheet. Check out our conversation below:


Aaron’s Bio:

Aaron is the Application Security Lead at Braintree Payments. He is the co-author of Programming Clojure, 2nd Edition, as well as a frequent contributor to the Clojure language. He is also the creator of Repsheet, a reputation-based intelligence and security tool for web applications.


Question #1:  You created a tool called Repsheet that takes a reputational approach to web application security. How does it work and why is it important to approach the problem differently than traditional web application firewalling?

I built Repsheet after finding lots of gaps in traditional web application security. Simply put, it is a web server module that records data about requests, and either blocks traffic or notifies downstream applications of what is going on. It also has a backend to process information over time and outside the request cycle, and a visualization component that lets you see the current state of the world. If you break down the different critical pieces that are involved in protecting a web application, you will find several parts:

* Solid and secure programming practices

* Identity and access management

* Visibility (what’s happening right now)

* Response (make the bad actors go away)

* HELP!!!! (DDoS and other upstream based ideas)

* A way to manage all of the information in a usable way

This is a pretty big list, and there are certainly some things that belong on it that I haven’t mentioned (crypto management, etc.), but this covers the high level. Coordinating all of this can be difficult. There are a lot of tools out there that help with pieces of the problem, but they don’t really help solve it at large.

The other problem I have is that although I think having a WAF is important, I don’t necessarily believe in using it to block traffic. There are just too many false positives and things that can go wrong. I want to be certain about a situation before I act aggressively towards it. This being the case, I decided to start by simply making a system that records activity and listens to ModSecurity. It stores what has happened and provides an interface that lets the user manually act on the information. You can think of it as a half-baked SIEM.
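
Repsheet itself is implemented as native web server modules backed by Redis, but the recording idea is simple to sketch. The toy Python example below illustrates the core loop under those assumptions; the key names, thresholds and helper functions are all hypothetical, not Repsheet’s actual schema:

```python
# Toy sketch of reputation recording, in the spirit of Repsheet (which is
# actually implemented as web server modules backed by Redis). All key names
# and thresholds here are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379)

BLACKLIST_THRESHOLD = 10          # offenses before we consider blocking
OFFENSE_TTL = 24 * 60 * 60        # forget offenses after a day

def record_offense(ip, rule):
    """Called when a detector (e.g. a ModSecurity alert) flags a request."""
    key = f"offenses:{ip}"
    r.incr(key)
    r.expire(key, OFFENSE_TTL)
    r.rpush(f"offense_log:{ip}", rule)

def reputation(ip):
    """Consulted on each request; cheap lookup, no heavy analysis in-line."""
    if r.sismember("blacklist", ip):
        return "blacklisted"
    count = int(r.get(f"offenses:{ip}") or 0)
    return "suspicious" if count > 0 else "ok"

def maybe_blacklist(ip):
    """Out-of-band backend step: promote repeat offenders to the blacklist."""
    if int(r.get(f"offenses:{ip}") or 0) >= BLACKLIST_THRESHOLD:
        r.sadd("blacklist", ip)
```

The important property is that the expensive judgment calls happen outside the request cycle; the in-line check is a cheap lookup.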

That alone actually proved to be useful, but there are many more things I wanted to do with it. The issue was doing so in a manner that didn’t add overhead to the request. This is when I created the Repsheet backend. It takes in the recorded information and acts on it based on additional observation. This can be done in any form and it is completely pluggable. If you have other systems that detect bad behavior, you can plug them into Repsheet to help manage bad actors.  

The visualization component gives you the detailed and granular view of offenses in progress, and gives you the power to blacklist with the click of a button. There is also a global view that lets you see patterns of data based on GeoIP information. This has proven to be extremely useful in detecting localized botnet behavior.

So, with all of this, I am now able to manage the bottom part of my list. One of the pieces that was recently added was upstream integration with Cloudflare, where the backend will automatically blacklist via the Cloudflare API, so any actors that trigger blacklisting will be dealt with by upstream resources. This helps shed attack traffic in a meaningful way.
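
As a rough illustration of that upstream integration (not Repsheet’s actual code; the endpoint and payload follow my reading of Cloudflare’s IP access-rules API, so treat the exact shape as an assumption to verify against current docs), the backend call amounts to something like:

```python
# Hedged sketch: push a blacklisted IP upstream so the CDN blocks it at the
# edge. The endpoint/payload shape is an assumption based on Cloudflare's
# documented IP access-rules API; verify before relying on it.
import requests

def blacklist_upstream(ip, zone_id, api_token):
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{zone_id}"
        "/firewall/access_rules/rules",
        headers={"Authorization": f"Bearer {api_token}"},
        json={
            "mode": "block",
            "configuration": {"target": "ip", "value": ip},
            "notes": "auto-blacklisted by reputation backend",
        },
        timeout=10,
    )
    resp.raise_for_status()
```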

The piece that was left unanswered is the top part of my list. I don’t want to automate good programming practices. That is a culture thing. You can, of course, use automated tools to help make it better, but you need to buy in. The identity and access management piece was still interesting to me, though. Once I realized that I already had data on bad actors, I saw a way to start integrating the data I was using in a defensive manner all the way down into the application layer itself. It became obvious that with a little more effort, I could start to create situations where security controls were dynamic, based on what I know or don’t know about an actor. This is where the idea of increased security and decreased friction really set in, and I saw Repsheet become more than just a tool for defending web applications.

All of Repsheet is open sourced with a friendly license. You can find it on Github at:

https://github.com/repsheet

There are multiple projects that represent the different layers that Repsheet offers. There is also a brochureware site at http://getrepsheet.com that will soon include tutorial information and additional implementation examples.

Question #2: What is the future of reputational interactions with users? How far do you see reputational interaction going in an enterprise environment?

For me, the future of reputation based tooling is not strictly bound to defending against attacks. I think once the tooling matures and we start to understand how to derive intent from behavior, we can start to create much more dynamic security for our applications. If we compare web security maturity to the state of web application techniques, we would be sitting right around the late 90s. I’m not strictly talking about our approach to preventing breaches (although we haven’t progressed much there either), I’m talking about the static nature of security and the impact it has on the users of our systems. For me the holy grail is an increase in security and a decrease in friction.

A very common example is the captcha. Why do we always show it? Shouldn’t we be able to conditionally show it based on what we know or don’t know about an actor? Going deeper, why do we force users to log in? Why can’t we provide a more seamless experience if we have enough information about devices, IP address history, behavior, etc? There has to be a way to have our security be as dynamic as our applications have become. I don’t think this is an easy problem to solve, but I do think that the companies that do this will be the ones that succeed in the future.
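
To make the conditional captcha idea concrete, here is a minimal sketch of friction that scales with what you know about an actor; the signals and weights are invented for the example and are not a Repsheet feature:

```python
# Illustrative sketch of friction that scales with reputation: only show a
# captcha (or step-up auth) when we lack trust signals for the actor.
# The signals and weights are invented for the example.
def risk_score(actor):
    score = 0
    if actor.get("known_device"):
        score -= 2      # device we've seen complete good sessions before
    if actor.get("ip_reputation") == "suspicious":
        score += 3
    if actor.get("failed_logins", 0) > 2:
        score += 2
    return score

def needs_captcha(actor):
    return risk_score(actor) > 0

# A returning user on a known device skips the captcha entirely:
print(needs_captcha({"known_device": True, "failed_logins": 0}))   # False
print(needs_captcha({"ip_reputation": "suspicious"}))              # True
```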

Tools like Repsheet aim to provide this information so that we can help defend against attacks, but also build up the knowledge needed to move toward this kind of dynamic security. Repsheet is by no means there yet, but I am focusing a lot of attention on trying to derive intent through behavior and make these types of ideas easier to accomplish.

Question #3: What are the challenges of using something like Repsheet? Do you think it’s a fit for all web sites or only specific content?

I would like to say yes, but realistically I would say no. The first group this doesn’t make sense for is sites without a lot of exposure or potential loss. If you have nothing to protect, then there is no reason to go through the trouble of setting up these kinds of systems. They basically become a part of your application infrastructure, and it takes dedicated time to make them work properly. Along those lines, static sites with no users and no real security restrictions don’t necessarily see the full benefit. That being said, there is still a benefit to visibility into what is going on from a security standpoint, and it can help spot events in progress or even pending attacks. I have seen lots of interesting things since I started deploying Repsheet, even botnets sizing up a site before launching an attack. Now that I have seen that, I have started to turn it into an early warning system of sorts to help prepare.

The target audience for Repsheet is companies that have already done the web security basics and want to take the next step forward. A full Repsheet deployment involves WAF and GeoIP-based tools as well as changes to the application under the hood. All of this requires time and people to make it work properly, so it is a significant investment. That being said, the benefits of visibility, response to attacks, and dynamic security are a huge advantage. Like every good investment in infrastructure, it can set a company apart from others if done properly.

Thanks to Aaron for his work and for spending time with us! Check him out on Twitter, @abedra, for more great insights!

Quick PHP Malware vs AV Update

It’s been a while since I checked on the status of PHP malware versus anti-virus, so here is a quick catch-up post. (I’ve been talking about this for a while now. Here is an old example.)

I took a randomly selected piece of PHP malware from the HITME and checked it out this afternoon. Much to my surprise, malware detection via AV has gotten better.

The malware I grabbed for the test turned out to be a multi-stage PHP backdoor. The scanner that dropped it thought it was exploiting a vulnerable WordPress installation.

I unpacked the malware parts into plain text and presented both the original packed version from the log and the unpacked version to VirusTotal for detection testing. As you know, in the past, detection of PHP malware was in the sub-single-digit percentages in many cases. That, at least to some extent, has changed. For those interested, here are the links to see what was tripped.
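
For those who haven’t unpacked one of these before: packed PHP backdoors are very commonly nested eval(gzinflate(base64_decode('...'))) wrappers, and gzinflate() is just raw DEFLATE. A minimal sketch of peeling those layers in Python (real samples mix in other encodings, so this handles only the most common wrapper):

```python
# Sketch: unwrap the common eval(gzinflate(base64_decode('...'))) PHP packing.
# gzinflate() is raw DEFLATE, which zlib handles with a -15 window bits value.
import base64
import re
import zlib

WRAPPER = re.compile(r"base64_decode\(['\"]([A-Za-z0-9+/=]+)['\"]\)")

def unpack(php_source, max_layers=10):
    for _ in range(max_layers):
        match = WRAPPER.search(php_source)
        if not match:
            break  # no more packed layers; this is the plain-text payload
        raw = base64.b64decode(match.group(1))
        php_source = zlib.decompress(raw, -15).decode("latin-1")
    return php_source

# print(unpack(open("suspect.php").read()))
```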

Decoded to plain text vs Encoded, as received

As you can see, the decoded plain-text version scored a 44% detection rate (19/43), which is significantly improved from a year or so ago. Even more exciting, the undecoded attack in its raw form triggered a 30% detection rate (13/44)! The undecoded result is HUGE, given that the same test a year or so ago often yielded 0-2% detection rates. So, it’s getting better, just SLOWLY.

Sadly, though, even with the improvements, we are still well below a 50% detection rate, and many of the AV solutions that fail to catch the PHP malware are big-name vendors with commercial products that organizations running PHP in commercial environments are likely depending on. Is your AV in the missing zone? If so, you might want to consider other, more nuanced forms of detection.

Now, obviously, organizations shouldn’t be depending on AV alone for detection of web malware. But many may be. In fact, a quick Google search for the dropped backdoor file showed 58,800 systems with the dropped page name (a semi-unique indicator of compromise). With that many systems already victim to this single variant of PHP backdoor, it might be worth checking into if you are a corporate PHP user.

Until next time, take a look around for PHP in your organization. It is a commonly missed item in patch and update cycles, and it has a pretty wide attack surface, with a long list of known attack tools and common vulnerabilities in the coding patterns used by many popular products. Give any PHP servers you have a deeper inspection and consider adding more detection capability around them. As always, thanks for reading and stay safe out there!

3 Tough Questions with Bill Sempf

Recently, I caught up over email with Bill Sempf. He had some interesting thoughts on software security, so we decided to do a 3 Tough Questions with him. Check this out! :

 

A short biography of Bill Sempf: In 1992, Bill Sempf was working as a systems administrator for The Ohio State University, and formalized his career-long association with inter-networking. While working for one of the first ISPs in Columbus in 1995, he built the second major web-based shopping center, Americash Mall, using Cold Fusion and Oracle. Bill’s focus started to turn to security around the turn of the century. Internet driven viruses were becoming the norm by this time, and applications were susceptible to attack like never before. In 2003, Bill wrote the security and deployment chapters of the often-referenced Professional ASP.NET Web Services for Wrox, and began his career in pen testing and threat modeling with a web services analysis for the State of Ohio. Currently, Bill is working as a security-minded software architect specializing in the Microsoft space. He has recently designed a global architecture for a telecommunications web portal, modeled threats for a global travel provider, and provided identity policy and governance for the State of Ohio. Additionally, he is actively publishing, with the latest being Windows 8 Application Development with HTML5 for Dummies.

 

Question #1: Infosec folks have been talking about securing the SDLC for almost a decade. If that is truly the solution, why haven’t we gotten it done yet?

For the same reason that there are still bugs in software – the time and money necessary to fix things. Software development is hard, and it takes a long time and lots of money to write secure software. Building security into the lifecycle, rather than just waiting and adding it to the test phase, is prohibitively expensive.

That said, some companies have successfully done it. Take Microsoft, for instance. For a significant portion of their history, Microsoft was the butt of nearly every joke in the security industry. Then they created and implemented the Microsoft SDL, and now Microsoft products don’t even show up on the top 10 lists anymore. It is possible and it should be done. It’s just very expensive, and companies would rather take on the risk than spend the money up front.

Question #2: How can infosec professionals learn to better communicate with developers? How can we explain how critical things like SQL injections, XSS and CSRF have become in a way that makes developers want to engage?

There are two fronts to this war: the social and the technical. I think both have to be implemented in good measure to extract any success.

On the social side, infosec pros need to get out of the lab and start talking at developer conferences. I have been doing this in good measure since 2010, and have encouraged other community members to do the same. It is starting to work. This year at CodeMash, Rob Gillen and I gave a day-long training on everything from malware analysis to Wi-Fi to data protection. The talk was so popular that we had to be moved into a bigger room. Security is starting to creep into developers’ scope of vision.

Technically, though, security flaws need to be treated just like any other defect. The application security test team needs to be part of QA, be treated just like anyone else in QA, be given access to the defect tracking system, and post defects against the system as part of the QA process. Until something like the Microsoft SDL is implemented in an organization, integrating security testing with QA is the next best thing.

Question #3: What do you think happens in the future as technology dependencies and complexities ramp up? How will every day life be impacted by information security and poor development/implementations?

More and more applications and devices are using a loosely connected model to support fast UIs and easy functional development. This means more and more business functionality is exposed in the form of SOAP and REST services. These endpoints are often formerly internal services that were used to provide the web server with functionality, but they are gradually being exposed in order to support mobile applications. Rarely are they fully tested. In the short term, this is going to be the most significant challenge in application security. In the long term, I have no idea. Things change so fast, it is nearly impossible to keep up.

 

Thanks to Bill for sharing his insights. You can discuss them with him on Twitter, where he is @sempf. As always, thanks for reading!

Quick Thought on CSRF Attacks

Yesterday, I listened to @Grap3_Ap3 present at the Columbus OWASP local chapter on Cross Site Request Forgery (CSRF). While this attack has been around since 2001, it continues to show a strong presence in web applications across a range of platforms. Phil spent a lot of his time talking about content management systems on the public Internet, but we have seen CSRF very widely exploitable on embedded devices.

Embedded devices, often equipped with rather rudimentary web servers and applications for management, have proven to be a searing hot pain point for CSRF in our research. While that isn’t shocking or new, I definitely see an interesting and potentially dangerous collision between the growth of the “Internet of Things” and web vulnerabilities. Today, some of these platforms are toys, or novelty tools built into home appliances – BUT, the future of internetworking of our devices and our physical lives means that these web controls will eventually have larger impacts on our day-to-day lives.

What happens when a CSRF attack can be used to trick your teenager into clicking on a picture on the web that, while they view it, also executes a command to raise the temperature of your refrigerator to unsafe levels? Or when an embedded link in an email tricks you into a click that turns your oven’s superheated cleaning mode on without your knowledge? Sound like a prank? Maybe. Extend it to thermostats, home automation and consumer control of alternative energy systems like solar panels, and it might take a new form.
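
The mechanics are simple enough to sketch. Because embedded devices so often accept state-changing GET requests with no per-request secret, an attacker only needs a victim’s logged-in browser to fetch a URL; the classic fix is a synchronizer token. Below is a toy Flask sketch of both the hole and the fix (all routes and parameters are hypothetical, not any real appliance’s API):

```python
# Toy sketch of the CSRF problem and the classic fix, using Flask.
# All routes and parameters are hypothetical.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"

# VULNERABLE: a state-changing GET. An attacker can trigger this from a
# victim's browser with something as simple as an <img src=...> tag.
@app.route("/fridge/set_temp")
def set_temp_vulnerable():
    return f"temp set to {request.args['value']}"

# SAFER: require POST plus a per-session synchronizer token that a page
# on another origin cannot read or forge.
@app.route("/fridge/set_temp_safe", methods=["POST"])
def set_temp_safe():
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return f"temp set to {request.form['value']}"

@app.route("/")
def index():
    session.setdefault("csrf_token", secrets.token_hex(16))
    return "embed csrf_token in your forms"
```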

We are on a collision course. Our inattention to information security and our exploding complexity and technology dependencies will soon come together in ways that may surprise us. Ignore the hyperbole and think about it rationally. Isn’t it time we worked with the organizations who make these products to demand increased protection from some of these basic, known attacks? In the future, consumers and organizations alike will vote with their dollars. How will you spend yours?

Java 0-Days are Changing Corporate Use Patterns

With all of the attention to the last few Java 0-days and their falling market value (which many folks believe indicates there are more out there and more coming), we are starting to hear some organizations change their policies around Java in general.

It seems some clients have removed it from their default workstation images, restricting it to the pile of as-needed installs. A few have reported configuring more frequent Java update checks, and a couple have talked about switching in-house development away from Java to different languages.

Is your organization changing the way you view Java? How are things changing around the IT shops you work with? 

Drop us a line in the comments or via Twitter (@microsolved or @lbhuston) and let us know what YOU think!

Ask The Experts: Getting Started with Web App Security

Question from a reader: What should I be paying attention to the most with regards to web applications? My organization has a number of Internet-facing web applications, but I don’t even know where to start to understand what the risks and exposures might be.

Adam Hostetler responds:

The first thing I would do is to identify what the applications are. Are they in house developed applications, or are they something like WordPress or another framework? What kind of information do they store (email addresses, PII, etc)? If they are in house or vendor applications, have they been assessed before? With a little knowledge of the applications, you can start building an understanding of what the risks might be. A great resource for web application risks is the OWASP project. https://www.owasp.org

Phil Grimes adds:

When it comes to web applications, I always promote a philosophy that I was raised on and continue to pound into my kids’ heads today: trust, but verify. When an organization launches an Internet-facing application, there is an immediate loss of control on some level. The organization doesn’t know that the users accessing the application are who they say they are, or that their intentions are “normal”. Sure, most people who encounter the app will either use it as intended, or, if they access the app inadvertently, they may just mosey on about their merry way. But when a user starts poking around the application, we have to rely on the development team to have secured the application. Making sure identity management is handled properly will help us ensure our users are who they say they are, and validating all data that a user might pass to the application becomes an integral part of security, ensuring possible attacks are recognized and thwarted.
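
As a minimal illustration of the “validate everything” half of that advice, here is an allow-list validation sketch in Python (the fields and patterns are invented for the example):

```python
# Sketch: allow-list validation of user-supplied fields before the
# application acts on them. Field names and patterns are examples only.
import re

VALIDATORS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "zip_code": re.compile(r"\d{5}(-\d{4})?"),
}

def validate(field, value):
    pattern = VALIDATORS.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"rejected input for {field!r}")
    return value

validate("username", "aaron_b")                          # passes
# validate("username", "a'; DROP TABLE users;--")        # raises ValueError
```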

John Davis comments:

I would say that the most important thing is to ensure that your Internet-facing web applications are coded securely. For some time now, exploiting coding weaknesses in web applications has been one of the leading attack vectors used by cyber criminals to compromise computer networks. For example, poor coding can allow attackers to perform code injection and cross-site scripting attacks against your applications. The Open Web Application Security Project (OWASP), which is accessible on the Internet, is a good place to learn more about secure web application coding techniques. Their website contains lots of free tools and information that will help your organization in this process. There are also professional information security organizations (such as MicroSolved) that can provide your organization with comprehensive application security assessments.
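
To make the code injection point concrete, the single most common fix is parameterized queries. Here is a sketch using Python’s built-in sqlite3 module (the schema and data are invented for the example):

```python
# Sketch: SQL injection and its standard fix. The schema is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# UNSAFE: string concatenation lets the input rewrite the query.
# rows = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# SAFE: a bound parameter is treated strictly as data, never as SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection attempt matches nothing
```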

As always, thanks for reading and let us know if you have questions for the experts.

ProtoPredator to Become Family of Products

On Nov 16, we announced the availability of ProtoPredator for Smart Meters (PP4SM). That tool, aimed at security and operational testing of optical interfaces, has been causing quite a stir. Lots of vendors and utilities have been in touch to hear more about the product and the capabilities it brings to bear.

We are pleased with the interest in the PP4SM release and happy to discuss some of our further plans for future ProtoPredator products. The idea is for the ProtoPredator line to expand into a family of products aimed at giving developers, device designers and owners/operators a tool set for doing operational and security testing. We hope to extend the product family across a range of ICS protocols. We are currently working on a suite of ProtoPredator tools in our testing lab, even as we “scratch our own itch” and design them to answer the needs we have in performing testing of SmartGrid and other ICS component security assessments and penetration tests.

Thanks to the community for their interest in ProtoPredator. We have a lot more to come and we greatly appreciate your support, engagement and feedback.

ProtoPredator for Smart Meters Released

Today, MicroSolved, Inc. is proud to announce the availability of its newest software product – ProtoPredator™ for Smart Meters (PP4SM). This tool is designed for smart meter manufacturers, owners and operators to be able to easily perform security and operational testing of the optical interfaces on their devices.

PP4SM is a professional grade testing tool for smart meter devices. Its features include:

  • Easy to use Windows GUI
  • Easy to monitor, manage and demonstrate testing to management teams
  • Packet replay capability empowers testers to easily perform testing, verification and demonstrations
  • Manual packet builder 
  • Packet builder includes a standards compliant automated checksum generator for each packet
  • Automated packet session engine 
  • Full interaction logging
  • Graphical interface display with real time testing results, progress meters and visual estimations
  • Flexibility in the testing environment or meter conditions

The tool can be used for fuzzing smart meter interactions, testing protocol rule enforcement, regression testing, fix verification and even as a mechanism to demonstrate identified issues to management and other stakeholders. 
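
For readers new to this kind of testing, here is a generic sketch of what packet-mutation fuzzing over a serial or optical link involves. This is not PP4SM code: the port settings, the baseline packet and the use of the pyserial library are all placeholder assumptions:

```python
# Generic sketch of packet-mutation fuzzing over a serial line, in the same
# spirit as optical-interface testing. Port, baud rate and the baseline
# packet are placeholders; pyserial (pip install pyserial) is assumed.
import random
import serial

BASELINE_PACKET = bytes.fromhex("ee000000000000")  # placeholder bytes

def mutate(packet):
    data = bytearray(packet)
    i = random.randrange(len(data))
    data[i] = random.randrange(256)   # flip one byte at random
    return bytes(data)

def fuzz(port="/dev/ttyUSB0", rounds=100):
    with serial.Serial(port, 9600, timeout=1) as link:
        for n in range(rounds):
            pkt = mutate(BASELINE_PACKET)
            link.write(pkt)
            reply = link.read(64)
            print(f"round {n}: sent {pkt.hex()} got {reply.hex()}")

# fuzz()  # uncomment with a real port attached
```

A production tool layers protocol awareness, checksum regeneration and full interaction logging on top of this basic loop, which is exactly the gap a purpose-built product aims to fill.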

ProtoPredator for Smart Meters is available commercially through a vetted licensing process. Licenses are available to verified utility companies, asset owners, asset operators and manufacturers of metering devices. For more information about obtaining PP4SM, or to learn more about the product, please contact an MSI account representative.

More information is available via:

Twitter: @lbhuston

Phone: (614) 351-1237 ext 206

Email: info /at/ microsolved /dot/ com (please forgive the spam obfuscation…) 🙂

What Is Your Browser Leaking?

Today in my tweet stream, someone pointed out this site and I wanted to blog about it. The site is called stayinvisible.com, and it offers a quick view of some of the data that is available to any web site you visit, or to an attacker who can lure you to a website of their choosing.

The site displays a dump of a variety of common data points that you might not be aware are leaking from your browser. There are also tips for hardening your browser settings and operating system against some of the methods used to dump the data.
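
If you would like to see a slice of this data first-hand, a few lines of Python will echo back what every site you visit receives before any scripting even runs (a minimal sketch using only the standard library; sites like this one gather additional data via JavaScript):

```python
# Minimal sketch: run this, browse to http://localhost:8080/, and watch what
# your browser hands over on every request before any JavaScript runs.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        lines = [f"{k}: {v}" for k, v in self.headers.items()]
        lines.append(f"client address: {self.client_address[0]}")
        body = "\n".join(lines).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), EchoHandler).serve_forever()
```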

If nothing else, it might just provide an “ah ha” moment for folks not used to the information security space. Give it a try and let us know what you think of it. 

We have no association with the site, its content or the folks who run it. We just thought it was interesting. Your paranoia may vary. 🙂