It’s Dev, not Diva – Don’t set the “stage” for failure

Development: “the act, process, or result of developing; the development of new ideas.” This is one of the Merriam-Webster definitions of development.

It doesn’t really matter what you call it…dev, development, stage, test. Software applications tend to be in flux, and the developers, programmers, testers, and ancillary staff need a place to work on them.

Should that place be out on the internet? Let’s think about that for a minute. By their very nature, dev environments aren’t complete. Do you want a work in progress, with unknown holes, to be externally facing? This doesn’t strike me as the best idea.

But, security peeps, we HAVE to have it facing the internet – because REASONS! (Development types…tell me what your valid reasons are?)

And it will be fine – no one will find it, we won’t give it a domain name!

Security through obscurity will not be your friend here…with the advent of Shodan, Censys.io, and other venues…they WILL find it. Ideally, you should only allow access via VPN or other secure connection.
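That restriction properly belongs at the network layer (firewall rules or a VPN concentrator), but a belt-and-suspenders check in the application itself is cheap. A minimal sketch, assuming a Flask app and a hypothetical VPN address pool of 10.8.0.0/24:

```python
import ipaddress
from flask import Flask, abort, request

app = Flask(__name__)
VPN_NETWORK = ipaddress.ip_network("10.8.0.0/24")  # hypothetical VPN address pool

@app.before_request
def require_vpn_source():
    # Even if a firewall rule slips, refuse any traffic that didn't arrive via the VPN.
    if ipaddress.ip_address(request.remote_addr) not in VPN_NETWORK:
        abort(403)
```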

What could possibly go wrong? Well, here’s a short list of SOME of the things that MSI has found or used to compromise a system from an internet-facing development server:

  • A test.txt file with sensitive information about the application, configuration, and credentials.
  • Log files with similar sensitive information.
  • .git directories that exposed keys, passwords, and other key development information (a quick check for this is sketched just after this list).
  • A development application that had weak credentials was compromised – the compromise allowed inspection of the application, and revealed an access control issue. This issue was also present in the production application, and allowed the team to compromise the production environment.
  • An unprotected directory that contained a number of files including a network config file. The plain text credentials in the file allowed the team to compromise the internet facing network devices.
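As promised above, checking whether your own dev host exposes its .git directory takes only a few lines. A sketch using the requests library (the hostname is a placeholder):

```python
import requests

def git_exposed(base_url: str) -> bool:
    # An exposed repository will usually serve .git/HEAD, which starts with "ref:".
    resp = requests.get(f"{base_url}/.git/HEAD", timeout=5)
    return resp.status_code == 200 and resp.text.startswith("ref:")

if __name__ == "__main__":
    print(git_exposed("https://dev.example.com"))  # placeholder host
```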

And the list keeps going.

But, security peeps – our developers are better than that. This won’t happen to us!

The HealthCare.Gov breach https://www.csoonline.com/article/2602964/data-protection/configuration-errors-lead-to-healthcare-gov-breach.html in 2014 was the result of a development server that was improperly connected to the internet. “Exact details on how the breach occurred were not shared with the public, but sources close to the investigation said that the development server was poorly configured and used default credentials.”

Another notable breach occurred in 2016 – an outsourcing company named Capgemini https://motherboard.vice.com/en_us/article/vv7qp8/open-database-exposes-millions-of-job-seekers-personal-information exposed the personal information of millions of job seekers when their IT provider connected a development server to the internet.

The State of Vermont also saw their health care exchange – Vermont Health Connect – compromised in 2014 https://www.databreachtoday.asia/hackers-are-targeting-health-data-a-7024 when a development server was accessed. The state indicated this was not a breach, because the development server didn’t contain any production data.

So, the case is pretty strong: internet-facing development servers are a bad idea.

Questions? Comments? What’s your take from the development side? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

If you would like to know more about MicroSolved or its services please send an e-mail to info@microsolved.com or visit microsolved.com.

Because you know it’s all about them apps, ’bout them apps…

Know thyself – Socrates

I ran across this link last week, from SANS, and it’s one of the better basic checklists I’ve seen for application security. With all due respect to OWASP, their information is more technical, and useful for practitioners – their testing guide is here. For the CIO level crowd, I’d highly recommend a look at their top 10 for 2017. And a serious nod to Bill Sempf – if you haven’t heard his talk about care and feeding of developers in the security space, go find it!

Since this missive was designed to have pretty pictures and convince you to send your developers to the SANS courses listed, it’s a nice start for security practitioners who may need to work with developers, but aren’t 100% versed in application security. Some of this info is more basic than OWASP’s, as well – which does not diminish its importance. Let’s talk about what they list here, and why it’s important.

Error handling and logging:

Don’t display the specific error messages generated by your programs/architecture, and don’t allow unhandled exceptions – both of these items can display information about the underlying architecture of your application. Attackers can leverage this information and any associated vulnerabilities to compromise the application. If the user creates a condition that generates an error, offer them enough information to fix the problem – nothing more, nothing less.

Don’t allow specific framework errors…”the X program says you broke Y variable” – suppress them. Allowing these errors discloses potentially useful information about the framework and architecture to attackers.
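A minimal sketch of that split, using Flask as an example framework – the full exception goes to the server-side log, and the user sees only a generic message:

```python
import logging
from flask import Flask, jsonify

app = Flask(__name__)
log = logging.getLogger("app.errors")

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Full details stay server-side for ops and security teams...
    log.exception("Unhandled exception")
    # ...while the user learns nothing about the framework or stack.
    return jsonify(error="Something went wrong. Please try again later."), 500
```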

Log all the things! Log authentication attempts – successful or not. Log privilege changes – successful or not. Log all administrative activity, or administrative attempts. Log any and all access and access attempts to sensitive information.

Log all the things….except when you don’t. Don’t log sensitive information. Log the admin attempts, but not admin passwords. Don’t log any information that falls under HIPAA, PCI, or other regulatory spheres.
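For example, an authentication log entry might capture who, from where, and the outcome – and nothing else. A small illustrative sketch (the names are made up):

```python
import logging

auth_log = logging.getLogger("app.auth")

def record_login_attempt(username: str, success: bool, source_ip: str) -> None:
    # Record the attempt and its outcome -- never the credential itself.
    auth_log.info(
        "login attempt user=%s source=%s result=%s",
        username, source_ip, "success" if success else "failure",
    )
```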

Store logs securely. Plain text in an internet facing share? Not the world’s best idea. Encrypt, secure, and protect against data loss and tampering. If you have a data retention policy, make sure that logs are included and the policy is followed.

Data Protection:

Turn ON HTTPS, turn OFF HTTP. The same URL should not be accessible via HTTP. Get your HTTPS certificates from a respectable CA – no self-signed certificates. Accepting them is bad practice, and you risk giving the impression that you haven’t done your due diligence, AND conditioning your users to bypass this simple security measure.
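The redirect half of that is often handled at the load balancer or web server, but as an in-application sketch (Flask again, purely for illustration):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send anything that arrived over plain HTTP to the HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```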

Disable weak ciphers. Don’t wait for the 4,732nd vulnerability, and don’t argue that these vulnerabilities are difficult to exploit. The NEXT one might not be. Get your SSL/TLS house in order.
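On the application side, Python’s standard library makes the “no legacy protocols” part a one-liner. A sketch of a server-side TLS context (the certificate paths are placeholders):

```python
import ssl

# Refuse SSLv3/TLS 1.0/1.1 outright; modern library defaults handle cipher selection.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain("server.crt", "server.key")  # placeholder paths
```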

Don’t allow auto-complete. Yes, some browsers will ignore the setting – their bad practices shouldn’t be used to justify your bad practices.

Avoid storing user info. Tokenize when possible. If you have to store passwords, encrypt, salt, spindle, mutilate and fold. There’s no such thing as TOO safe here.
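For the password piece specifically, “salt, spindle, mutilate and fold” translates to a slow, salted hash. A standard-library sketch using scrypt (the cost parameters are illustrative, not a recommendation for your environment):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)  # unique salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest    # store both; never store the password itself

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, expected)
```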

Operations:

Have a consistent, repeatable process for…application development, testing, change control. Include security requirements at the beginning of the design – don’t try to shoehorn them in after the fact.

Review, review, review. Code reviews. Design reviews. Security testing – as you go, not at the end. Harden the environment per best practices.

Train your developers on security! Work as partners, not as the guys who make stuff and those security guys that always say no.

Have an incident response plan. TEST your plan, evaluate your plan, use your plan. Do not wait until something DOES happen to discover the holes in your plan. Keep your plan updated as staff contacts and responsibilities change. Do disaster recovery exercises.

Authentication:

Hard coded credentials. Don’t. Just don’t. But I need to because….no. You do not. There are safer ways to do this.

Have a strong password policy. Have a strong password reset – do not accidentally disclose things like the validity of an account via the password reset mechanism. Do have a password lockout policy – unlimited attempts is an invitation to a brute force attack.

Again, make sure your error messages aren’t handing valuable information to attackers.

Run applications and middleware with the least privilege required. Database passwords are gold – do not put them in code. Guard them. But I need to because…again, you do not. Do it right, don’t do it over.
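One of those “safer ways”: keep the credential out of the source tree entirely and read it from the deployment environment or a secrets manager at runtime. A minimal sketch with an illustrative variable name:

```python
import os

def get_db_password() -> str:
    # The credential lives in the deployment environment or a vault,
    # never in the repository or a compiled artifact.
    password = os.environ.get("APP_DB_PASSWORD")
    if password is None:
        raise RuntimeError("APP_DB_PASSWORD is not set")
    return password
```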

Session management:

Put a logout button on every page. Every. Page. Then, invalidate the session once they’ve logged out – no back button resumption of the session.

Randomize your session tokens, so that they are not vulnerable to predictive attacks. Regenerate them as user permissions change. Unless the application requires multiple connections – and you have a legitimate need to DO this – destroy tokens in multiple sessions. Don’t leave yourself open to session cloning.
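If you do have to mint tokens yourself, the randomness part is easy to get right. A sketch using Python’s secrets module (the token length is an assumption):

```python
import secrets

def new_session_token() -> str:
    # Cryptographically random, so tokens cannot be predicted or enumerated.
    return secrets.token_urlsafe(32)

def on_privilege_change(session: dict) -> None:
    # Re-issue the token whenever the user's rights change.
    session["token"] = new_session_token()
```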

Cookies. And not the chocolate chip kind. Set the domain and path correctly. Use secure cookie attributes, and expire cookies as appropriate.

Log users out automatically after a reasonable idle period. Implement an absolute logout – there are few, if any, legitimate reasons to be logged in forever.
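In many frameworks, most of the cookie and timeout guidance above reduces to a handful of configuration flags. A Flask example for illustration (the values are examples, not recommendations):

```python
from datetime import timedelta
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,     # only send the cookie over HTTPS
    SESSION_COOKIE_HTTPONLY=True,   # keep it away from JavaScript
    SESSION_COOKIE_SAMESITE="Lax",  # limit cross-site sends
    PERMANENT_SESSION_LIFETIME=timedelta(minutes=30),  # idle timeout
)
```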

Input & Output handling:

Whitelist over blacklist. Only accept data that meets the criteria for your application.

Validate, validate, validate. Validate uploaded files – consider all uploads as suspect, and sandbox accordingly. Validate input sources.
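In code, whitelisting means describing exactly what good input looks like and rejecting everything else. A small sketch for a hypothetical username field:

```python
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # only what we expect

def validate_username(value: str) -> str:
    # Anything that doesn't match the known-good pattern is rejected outright.
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")
    return value
```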

Follow the OWASP recommendations, many detailed in the link above, for input, output, and safe transport.

Access control:

Apply access controls consistently. Use “gate keeper” technology, so that all requests are validated and verified, whether the user is logged in or not.

Don’t allow unvalidated forwards or redirects. This gives an attacker potential capability to access content without authentication.
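The standard fix is to only ever redirect to destinations you explicitly trust. A sketch, with a hypothetical allowlist:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example.com", "portal.example.com"}  # hypothetical allowlist

def safe_redirect_target(url: str) -> str:
    host = urlparse(url).netloc
    # Relative URLs (no host) stay on our site; anything else must be allowlisted.
    if host and host not in ALLOWED_HOSTS:
        return "/"
    return url
```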

Least privilege rules. Make access control mandatory, don’t elevate rights when you don’t absolutely need to. Don’t use direct object references to validate access.
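And avoiding direct object references as the access check means verifying ownership on every lookup, rather than trusting that the caller knows a valid ID. A sketch with illustrative names:

```python
def get_invoice(db, invoice_id: int, current_user_id: int):
    invoice = db.find_invoice(invoice_id)  # hypothetical data-access layer
    # Knowing (or guessing) the ID is not enough; the record must belong to the caller.
    if invoice is None or invoice.owner_id != current_user_id:
        raise PermissionError("not authorized")
    return invoice
```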

There’s a lot more than I’ve included here….don’t understand these? Need more info? Talk to your developers. Buy ’em a burger. Buy ’em a beer. Become the guy who listens, and attempts to understand….not the jerk that always says no. If you make an honest effort to understand them, and to help them understand you, you’ll both be better for the attempt.

Got a development war story? Got a good development story? Please reach out – @TheTokenFemale on Twitter. Let’s keep the conversation going.

Drupal Security Best Practices Document

This is just a quick post to point to a great guide on Drupal security best practices that we found recently. 

It was written for the Canadian government and is licensed under the Open Government License platform. 

The content is great and it is available free of charge. 

If your organization uses Drupal, you should definitely check it out and apply the guidance as a baseline! 

Pointers for Mobile App Certificate Pinning

We often get questions about Certificate Pinning in mobile applications. Many clients find the issue difficult to explain to other teams.

You can find really great write ups, and an excellent set of source code examples for fixing this issue – as well as explaining it – at this OWASP.org site.

At a super high level though, you basically want your mobile application to validate the SSL certificate of the specific server(s) that you want it to talk to, and REJECT any certificates that do not match the intended server certificate – REGARDLESS of whether or not the underlying OS trusts the alternative certificate.

This will go a long way to hardening the SSL communication streams between the app and the server, and will not permit easy interception or man-in-the-middle attacks via a network provider or hostile proxy server.
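The OWASP page above has the platform-specific (Android/iOS) code; as a language-neutral sketch of the idea, here is a Python example that compares the server’s leaf certificate against a pinned SHA-256 fingerprint (the host and fingerprint are placeholders):

```python
import hashlib
import socket
import ssl

PINNED_SHA256 = "0" * 64  # placeholder fingerprint of the intended server certificate

def connect_with_pin(host: str, port: int = 443) -> ssl.SSLSocket:
    context = ssl.create_default_context()  # normal CA validation still applies
    sock = context.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    # Reject anything that doesn't match the pinned fingerprint, trusted CA or not.
    if fingerprint != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate does not match the pinned fingerprint")
    return sock
```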

Updates to the app source code are needed to mitigate the issue, and you may need to update apps in the app stores, depending on the way your app is delivered.

As always, if you work with MSI on mobile app security reviews or application-specific penetration testing, we would be happy to demonstrate the attacks and suggested mitigations for any identified issue. Just let us know if you would like assistance.

As always, thanks for reading and I hope your team finds this useful.

Sometimes, It Happens…

Sometimes things fail in interesting ways. Sometimes they fail in dangerous ways. Occasionally, things fail in ways that you simply can’t predict and that are astounding.

In a recent assessment of a consumer device in our lab, we found the usual host of vulnerabilities that we have come to expect in Internet of Things (IoT) devices. But, while testing this particular device, which is also tied to a cloud offering for backup and centralization of data – I never would have predicted that a local device would have a full bi-directional trust with a virtual instance in the cloud.

Popping the local device was easy. It had an easy to compromise “hidden” TCP port for telnet. It took my brute force tool only moments to find a default login and password credential set. That’s pretty usual with IoT devices.

But, once I started poking around inside the device, it quickly became apparent that the device configuration was such that it tried to stay continually connected to a VM instance in the “cloud storage and synchronization” environment associated with the device and vendor. How strong was the trust? The local device had mount points on the remote machine and both systems had full trust to each other via a telnet connection. From the local machine, simply telnet to the remote machine on the right port, and without credential check, you have a shell inside the cloud. Not good…

But, as clear of a failure as the scenario above was, the rabbit hole went deeper. From the cloud VM, you could see thousands of other VMs in the hosted cloud environment. Connect from the VM to another, and you need the default credentials again, but, no sweat, they work and work and work…

So, from brute force compromise of a local piece of consumer hardware to a compromise of thousands of cloud instance VMs in less than 30 minutes. Ugh… 

Oh yeah, remember that storage centralization thing? Yep, default credentials will easily let you look through the centralized files on all those cloud VMs. Double ugh…

Remember, I said bi-directional? Yes, indeed, a connection from a VM to an end-point IoT device also works with assumed trust, and you get a shell on a device with local network visibility. Now is the time you kinda get sick to your stomach…

These kinds of scenarios are becoming more common as new IoT devices get introduced into our lives. Yes, the manufacturer has been advised, but, closing the holes will take a complete redesign of the product. The moral of this story is to pay careful attention to IoT devices. Ask questions. Audit. Assess. Test. There are a lot of bad security decisions being made out there in the IoT marketplace, especially around consumer products. Buyer beware!

Getting Smart with Mobile App GeoLocation to Fight Fraud

If your mobile application includes purchases with credit cards, and a pickup of the merchandise, then you should pay attention to this.

Recently, in our testing lab and during an intelligence engagement, we identified a fraud mechanism where stolen credit cards were being used via the mobile app in question, to fraudulently purchase goods. In fact, the attackers were selling the purchase of the goods as a service on auction and market sites on the dark web.

The scam works like this. The bad guys have stolen credit cards (track data, likely from dumps), which they use to make a purchase for their client remotely. The bad guys use their stolen track data in a card-not-present transaction, which is standard for mobile apps. The bad guys have access to huge numbers of stolen cards, so they can burn them at a substantial rate without significantly impacting their inventory. The bad guy’s customer spends $25 in bitcoins to get up to $100 in merchandise. The bad guy takes the order from the dark net, uses the mobile app to place the order, and then delivers the receipt and/or pickup information to the bad guy’s customer. The customer then walks into the retailer, shows the receipt for the mobile order, picks up the merchandise, and leaves.

The bad guy gets paid via the bitcoins. For them, this is an extremely low risk way to convert stolen credit card info to cash. It is significantly less risky for them than doing physical card replication, ATM use or other conversion methods that have a requirement for physical interaction.

The bad guy’s customer gets paid by picking up the merchandise. They get up to $100 in value for a cost of $25. They take on some risk, but if performed properly, the scam is low risk to them – or so they believe. In the odd event that they are challenged, they simply make their demands for satisfaction and leave the store. There is little risk of arrest or prosecution, it would seem, especially at the low rate of $100 – or at least that was how the bad guy was pitching it to their prospective customers…

The credit card issuer or the merchant gets stuck. They are out the merchandise and/or the money, depending on their location in the world, and the merchant agreement/charge back/PCI compliance issues they face.

Understanding the fraud and motivations of the bad guys is critical for securing the systems in play. Organizations could up their validation techniques and vigilance for mobile orders. They could add additional fraudulent transaction heuristics to their capability. They could also implement geo-location on the mobile apps as a control – e.g., if the order is being physically placed on a device in Ukraine, and pickup is in New York, there is a higher level of risk associated with that transaction. Identifying ways to leverage the sensors and data points from a mobile device, and rolling them into fraud detection heuristics and machine learning analytics, is the next wave of security for some of these applications. We are pleased to be helping clients get there…
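A first cut at that geolocation heuristic is simply “how far is the ordering device from the pickup location?” A sketch using the haversine formula (the 500 km threshold is an arbitrary example, not a recommendation):

```python
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def order_risk_level(device_lat, device_lon, store_lat, store_lon) -> str:
    # The farther the ordering device is from the pickup store, the more scrutiny the order gets.
    km = distance_km(device_lat, device_lon, store_lat, store_lon)
    return "high" if km > 500 else "normal"
```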

To hear more about modern fraud techniques, application security testing or targeted threat intelligence like what we discussed above, drop us a line (info at microsolved dot com) or via Twitter (@lbhuston). We look forward to discussing it with your team.

Hosting Providers Matter as Business Partners

Hosting providers seem to be an often overlooked exposure area for many small and mid-size organizations. In the last several weeks, as we have been growing the use of our passive assessment platform for supply chain assessments, we have identified several instances where the web site hosting company (or design/development company) is among the weakest links. Likely, this is due to the idea that these services are commodities and they are among the first areas where organizations look to lower costs.

The fallout from that issue, though, can be problematic. In some cases, organizations are finding themselves doing business with hosting providers who reduce their operational costs by failing to invest in information security.* Here are just a few of the most significant issues that we have seen in this space:
  • “PCI accredited” checkout pages hosted on the same server as other sites that are clearly under the control of an attacker
  • Exposed applications and services with default credentials on the same systems used to host web sites belonging to critical infrastructure organizations
  • Dangerous service exposures on hosted systems
  • Malware infested hosting provider ad pages, linked to hundreds or thousands of their client sites hosted with them
  • Poorly managed encryption that impacts hundreds or thousands of their hosted customer sites
  • An interesting correlation of blacklisted host density to geographic location and the targeted verticals that some hosting providers sell to
  • Pornography being distributed from the same physical and logical servers as traditional businesses and critical infrastructure organizations
  • A clear lack of DoS protection or monitoring
  • A clear lack of detection, investigation, incident response and recovery maturity on the part of many of the vendors 
It is very important that organizations realize that today, much of your risk extends well beyond the network and architectures under your direct control. Partners, and especially hosting companies and cloud providers, are part of your data footprint. They can represent significant portions of your risk, and yet, are areas where you may have very limited control. 
 
If you would like to learn more about using our passive assessment platform and our vendor supply chain security services to help you identify, manage and reduce your risk – please give us a call (614-351-1237) or drop us a line (info /at/ MicroSolved /dot/ com). We’d love to walk you through some of the findings we have identified and share some of the insights we have gleaned from our analysis.
 
Until next time, thanks for reading and stay safe out there!
 
*Caveat: This should not be taken that information security is correlated with cost. We have seen plenty of “high end”, high cost hosting companies with very poor security practices. The inverse is also true. Validation is the key…

Clients Finding New Ways to Leverage MSI Testing Labs

Just a reminder that MSI testing labs are seeing a LOT more usage lately. If you haven’t heard about some of the work we do in the labs, check it out here.

One of the ways that new clients are leveraging the labs is to have us mock up changes to their environments or new applications in HoneyPoint and publish them out to the web. We then monitor those fake implementations and measure the ways that attackers, malware and Internet background radiation interacts with them.

The clients use these insights to identify areas to focus on in their security testing, risk management and monitoring. A few clients have even done A/B testing using this approach, looking for the differences in risk and threat exposures via different options for deployment or development.

Let us know if you would like to discuss such an approach. The labs are a quickly growing and very powerful part of the many services and capabilities that we offer our clients around the world! 

Podcast Episode 7 Now Available

The newest version of the State of Security Podcast is now available. You can go to the main page here, or listen by clicking on the embedded player below.

This episode features a great interview with Mark “Phork” Carey. We riff on the future of technology & infosec, how machine learning might impact security in the long term, what it was like to build the application-centric web with Sun, lessons learned from decades of hardware hacking, and a whole lot more! The short for this month is with @pophop, so check out what the self-proclaimed “elder geek” has to say as he spreads some wisdom. Let us know what you think and send in ideas for other folks you would like to hear on the podcast.

Windows Server 2003 – End of Life

Windows Server 2003 has officially reached its end-of-life date. Does this mean that all of your Windows Server 2003 servers will be hacked on July 16th? Probably not. However, it is worthwhile to ensure that your organization has a plan in place to migrate all of your applications and services off of this legacy operating system. This is especially true if you have any Windows Server 2003 systems that are exposed to the internet. It is only a matter of time until a new vulnerability is discovered that affects this operating system.

As a former Windows Systems Administrator, I understand how difficult it can be to convince an application owner to invest the time and resources into migrating a system or service to a new operating system. Despite the fact that these systems have a heightened risk of being compromised, it’s very possible that your organization doesn’t have the financial resources to migrate your applications and services to a new operating system. You’re not alone. I found over 1.3 million servers running IIS 6.0 in Shodan. Over 688,000 of these servers are in the United States. However, there are still ways to reduce the risk of hosting these legacy operating systems until a migration plan is put into place.
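If you want to measure your own exposure the same way, the Shodan Python library will give you a count in a few lines (the API key is a placeholder, and the query string is an approximation you would tune to your own footprint):

```python
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder API key
# Approximate query for IIS 6.0 banners; refine it for what you actually need to find.
result = api.count('"Microsoft-IIS/6.0"')
print(f"Exposed hosts matching the query: {result['total']}")
```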

A few ways to reduce the risk of hosting an application on a legacy operating system are:

  • Discover and document – You can’t protect a system if you don’t know it exists. Take some time to identify and document all of the legacy and unsupported operating systems in your network.
  • Learn about the application – Take some time to learn some details about the application. Is it still even being accessed? Who uses it? Why is it still hosted on an unsupported operating system? Are there other options available?
  • Educate the business users – If financial resources are an issue, take some time to explain the risks of hosting this application to the business users. Once they gain an understanding of the risk associated with hosting their application on a legacy OS, they can help secure funding to ensure that the application is upgraded.
  • Isolate – Segmenting the legacy system can reduce the risk that it is accessed by an attacker. It also can decrease the likelihood that a compromise of the legacy system will spread to other servers.
  • Update and secure – Install all available patches and updates. Not only for the operating system, but the hosted applications as well.
  • Perform thorough log analysis – Implement some sort of centralized logging platform to ensure you have the ability to detect any anomalies that occur within these systems.
  • Plan for the worst – Be prepared. Have a plan in place for responding to an incident involving these systems.