About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

Quick Use Case for HoneyPoint in ICS/SCADA Locations

[Diagram: example HoneyPoint deployment points within an ICS/SCADA network]

This quick diagram shows a couple of ways that many organizations use HoneyPoint as a nuance detection platform inside ICS/SCADA deployments today.

Basically, they use HoneyPoint Agent/Decoy to pick up scans and probes, configuring it to emulate an endpoint or a PLC. This is particularly useful for detecting wider-area scans, probes and malware propagation.
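
HoneyPoint Agent is a commercial product, so its internals aren't shown here; purely as a sketch of the underlying idea, the minimal Python listener below pretends to be a Modbus/TCP endpoint on port 502 and logs every connection attempt. The port choice and log path are illustrative assumptions, and binding to port 502 typically requires elevated privileges.

```python
# Minimal illustration only -- NOT HoneyPoint Agent. A bare TCP listener that
# emulates a PLC service (Modbus/TCP on port 502 here) and logs every touch.
import socket
import datetime

LISTEN_PORT = 502            # Modbus/TCP; any "interesting" port works
LOG_FILE = "probe_log.txt"   # where connection attempts are recorded

def run_decoy(port=LISTEN_PORT):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        stamp = datetime.datetime.now().isoformat()
        with open(LOG_FILE, "a") as log:
            log.write(f"{stamp} probe from {addr[0]}:{addr[1]} on port {port}\n")
        conn.close()   # no real service behind it; any contact is suspicious

if __name__ == "__main__":
    run_decoy()
```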

Additionally, many organizations are finding value in HoneyPoint Wasp, using its whitelist detection capabilities to identify new code running on HMIs, Historians or other Windows-based telemetry systems. In many cases, Wasp can quickly and easily identify malware, worms or even unauthorized updates to ICS/SCADA components.
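
Again, this is not Wasp itself; the sketch below only illustrates the whitelist-detection concept on a Windows host: snapshot the running executables and flag anything not on an approved list. The psutil dependency and the example whitelist entries are assumptions for illustration.

```python
# Illustration of whitelist ("known good") process detection, not Wasp itself.
# Requires the third-party psutil package: pip install psutil
import psutil

# Approved executables for this HMI/Historian box -- example values only
WHITELIST = {
    "svchost.exe",
    "historian.exe",
    "hmi_runtime.exe",
}

def find_unknown_processes():
    """Return process names that are running but not on the whitelist."""
    unknown = set()
    for proc in psutil.process_iter(["name"]):
        name = (proc.info.get("name") or "").lower()
        if name and name not in WHITELIST:
            unknown.add(name)
    return unknown

if __name__ == "__main__":
    for name in sorted(find_unknown_processes()):
        print(f"ALERT: non-whitelisted process running: {name}")
```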

The Smart Grid Raises the Bar for Disaster Recovery

As we present to multiple smart grid and utility organizations, we find that many folks are focused on the confidentiality, integrity, privacy and fraud components of smart grid systems.

Our lab is busily working with a variety of providers, component vendors and other folks doing security assessments, code review and penetration testing against a wide range of systems, from the customer premises to the utility back office and everything in between. However, we consistently see many organizations underestimating the costs and impacts of disaster recovery, business continuity and other efforts involved in responding to issues when the smart grid is in play.

For example, when asked about smart meter components recently, one of our water utility clients had completely ignored the susceptibility of these computing devices to water damage in a flood-prone or high-rain area. It seems simple, but even though the devices are installed in in-ground pits in neighborhoods, the question of what happens when they are exposed to water had never been discussed. The vendor claimed that the devices were “water resistant”, but that is much different from “waterproof”. Filling a tub with water and submerging a device quickly demonstrated that the casing allowed a large volume of water into the device, and that when power was applied, the device simply shorted out in what we can only describe as “an interesting display”.

The problem with this is simple. Sometimes areas where this technology is eventually intended to be deployed will experience floods. When that happens, the smart meter and other computational devices may have to be replaced en masse. If that happens, there is a large cost to be considered, there are issues with labor force availability/safety/training and there are certainly potential issues with vendor supply capabilities in the event of something large scale (like Hurricane Katrina in New Orleans).

Many of the organizations we have talked to simply have not begun the process of adjusting their risk assessments, disaster plans and the like for these types of operational requirements, even as smart grid devices begin to proliferate across the US and global infrastructures.

There are a number of other examples, ranging from petty theft (computer components have aftermarket value, and large-scale theft of components is probable in many cases) to outright hundred-year events like hurricanes, floods, earthquakes and tornadoes. The bottom line is this: smart grid components introduce a whole new layer of complexity to utilities and the infrastructure. Now is the time for organizations considering or already using them to get their heads and business processes wrapped around these devices, both in today's deployments and in those likely to emerge in the years to come.

How to Choose a Security Vendor: Beware of “Free InfoSec”

In your search for security vendors, beware of those who offer assessments on a “we find holes or it's free” basis. Below are a few points to consider when evaluating your choices.

  1. Security testing choices should not be based on price. They should be based on risk. The goal is to reduce the risk that any given operation (application, network, system, process, etc.) presents to the organization to a manageable level.

    Trust me, I have been in the security business for 20 years, and all vendor processes are NOT created equal. Many variations exist in depth, skill level, scope, reporting capability, experience, etc. As such, selecting security testing vendors based upon price is a really bad idea. Matching a vendor's specific experience, reporting style and technical capabilities to your environment and needs is a far better approach, for too many reasons to expound upon here.
     

  2. The “find vulnerabilities or it's free” mentality can backfire. It's hard enough for developers and technical teams to take their lumps from a security test when holes emerge, but tying that to price makes it doubly difficult — “Great, I pay now because Tom made some silly mistake!” is just one possibility. How do you think management may handle that? What about Tom?

    Believe me, there can be long-term side effects for Tom's career, especially if he is also blamed for breaking the team's budget in addition to causing them to fail an audit.
     

  3. It actually encourages the security assessment team to make mountains out of molehills. Since they are rewarded only when they find vulnerabilities, and the customer's expectations of value are automatically built on severity (it's human nature), it certainly behooves the security team to report even small issues as serious security holes.

    In our experience, this can drastically distort the perceived risk of identified security issues among both technicians and management, and it has even been known to cause knee-jerk reactions and unneeded panic when reports arrive that classify things like simple information leakage as “critical vulnerabilities”. Clearly, if the vendor is not extremely careful and mindful of ethical behavior among their teams, you can end up with a seriously skewed view of perceived versus real-world risk, driven primarily by the need to find issues to make the engagement profitable.

In my opinion, let's stick to plain old value. We can help you find and manage your risk. We focus on specific technical vulnerabilities in networks, systems, applications and operations that attackers could exploit to cause you damage. The damage we prevent saves your company money. Look for a service vendor that provides this type of value, and you'll come out ahead in the long run.

Want Rapid Feedback? Try a Web Application Security Scan!

A web application security scan is a great way to get rapid feedback on the security and health of your web-based applications.

You can think of the web application scan as a sort of vulnerability assessment “lite”. It leverages the power and flexibility of automated application scanning tools to do a quick and effective baseline test of your application. It is very good at finding web server configuration issues, information leakage issues and the basic SQL injection and cross-site scripting vulnerabilities so commonly exploited by attackers today.
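
Commercial scanners do far more than this, but as a rough sketch of what a single automated check looks like: send a marker payload to a parameter and see whether it comes back unencoded (a reflected XSS signal) or triggers a database error string (a crude SQL injection signal). The URL and parameter name are placeholders, and the requests library is an assumed dependency.

```python
# Toy version of a single scanner check -- only run against systems you are authorized to test.
# Requires: pip install requests
import requests

XSS_MARKER = "<script>alert('xss-test')</script>"
SQLI_PROBE = "'"
SQL_ERRORS = ("sql syntax", "mysql_fetch", "ora-01756", "sqlstate")

def quick_check(url, param):
    """Very rough reflected-XSS and error-based SQLi probe for one parameter."""
    findings = []

    # Does the marker come back unencoded in the response body?
    resp = requests.get(url, params={param: XSS_MARKER}, timeout=10)
    if XSS_MARKER in resp.text:
        findings.append(f"possible reflected XSS in parameter '{param}'")

    # Does a lone quote produce a database error message?
    resp = requests.get(url, params={param: SQLI_PROBE}, timeout=10)
    body = resp.text.lower()
    if any(err in body for err in SQL_ERRORS):
        findings.append(f"possible SQL error leakage via parameter '{param}'")

    return findings

if __name__ == "__main__":
    for issue in quick_check("http://test.example.com/search", "q"):
        print("FINDING:", issue)
```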

This service fits particularly well for non-critical web applications that don’t process private information or for internal-facing applications with little access to private data. It is a quick and inexpensive way to perform due diligence on these applications that aren’t key operational focal points.

Many of our clients have been using the application scanning service to test second-line applications and ensure that they don't have injection or XSS issues that could impact PCI compliance or other regulatory standings. This gives them a less costly way to test the basics than a full-blown application assessment or penetration test.

While this service finds a number of issues and potential holes, we caution against using it in place of a full application assessment or penetration test if the web application in question processes critical or highly sensitive information. Certainly, these deeper offerings find a great many more vulnerabilities, and they often reveal subtle issues that automated scans will not identify.

If you are interested in learning more about the application scanning service, please fill out the contact form and enter “Web App Scan” in the “Questions” box. We can help you identify whether these services are a good fit for your needs and are more than happy to provide more detail, pricing and other information about web application scans.

The Detection in Depth Focus Model & Example

Furthering the discussion on how detection in depth works, here is an example that folks have been asking me to demonstrate. This diagram shows an asset, in this case PII in a database that is accessed via a PHP web application. The diagram shows the detection controls in place to protect the data at each focus level. As explained in the earlier maturity model post, the closer a detection control is to the asset, the higher its signal-to-noise ratio should be and the more relevant its data should be to the asset being protected (Huston's Postulate).
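
To make the innermost, closest-to-the-asset layer concrete, here is one hypothetical example of a high-signal control at that focus level: seed the PII table with a decoy (honeytoken) record that no legitimate workflow ever touches, then watch the application's access log for any request against it. The log path, the decoy identifier and the alerting are placeholder assumptions, not part of the diagram itself.

```python
# Hypothetical close-to-the-asset control: alert on access to a decoy PII record.
# Any hit on this honeytoken should be a near-certain indicator (high signal, low noise).
import time

ACCESS_LOG = "/var/log/apache2/access.log"   # placeholder path
HONEYTOKEN = "customer_id=999999"            # decoy record no real workflow uses

def watch_for_honeytoken(log_path=ACCESS_LOG, token=HONEYTOKEN):
    """Tail the web access log and alert whenever the decoy record is requested."""
    with open(log_path, "r") as log:
        log.seek(0, 2)                       # start at the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            if token in line:
                print(f"ALERT: honeytoken accessed -> {line.strip()}")

if __name__ == "__main__":
    watch_for_honeytoken()
```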

Hopefully, this diagram helps folks see a working example of how detection in depth can be done and why it is not only important, but increasingly needed if we are going to turn the tide on cyber-crime.
 
As always, thanks for reading and feel free to engage with ideas in comments or seek me out on Twitter (@lbhuston) and let me know what you think. 

Detection in Depth Maturity Model

I have been discussing the idea of detection in depth pretty heavily lately. One of the biggest questions I have been getting is about the maturity of detection efforts and the effectiveness of various types of controls. Here is a quick diagram I have created to help discuss the various tools and where they fit into the framework of detection capability versus maturity/effectiveness.

The simple truth is this: the higher the signal-to-noise ratio of a detection initiative, the better the chance of catching the bad event. Detections layered together in various spots work better than single-layer controls. In most cases, the closer you get to an asset, the more nuanced and focused (and the higher the signal-to-noise ratio) the detection mechanisms should become.
 
For example, a script that detects new files containing “base64_decode()” on a web server is a much higher-signal control than a generic IDS at the perimeter capturing packets and parsing them against heuristics.
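
As a rough sketch of what such a script might look like (the web root path and file extension are assumptions; a production version would also track which files are new or changed):

```python
# Rough sketch of the "base64_decode() in the web root" check described above.
import os

WEB_ROOT = "/var/www/html"      # adjust to the server's actual document root
SUSPICIOUS = b"base64_decode("  # string rarely found in legitimate code on most sites

def find_suspicious_php(root=WEB_ROOT):
    """Return paths of PHP files under root that contain the suspicious string."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    if SUSPICIOUS in fh.read():
                        hits.append(path)
            except OSError:
                pass   # unreadable file; skip it
    return hits

if __name__ == "__main__":
    for path in find_suspicious_php():
        print("REVIEW:", path)
```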
 
When the close controls fire an alert, there had better be a clear and present danger. When the distant controls alert, there is likely to be more and more noise as the controls gain distance from the asset. Technology, detection focus and configuration also matter A LOT.

All of that said, detection only works if you can actually DO something with the data. Alarms that fire without anyone acting on them are pretty much useless. Response is what makes detection in depth a worthwhile, and necessary, investment.

How To Increase Cooperation Between SCADA/ICS and the IT Department

Here is a mind map of a set of ideas for increasing the cooperation, coordination and socialization between the ICS/SCADA operations team and their traditional IT counterparts. Last week, at the Ohio SCADA Security Symposium, this was identified as a common concern for organizations. As such, we wanted to provide a few ideas to consider in this area. Let us know in the comments or on Twitter if you have any additional ideas, and we'll get them added to a future version of the mind map. Click here to download the PDF.

Thoughts From The Ohio SCADA Security Symposium

This week, I had the distinct pleasure of playing MC at the 1st annual Ohio SCADA/ICS Security Symposium. The event was held in Columbus, Ohio and offered a variety of speakers from federal, state and local government, as well as panels on controls that work and projects that failed, with representatives from power, gas, water and manufacturing. These were powerful discussions, and the content was eye-opening for many of the participants.

First, I would like to say thank you to all who were involved in the symposium. Their efforts in organizing, executing and attending the event are greatly appreciated. Feedback about the event has been spectacular, and we all look forward to participating again next year.
 
That said, one of the largest identified issues among the conversations at the symposium was the idea that cooperation and coordination between control network operators and engineers and their peers on the traditional business-oriented IT staff is difficult, if not nearly impossible.
 
This seems to be a common conundrum that many organizations are facing. How do you get these two sides to talk? How do you get them to participate in conversations about best practices and technology advances in their respective areas? It seems that even though these two camps share similar architectures, common dependencies and often similar skill sets, those things are still not enough to bring them together.
 
In the spirit of the symposium, and of the open conversation it encouraged, I would like to ask for your input on this topic. What does your organization do to facilitate open communication between these two groups? What works for your teams? If you haven't had success, what have you tried and why do you think it failed? Please feel free to discuss in the comments, on the OhioSCADA group on LinkedIn, or reach out to me personally on Twitter (@lbhuston).
 
As always, thanks for reading, and I look forward to the conversation that follows. Maybe together we can identify some strategies that work and bridge the gap between these two stakeholder groups. Clearly, from the discussions at the symposium, fixing this would go a long way toward improving the security posture and operational capabilities of our environments.

Why a Data Flow Map Will Make Your Life Easier

It’s impossible to protect everything in your environment if you don’t know what’s there. All system components and their dependencies need to be identified. This isn’t a mere inventory listing. Adding the dependencies and trust relationships is where the effort pays off.

This information is useful in many ways:

  • If Server A is compromised, incident responders can quickly assess what other components may have been affected by reviewing its trust relationships (see the sketch after this list)
  • Having a clear depiction of component dependencies eases the re-architecture process, allowing for faster, more efficient upgrades
  • Creating a physical map in accordance with data flow and trust relationships ensures that components are not forgotten
  • Categorizing system functions eases the enclaving process
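
As a small illustration of that first point, a data flow map can be kept as a simple dependency graph and queried during an incident. The component names below are made up; the idea is just that once trust relationships are mapped, "what else could be affected?" becomes a quick, mechanical question.

```python
# Toy data flow map: which components trust (accept connections/data from) which.
# Component names are purely illustrative.
TRUSTS = {
    "ServerA":     ["AppServer1", "BackupHost"],   # components that trust ServerA
    "AppServer1":  ["WebFrontEnd"],
    "BackupHost":  [],
    "WebFrontEnd": [],
}

def potentially_affected(compromised, graph=TRUSTS):
    """Walk trust relationships outward from a compromised component."""
    affected, queue = set(), [compromised]
    while queue:
        current = queue.pop()
        for downstream in graph.get(current, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

if __name__ == "__main__":
    print("If ServerA is compromised, also review:", potentially_affected("ServerA"))
```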

Don’t know where to start? It’s usually easiest to map one business process at a time. This enables everyone to better understand the current environment and data operations. Once the maps are completed, they must be updated periodically to reflect changes in the environment.

Click here to see an example of a Data Flow Map. The more you know, the better prepared you can be!

HoneyPoint Maturity Model

Many folks have asked for a quick review of the way HoneyPoint users progress as they grow their confidence in the product suite and in their capability to manage threat data. To help answer those questions and to show how some organizations use HoneyPoint beyond simple scan/probe detection, we put together this quick maturity model to act as a roadmap.
If you are interested in hearing more about a specific set of functions or capabilities, give us a call or drop us a line. We would be happy to walk you through the model or any of the specific items. HoneyPoint users, feel free to engage with support if some of this sparks a new idea for how your organization can deepen your own HoneyPoint use cases. Thanks for reading and stay safe out there!