Ask The Experts: Important SCADA Security Tips

This time the question comes from an online forum where we were approached about the MSI Expert’s Opinions on an interesting topic. Without further ado, here it is:

Question: In your opinion, what is the single most important question that security teams should be discussing with SCADA asset owners?

Adam Hostetler (@adamhos) replies:

Do your SCADA managers and IT staff have a culture of security? Many SCADA-reliant industries still have a weak one. This needs to change through ongoing education and training (such as the DHS training programs). That will help engineers and IT develop and deploy stronger network architectures and technologies to combat increasing SCADA risks in the future.

John Davis also weighed in: 

I would say the most important question to discuss with SCADA asset owners is this: do you have short-term, mid-term and long-term plans in place for integrating cyber-security and high-technology equipment into your industrial control systems? Industrial concerns and utilities have been computerizing and networking their SCADA systems for years now. This has allowed them to save money, time and manpower, and has increased their situational awareness and control flexibility.

However, industrial control systems are usually not very robust and are also very ‘dumb’. They often don’t have the bandwidth or processing power built into them for mechanisms like anti-virus software, IPS and event logging to work, and these systems are usually made to last for decades. This makes most industrial control systems extremely vulnerable to cyber-attack. And with these systems, availability is key. They need to work correctly and without interruption, or the consequences range from loss of revenue to personal injury or death.

So, it behooves those in charge of these systems to ensure that they are adequately protected from cyber-attack now and in the future. They are going to have to start by employing alternate security measures, such as monitoring, to secure systems in the short term. Concerns should then work closely with their SCADA equipment manufacturers, IT specialists, sister concerns and information security professionals to develop mid-term and long-term plans for smoothly and securely transitioning their industrial control systems into the cyber-world. Failure to do this planning will mean a chaotic future for manufacturers and utilities, and higher costs and inconveniences for us all.

What do you think? Let us know on Twitter (@microsolved) or drop us a line in the comments below.

Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against Windows Terminal Services exposed to the Internet. Many folks commented to me on Twitter about some of the things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best approach is to eliminate the exposure altogether by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication. 

Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins, or that of a specific jump host). This can go a long way toward minimizing the frequency of interaction with the attack surface by random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
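To make the allow-list idea concrete, here is a minimal Python sketch of the logic such a firewall rule encodes; the address ranges below are purely hypothetical examples, not recommendations:

```python
import ipaddress

# Hypothetical approved sources: an admin home range and a single jump host
ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.0/28"),   # example: admin home IP range
    ipaddress.ip_network("198.51.100.7/32"),  # example: dedicated jump host
]

def rdp_access_allowed(source_ip: str) -> bool:
    """Default deny: permit RDP only from explicitly approved ranges."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(rdp_access_allowed("203.0.113.5"))  # inside the admin range -> True
print(rdp_access_allowed("192.0.2.99"))   # random scanner -> False
```

In a real deployment this logic lives in the firewall rule base itself, of course; the sketch just shows the default-deny shape of the policy.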

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing of the tool showed it to be quite useful in determining the configuration of exposed Terminal Services and in creating a path for hardening them wherever deployed. (Keep in mind, it is likely useful to harden internal Terminal Services implementations on critical systems as well…)

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful to audit multiple customer implementations, as well as to check RDP exposures during penetration testing engagements. 

Thanks to Portcullis for making this tool available. Hopefully, between this tool to help harden your deployments and our advice on minimizing exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This video was randomly pulled from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.

Indexing Crawler Issues

A crawler is an indexing application that spiders hosts and puts the results into a search engine database. Like Google, Bing and other search engines, such systems continually seek out new content on the web and add it to the search engine database. Usually, these activities cause few issues for the sites being indexed, and in fact, over the years an etiquette system based on rules placed in the robots.txt file of a web site has emerged.

Robots.txt files provide a rule set for search engine behavior. They indicate which areas of a site a crawler may index and which sections are to be avoided. Usually this is used to protect overly dynamic areas of the site, where a crawler could encounter a variety of inputs or problems that cause bandwidth or application issues for the crawler, the web host, or both. 
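Python’s standard library can parse these rule sets, which makes for a quick way to see the honor system in action; the site layout and bot name below are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt shielding the overly dynamic parts of a site
ROBOTS_TXT = """\
User-agent: *
Disallow: /search
Disallow: /cart
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite crawler checks these rules before every request it makes
print(rp.can_fetch("ExampleBot", "https://www.example.com/articles/faq"))  # True
print(rp.can_fetch("ExampleBot", "https://www.example.com/search?q=x"))    # False
```

Crawlers that honor the file skip the disallowed paths entirely; ill-behaved ones simply never ask.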

Sadly, many web crawlers and index bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack reasons. Given the impact that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about one crawler in particular and its aggressive parsing, application interaction and deep site inspection techniques. It has clearly been identified as a search engine that does not respect the honor system of robots.txt. A Google search for the crawler’s name plus “ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by this crawler from a variety of IP ranges. In the majority of them, it either never requested the robots.txt file at all, or it simply ignored the contents of the file altogether. In fact, some of our HITME web applications have experienced the same high traffic-cost concerns that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting these scans represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook and in our attention span, as we continually parse their alert traffic out of our metrics.

Techniques for blocking this crawler more forcibly than robots.txt allows have emerged, and you can learn about some of them by searching online for blocking advice. The easiest, and what has proven to be an effective, way is to use .htaccess rules. We’ve also had some more modest success with forcibly returning redirects to requests with known URL parameters associated with the crawler, along with some level of success by blocking specific IPs associated with it via an ignore rule in HoneyPoint.
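As a hedged illustration of the .htaccess approach (the user-agent substring and the address range below are placeholders, not the actual crawler’s details), an Apache 2.2-style rule set might look like this:

```apacheconf
# Tag any request whose User-Agent matches the offending crawler (placeholder)
SetEnvIfNoCase User-Agent "BadCrawler" block_crawler

<Limit GET POST HEAD>
  Order Allow,Deny
  Allow from all
  # Deny tagged requests and a known-bad source range (example address)
  Deny from env=block_crawler
  Deny from 192.0.2.0/24
</Limit>
```

On Apache 2.4 and later, the same effect is achieved with `Require` directives instead of `Order`/`Allow`/`Deny`. Keep in mind that user-agent strings are trivially spoofed, so IP-based rules and HoneyPoint ignore rules remain useful backstops.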

If you are battling aggressive crawling and want some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!

Exposed Terminal Services Remains High Frequency Threat

Quickly reviewing the HITME data gathered from our global deployment of HoneyPoint continues to show that exposed Terminal Services (RDP) on port 3389 remains a high frequency threat. In terms of general contact with the attack surface of an exposed Terminal Server connection, direct probes and attacker interactions are seen on average approximately twice per hour. Given that metric, an organization that is using exposed Terminal Services for remote access or management/support may be experiencing upwards of 48 attacks per day against its exposed remote access tool. In many cases, when we conduct penetration testing of organizations using Terminal Services in this manner, remote compromise of that service is found to lead to high levels of access to the organization’s data, if not complete control of their systems.

Many organizations continue to use Terminal Services without tokens or VPN technologies in play. These organizations are usually solely dependent on the security of login/password combinations (which history shows to be a critical mistake) and the overall security of the Terminal Services code (which, despite a few critical issues, has a pretty fair record given its wide usage and intense scrutiny over the last decade). Clearly, deploying remote access and remote management tools behind VPN implementations or other forms of access control is greatly preferred. Additionally, strengthening Terminal Services authentication by requiring tokens or certificates is also highly suggested. Removing port 3389 exposures to the Internet will go a long way toward increasing the security of organizations dependent on RDP technology.

If you would like to discuss the metrics around port 3389 attacks in more detail, drop us a line or reach out on Twitter (@microsolved). You can also see some real time metrics gathered from the HITME by following @honeypoint on Twitter. You’ll see lots of 3389 scan and probe sources in the data stream.

Thanks for reading and until next time, stay safe out there!

Ask The Experts Series – Workstation Malware

This time around we had a question from a reader (thanks for the question!):

“My organization is very concerned about malware on desktop machines. We run anti-virus on all user systems but have difficulty keeping them clean and are still having outbreaks. What else can we do to keep infected machines from hurting us? –LW”

Phil Grimes (@grap3_ap3) responds:

In this day and age, preventing infection on desktop workstations is a losing battle. While Anti-virus and other measures can help protect the machine to some extent, the user is still the single greatest point of entry an attacker can leverage. Sadly, traditional means for prevention don’t apply to this attack vector, as tricking a user into clicking on the “dancing gnome” often launches attacks at levels our prevention solutions just can’t touch.

Realizing this is the first, and biggest step to success here.

Once we’ve embraced the fact that we need better detection and response mechanisms, we start to see how honeypots can help us, but also how creating better awareness among our users can be the greatest investment an organization makes in detection. Teach your people what “normal” looks like. Get them in the habit of looking for things that go against that norm. Then, get them to want to tell someone when they see these anomalies! A well-trained user base is the most efficient, effective, and reliable detection mechanism an organization can have. After that, learn how to respond when something goes wrong.

John Davis added: 

Some of the best things you can do to combat this problem are to implement good, restrictive egress filtering and to ensure that users have only those local administration rights to their workstations that they absolutely need.

There are different ways to implement egress filtering, but a big part of the most secure implementation is whitelisting. Whitelisting means that you start with a default deny of all outbound connections from your network, then only allow those things outbound that are specifically needed for business purposes. One of the ways that malware can infect user systems is Internet surfing. By strictly limiting the sites that users can visit, you can come close to eliminating this infection vector (although you are liable to get plenty of blowback from users – especially if you cut off visits to social networking sites).
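The shape of that policy can be sketched in a few lines of Python; the destination hosts and ports below are hypothetical stand-ins for whatever your business actually requires:

```python
# Default-deny egress: nothing leaves the network unless it is on the
# approved business list (all entries here are hypothetical examples).
ALLOWED_EGRESS = {
    ("payments.example-partner.com", 443),  # payment processor API
    ("mail.example.com", 587),              # corporate mail relay
    ("av-updates.example.net", 443),        # anti-virus signature updates
}

def egress_permitted(host: str, port: int) -> bool:
    """Whitelist check: only explicitly approved (host, port) pairs pass."""
    return (host, port) in ALLOWED_EGRESS

print(egress_permitted("mail.example.com", 587))     # approved -> True
print(egress_permitted("c2.badhost.example", 6667))  # denied by default -> False
```

The real control belongs on the egress firewall or proxy, naturally; the point of the sketch is that the default answer is “no,” which is also what frustrates much malware command and control traffic.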

Another malware infection vector is users downloading infected software applications to their machines on disks, or plugging infected portable devices such as USB keys and smart phones into their workstations. This can be entirely accidental on the part of the user, or may be done intentionally by hostile insiders such as employees or third-party service providers with access to facilities. So by physically or logically disabling users’ local administration rights to their machines, you can cut this infection vector to almost nil.

You still have to worry about email, though. Everybody needs to use email, and anti-virus software can’t stop some malware, such as zero-day exploits. So, for this vector (and for those users who still need Internet access and local admin rights to do their jobs), specific security training and incentive programs for good security practices can go a long way. After all, a motivated human is twice as likely to notice a security issue as any automated security solution.

Adam Hostetler also commented:

Ensure a policy for incident response exists, and that it meets NIST guidelines for handling malware infections. Take the stand that once hosts are infected, they are to be rebuilt and not “cleaned”. This will help prevent reinfection from hidden/uncleaned malware. Finally, work towards implementing full egress controls. This will help prevent malware from establishing command and control channels, as well as combat data leakage.

Got a question for the experts? If so, leave us a comment or drop us a line on Twitter (@microsolved). Until next time, stay safe out there! 

Handling Unknown Binaries Class Available

Recently, I taught a class on Handling Unknown Binaries to the local ISSA chapter and the feedback was excellent. I have talked to many folks who have asked if this class was available for their infosec teams, help desk folks and IT staff on a group by group basis. I am thrilled to announce today that the MSI team is making that same class available to companies and other groups.

The course abstract is as follows:

This is a hands-on class and a laptop is required (you will need either strings for Windows/Cygwin or regular Linux/OS X). This class is oriented towards assisting practitioners in covering the basics of how to handle and perform initial analyses of an unknown binary. The course will NOT cover reverse engineering or any disassembly, but will cover techniques and basic tools that let a security team member do a basic risk assessment on a binary executable or other file. Given the volume of malware, the various means of delivery, and rapidly changing threats, this session will deliver relevant and critical analytical training that will be useful to any information security team.
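To give a flavor of the kind of first-pass analysis the class covers (this sketch is our own illustration, not course material), hashing an unknown file and pulling printable strings from it, much as the `strings` utility does, is often the first step:

```python
import hashlib
import re

def triage(data: bytes, min_len: int = 4):
    """First-pass look at an unknown blob: hash it for lookups and
    sharing, then extract runs of printable ASCII characters."""
    digest = hashlib.sha256(data).hexdigest()
    pattern = rb"[ -~]{%d,}" % min_len  # printable ASCII runs, like strings(1)
    found = [m.decode("ascii") for m in re.findall(pattern, data)]
    return digest, found

# Toy "binary" with embedded indicators (made up purely for illustration)
sample = b"\x7fELF\x01\x02http://bad.example/payload\x00\x03cmd.exe\x00"
digest, strings = triage(sample)
print(strings)  # ['http://bad.example/payload', 'cmd.exe']
```

The hash can then be compared against known-malware databases, and strings such as URLs, file paths and command names often tell you quickly whether a file deserves deeper attention.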

The course is available for scheduling in early September and can be taught remotely via Webex or onsite for a large enough group. 

To learn more about this and other training that MSI can conduct, please drop us a line at info[at]microsolved[dot]com or give an account executive a call at (614) 351-1237. You can also engage with me directly on the content and other questions on Twitter (@lbhuston). 

As always, thanks for reading and stay safe out there.

CSO Online Interview

Our founder & CEO, Brent Huston (@lbhuston) just had a quick interview with CSO Online about the Gauss malware. Look for discussions with Brent later today or tomorrow on the CSO site. Our thanks to CSO Online for thinking of us!

Update 1: The article has been posted on CSO Online and you can find it here.

Brent would also like to point out that doing the basics of information security, and doing them well, will help reduce some of the stomach churning, hand wringing and knee-jerk reactions to hyped-up threats like these. “Applying the MSI 80/20 Rule of InfoSec throughout your organization will really give folks better results than trying to manage a constant flow of patches, updates, hotfixes and signature tuning,” Huston said.

Which Application Testing is Right for Your Organization?

Millions of people worldwide bank, shop, buy airline tickets, and perform research using the World Wide Web. Each transaction usually includes sharing private information such as names, addresses, phone numbers, credit card numbers, and passwords. They’re routinely transferred and stored in a variety of locations. Billions of dollars and millions of personal identities are at stake every day. In the past, security professionals thought firewalls, Secure Sockets Layer (SSL), patching, and privacy policies were enough to protect websites from hackers. Today, we know better.

Whatever your industry, you should have a consistent testing schedule completed by a security team. Scalable technology allows such a team to quickly and effectively identify your critical vulnerabilities and their root causes in nearly any type of system, application, device or implementation.

At MSI, our reporting presents clear, concise, action-oriented mitigation strategies that allow your organization to address the identified risks at the technical, management and executive levels.

There are several ways to strengthen your security posture. These strategies can help: application scanning, application security assessments, application penetration testing, and risk assessments.

Application scanning can provide an excellent and affordable way for organizations to meet the requirements of due diligence, especially for secondary, internal, well-controlled or non-critical applications.

Application security assessments can identify security problems, catalog their exposures, measure risk, and develop mitigation strategies that strengthen your applications for your customers. This is a more complete solution than a scan since it goes deeper into the architecture.

Application penetration testing uses tools and scripts to mine your systems for data and examine underlying session management and cryptography. Risk assessments review all policies and processes associated with the specific application, with the depth of review depending on the complexity of your organization.

In order to protect your organization against security breaches (which are only increasing in frequency), consider conducting an application scan, application security assessment, application penetration test, or risk assessment on a regular basis. If you need help deciding which choice is best for you, let us know. We’re here to help!

Ask the Security Experts: Facebook Security For Teenagers

We’re starting a new series: “Ask the Security Experts.” We’ll pose an information security question and our panel of experts will do their best to answer.


Our panel:

  • Adam Hostetler, Network Engineer, Security Analyst
  • Phil Grimes, Security Analyst
  • John Davis, Risk Management Engineer

Our Question

What should I tell my teenage children about privacy and security on Facebook?

Adam Hostetler:

Teach them how to use Facebook privacy settings. Go into the settings and explain how it works, and that they should only post updates and photos to their friends and not in public. Also, show them how to set their account so they can only be found by friends of friends. As for apps, be very careful about what Facebook apps they use, and pay attention to the permissions they request. For their account, always use a strong password. Do not give out account information to anyone (except parents). Lastly, they should always log out of the account when they are done. Never close the browser with the account still logged in.

Phil Grimes:

I fight this battle daily. I constantly remind my kids that what goes online now stays online forever. I have discussed privacy settings with them and give them little reminders that help them think about security and privacy online, at least in terms of posting info and pictures. It never hurts to remind them who I am and what I do for a living; they tend to think twice before posting.

As for the games, however, this is something that is almost impossible to combat in my house. I think I am the only person who does NOT play Facebook games. The keys here are simple. Accept the machines that play these games as lost assets. I image the disks so I can restore them quickly and easily, then cordon them off on their own network segment so WHEN they get popped, I can “turn and burn” to get them back online. This really works well for me, but another important factor is to NOT do anything sensitive from these machines. Luckily, my kids don’t do any online banking or anything like that. I have my wife conduct sensitive tasks through another machine.

John Davis:

I would say to watch the scams and traps that are strewn like land mines throughout the site. Watch the free give-aways, be wary of clicking on pictures and videos and look carefully at any messages that contain links or suggest web sites to visit. Also, be VERY careful about ‘friends’ of friends and other strangers that want to friend you or communicate with you. You very well may not be communicating with who you think you are. Finally, if you’re on Facebook frequently and have not been wary, chances are you have malware on your computer that hides itself and runs in the background where you are not aware of it. So be careful when using the site and scan your system frequently.

Raising Your Security Vision

If your security program is still focused on patching, responding to vulnerability scans and mitigating the monthly churn of product updates/hotfixes and the like, then you need to change.

Sure, patching is important, but that should truly NOT be the focus of your information security initiative.

Today, organizations need to raise their vision. They need to be moving to automate as much of prevention and the baseline processes of detection as possible. They need to be focused on doing the basics better. Hardening, nuance detection, incident investigation/isolation/mitigation: these are the things they should be getting better at.

Their increased vision and maturity should let them move away from vulnerability-focused security and instead concentrate their efforts on managing risk. They need to know where their assets are, what controls are in place and what can be done to mitigate issues quickly. They also should gain detection capability where needed and know how to respond when something bad happens.

Check out tools like our 80/20 Rule for Information Security for tips on how to get there. Feel free to reach out and engage us in discussion as well (@lbhuston). We would be happy to set up a call with our security experts to discuss your particular needs and how we can help you get farther faster.

As always, thanks for reading and stay safe out there!