About Brent Huston

I am the CEO of MicroSolved, Inc. and a security evangelist. I have spent the last 20+ years working to make the Internet safer for everyone on a global scale. I believe the Internet has the capability to contribute to the next great leap for mankind, and I want to help make that happen!

Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against Windows Terminal Services exposed to the Internet. Many folks on Twitter shared thoughts with me about things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best suggestion is to eliminate the exposures altogether by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication.

Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins or that of a specific jump host, etc.). This can go a long way toward minimizing the frequency of interaction with the attack surfaces by random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
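
As a minimal sketch of what those rules might look like on a Linux gateway (iptables is our assumption here; the same logic applies to any firewall product, and the addresses below are placeholders for your own admin and jump host ranges):

    # Allow RDP (3389/TCP) only from known admin and jump host addresses.
    iptables -A INPUT -p tcp --dport 3389 -s 192.0.2.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 3389 -s 198.51.100.0/24 -j ACCEPT
    # Drop everything else aimed at Terminal Services.
    iptables -A INPUT -p tcp --dport 3389 -j DROP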

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing of the tool showed it to be quite useful in determining the configuration of exposed Terminal Services and in creating a path for hardening them wherever deployed. (Keep in mind, it is likely useful to harden internal Terminal Services implementations on critical systems as well…)

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful to audit multiple customer implementations, as well as to check RDP exposures during penetration testing engagements. 

Thanks to Portcullis for making this tool available. Hopefully, between using this tool to harden your deployments and following our advice to minimize the exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This video was pulled at random from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.


Yandex.ru Indexing Crawler Issues

The yandex.ru crawler is an indexing application that spiders hosts and puts the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine database. Usually, these types of activities cause few issues for those whose sites are being indexed, and in fact, over the years an etiquette system based on rules placed in a web site’s robots.txt file has emerged.

Robots.txt files provide a rule set for search engine behavior. They indicate what areas of a site a crawler may index and what sections of the site are to be avoided. Usually this is used to protect overly dynamic areas of the site, where a crawler could encounter a variety of problems or inputs that cause bandwidth or application issues for the crawler, the web host or both.
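
For example, a minimal robots.txt (with hypothetical paths) might look like this:

    # Let crawlers index the site, but keep them out of dynamic
    # areas that are expensive to generate.
    User-agent: *
    Disallow: /search/
    Disallow: /cgi-bin/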

Sadly, many web crawlers and index bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack reasons. Given the impacts that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about yandex.ru and its aggressive parsing, application interaction and deep site inspection techniques. It has clearly been identified as a search engine that does not seem to respect the honor system of robots.txt. A Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, the crawler either never requested the robots.txt file at all, or it simply ignored the contents of the file altogether. In fact, some of our HITME web applications have experienced the same high traffic cost concerns that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting the scans of yandex.ru represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook, and a drain on our attention span as we continually parse their alert traffic out of our metrics.

Techniques for blocking yandex.ru more forcibly than robots.txt have emerged. You can learn about some of them by searching for “blocking yandex.ru”. The easiest approach, and one that has proven effective, is to use .htaccess rules. We’ve also had some more modest success with forcibly returning redirects to requests with known URL parameters associated with yandex.ru, along with some level of success by blocking specific IPs associated with them via an ignore rule in HoneyPoint.
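
As a rough sketch of the .htaccess approach (assuming Apache 2.2-style directives and that the crawler identifies itself with “Yandex” in its User-Agent header; adjust the pattern to the agent strings you actually see in your logs):

    # Deny requests whose User-Agent matches the Yandex crawler.
    SetEnvIfNoCase User-Agent "Yandex" bad_bot
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot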

If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!

Exposed Terminal Services Remains High Frequency Threat

Quickly reviewing the HITME data gathered from our global deployment of HoneyPoint continues to show that exposed Terminal Services (RDP) on port 3389 remains a high frequency threat. In terms of general contact with the attack surface of an exposed Terminal Server connection, direct probes and attacker interaction are seen on average approximately twice per hour. Given that metric, an organization that uses exposed Terminal Services for remote access or management/support may be experiencing upwards of 48 attacks per day against its exposed remote access tool. In many cases, when we conduct penetration testing of organizations using Terminal Services in this manner, remote compromise of that service is found to lead to high levels of access to the organization’s data, if not complete control of their systems.
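
As a back-of-the-envelope illustration of that arithmetic, here is a small Python sketch that derives hourly and daily rates from observed probe timestamps (the sample values are hypothetical stand-ins for parsed log data):

    # Estimate probe frequency from observed 3389/TCP probe timestamps.
    from datetime import datetime

    probes = [
        datetime(2012, 8, 1, 0, 12),
        datetime(2012, 8, 1, 0, 48),
        datetime(2012, 8, 1, 1, 30),
        # ...one entry per observed probe...
        datetime(2012, 8, 1, 23, 55),
    ]

    span_hours = (max(probes) - min(probes)).total_seconds() / 3600
    rate = len(probes) / span_hours
    print(f"{rate:.1f} probes/hour -> roughly {rate * 24:.0f} probes/day")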

Many organizations continue to use Terminal Services without tokens or VPN technologies in play. These organizations are usually solely dependent on the security of login/password combinations (which history shows to be a critical mistake) and the overall security of the Terminal Services code (which, despite a few critical issues, has a pretty fair record given its wide usage and the intense scrutiny it has received over the last decade). Clearly, deploying remote access and remote management tools behind VPN implementations or other forms of access control is greatly preferred. Additionally, strengthening Terminal Services authentication controls by requiring tokens or certificates is also highly suggested. Removing port 3389 exposures to the Internet will go a long way toward increasing the security of organizations dependent on RDP technology.

If you would like to discuss the metrics around port 3389 attacks in more detail, drop us a line or reach out on Twitter (@microsolved). You can also see some real time metrics gathered from the HITME by following @honeypoint on Twitter. You’ll see lots of 3389 scan and probe sources in the data stream.

Thanks for reading and until next time, stay safe out there!

Ask The Experts Series – Workstation Malware

This time around we had a question from a reader (thanks for the question!):

“My organization is very concerned about malware on desktop machines. We run anti-virus on all user systems but have difficulty keeping them clean and are still having outbreaks. What else can we do to keep infected machines from hurting us? –LW”

Phil Grimes (@grap3_ap3) responds:

In this day and age, preventing infection on desktop workstations is a losing battle. While anti-virus and other measures can help protect the machine to some extent, the user is still the single greatest point of entry an attacker can leverage. Sadly, traditional means of prevention don’t apply to this attack vector, as tricking a user into clicking on the “dancing gnome” often launches attacks at levels our prevention solutions just can’t touch.

Realizing this is the first, and biggest step to success here.

Once we’ve embraced the fact that we need better detection and response mechanisms, we start to see how honeypots can help us, but also how creating better awareness among our users can be the greatest investment an organization might make in detection. Teach your people what “normal” looks like. Get them in the habit of looking for things that go against that norm. Then, get them to want to tell someone when they see these anomalies! A well-trained user base is the most efficient, effective, and reliable detection mechanism an organization can have. After that, learn how to respond when something goes wrong.

John Davis added: 

Some of the best things you can do to combat this problem are to implement good, restrictive egress filtering and to ensure that users have only those local administration rights to their workstations that they absolutely need.

There are different ways to implement egress filtering, but a big part of the most secure implementation is whitelisting. Whitelisting means that you start with a default deny of all outbound connections from your network, then allow outbound only those things that are specifically needed for business purposes. One of the ways that malware can infect user systems is by Internet surfing. By strictly limiting the sites that users can visit, you can come close to eliminating this infection vector (although you are liable to get plenty of blowback from users – especially if you cut off visits to social networking sites).
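
As a minimal sketch of what default-deny egress filtering can look like on a Linux gateway (iptables assumed; the ports and addresses below are placeholders for whatever your business actually requires):

    # Default deny: nothing leaves the network unless explicitly allowed.
    iptables -P FORWARD DROP
    # DNS, but only to the internal resolver:
    iptables -A FORWARD -p udp --dport 53 -d 192.0.2.53 -j ACCEPT
    # Web browsing, but only through the filtering proxy:
    iptables -A FORWARD -p tcp --dport 3128 -d 192.0.2.80 -j ACCEPT
    # Outbound mail, but only from the mail server:
    iptables -A FORWARD -p tcp --dport 25 -s 192.0.2.25 -j ACCEPT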

Another malware infection vector is users downloading infected software applications to their machines on disks or plugging infected portable devices such as USB keys and smart phones into their workstations. This can be entirely accidental on the part of the user, or may be done intentionally by hostile insiders such as employees or third party service providers with access to facilities. By physically or logically disabling users’ local administration rights to their machines, you can cut this infection vector to almost nil.

You still have to worry about email, though. Everybody needs to use email, and antivirus software can’t stop some malware, such as zero-day exploits. So, for this vector (and for those users who still need Internet access and local admin rights to do their jobs), specific security training and incentive programs for good security practices can go a long way. After all, a motivated human is twice as likely as any automated security solution to notice a security issue.

Adam Hostetler also commented:

Ensure a policy for incident response exists, and that it meets NIST guidelines for handling malware infections. Take the stand that once hosts are infected, they are to be rebuilt, not “cleaned”. This will help prevent reinfection from hidden or uncleaned malware. Finally, work toward implementing full egress controls. This will help prevent malware from establishing command and control channels, as well as combat data leakage.

Got a question for the experts? If so, leave us a comment or drop us a line on Twitter (@microsolved). Until next time, stay safe out there! 

Handling Unknown Binaries Class Available

Recently, I taught a class on Handling Unknown Binaries to the local ISSA chapter and the feedback was excellent. I have talked to many folks who have asked if this class was available for their infosec teams, help desk folks and IT staff on a group-by-group basis. I am thrilled to announce today that the MSI team is making that same class available to companies and other groups.

The course abstract is as follows:

This is a hands-on class and a laptop is required (you will need either strings for Windows/Cygwin or a regular Linux/OS X environment). This class is oriented toward assisting practitioners in covering the basics of how to handle and perform initial analyses of an unknown binary. The course will NOT cover reverse engineering or any disassembly, but will cover techniques and basic tools that let a security team member do a basic risk assessment on a binary executable or other file. Given the volume of malware, its various means of delivery, and rapidly changing threats, this session will deliver relevant and critical analytical training that will be useful to any information security team.
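
To give a small taste of the kind of basics the class covers (this is not the course material itself, just an illustrative sketch), a first-pass triage in Python might hash the file for threat intelligence lookups and pull printable strings from it:

    # Minimal first-pass triage of an unknown binary: hash it for lookups
    # against threat intel sources, then dump printable strings that may
    # reveal URLs, IPs, registry keys or other indicators.
    import hashlib
    import re
    import sys

    def triage(path):
        data = open(path, "rb").read()
        print("MD5:   ", hashlib.md5(data).hexdigest())
        print("SHA256:", hashlib.sha256(data).hexdigest())
        # Crude 'strings' pass: runs of 6+ printable ASCII characters.
        for s in re.findall(rb"[ -~]{6,}", data)[:25]:
            print(s.decode("ascii"))

    if __name__ == "__main__":
        triage(sys.argv[1])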

The course is available for scheduling in early September and can be taught remotely via Webex or onsite for a large enough group. 

To learn more about this and other training that MSI can conduct, please drop us a line at info[at]microsolved[dot]com or give an account executive a call at (614) 351-1237. You can also engage with me directly on the content and other questions on Twitter (@lbhuston). 

As always, thanks for reading and stay safe out there.

CSO Online Interview

Our founder & CEO, Brent Huston (@lbhuston) just had a quick interview with CSO Online about the Gauss malware. Look for discussions with Brent later today or tomorrow on the CSO site. Our thanks to CSO Online for thinking of us!

Update 1: The article has been posted on CSO Online and you can find it here.

Brent would also like to point out that doing the basics of information security, and doing them well, will help reduce some of the stomach churning, hand wringing and knee-jerk reactions to hyped up threats like these. “Applying the MSI 80/20 Rule of InfoSec throughout your organization will really give folks better results than trying to manage a constant flow of patches, updates, hot fixes and signature tuning,” Huston said.

Raising Your Security Vision

If your security program is still focused on patching, responding to vulnerability scans and mitigating the monthly churn of product updates/hotfixes and the like, then you need to change.

Sure, patching is important, but that should truly NOT be the focus of your information security initiative.

Today, organizations need to raise their vision. They need to move toward automating as much of prevention, and of the baseline processes of detection, as possible. They need to be focused on doing the basics better. Hardening, nuance detection, incident investigation/isolation/mitigation — these are the things they should be getting better at.
 
Their increased vision and maturity should let them move away from vulnerability-focused security and instead concentrate their efforts on managing risk. They need to know where their assets are, what controls are in place, and what can be done to mitigate issues quickly. They also should gain detection capability where needed and know how to respond when something bad happens.
 
Check out tools like our 80/20 Rule for Information Security for tips on how to get there. Feel free to reach out and engage us in discussion as well (@lbhuston). We would be happy to set up a call with our security experts to discuss your particular needs and how we can help you get farther, faster.
 
As always, thanks for reading and stay safe out there!

Security Experimentation with HoneyPoint

One of the best uses of HoneyPoint is using it to test your assumptions, model risk or otherwise perform experimentation.

If your management team would benefit from understanding how quickly a new web application will be targeted and attacked once deployed, a quick mock-up with HoneyPoint can give them that data. If you want to prove to the development team that attackers will find XSS-vulnerable apps, a quick publish of a HoneyPoint web app with the XSS vulnerability enabled will get you metrics to support your assertion.
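
HoneyPoint handles this out of the box, but to make the idea concrete, here is a rough, stand-alone sketch of the experiment in Python (this is not HoneyPoint, just a hypothetical illustration): publish a decoy endpoint and timestamp every request so you can measure time-to-first-probe.

    # NOT HoneyPoint -- a bare-bones illustration of the experiment:
    # stand up a decoy web endpoint and log each probe with a timestamp
    # and source address so you can build time-to-first-attack metrics.
    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DecoyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            print(f"{datetime.now(timezone.utc).isoformat()} "
                  f"{self.client_address[0]} {self.requestline}")
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"<html><body>Welcome</body></html>")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()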

That’s one of my favorite uses of HoneyPoint: to quickly, easily and safely build real world metrics that answer my questions. Sure, it’s a great tool for defense and detection. But I really love using it to scratch my own itch for real world data. 

Don’t Freak Out, It’s Only Defcon

It’s that time of year again. The time of year when the hype cycle gets its yearly injection of fear and hysteria from overheated, overstimulated, dehydrated journalists baking in the Las Vegas summer heat. It happens every year around this time: the journalists and bloggers flock to the desert to hear stories of emerging hacks, security researcher data, marketing spin and a ton of first-person encounters with party goers and the followers of the chaos that has become Defcon.

It is, after all, one of the largest, oldest and most attended events in the hacker community. It mixes technology, business, hacking, marketing, drinking, oddity and a sprinkle of carnival into an extreme-flavored cocktail fed to the public in a biggie-sized martini glass that could only be made in the playground that is Las Vegas.

There are a ton of legitimate researchers there, to be sure. There is an army of folks who represent a large part of the core of the infosec hacker world brain trust. They will be consistently demonstrating their points throughout the events of BlackHat and Defcon. You can tell them apart from the crowd and scene mongers by the rational approaches they take. You can find them throughout the year, presenting, writing, coding and educating the world on information security, risk and other relevant topics. Extending from them, you can also find all of the extremes that such events attract. These are the “hackers” with green hair, destroying casino equipment, throwing dye and shampoo into the fountains, breaking glass in the pool and otherwise acting as if they have never been outside of the jungle before. These are the ones that the journalists LOVE to talk about: the extreme views within the community, the irrational party goer who offers a single tech tidbit along with a smorgasbord of rhetoric. These interviews spin up the hype cycle. These interviews sell subscriptions, papers and advertising. Sadly, they also represent a tiny percentage of the truth and value of the gatherings in Vegas.
 
Over the next week or so, you’ll see many stories aimed at telling you how weak the security is on everything from hotel door locks to the power grid. The press will spin up a bunch of hype about the latest hacks, zero-day exploits and other fearsome “cyber stuff”. Then, when the conference is over and the journalists and circus leave Las Vegas, everyone will come back and have to continue to make the same rational, risk-based decisions about what to do about this issue and that issue.
 
I mention this not to disparage the events in Vegas or the participants. I think the world of them and call many of them my personal friends and partners. However, I do want to prep folks for the press cycle ahead. Take the over-the-top stories and breathless zero-day announcements in the coming weeks with a grain of salt. Treat the tales of drunken hackers menacing Vegas hotels, changing signs and doing social engineering attacks in front of audiences as the human interest stories they are. They are good for amusement and awareness, maybe even for piquing the interest of line management folks to get a first-hand view, but they are NOT really useful as a lens for viewing your organization’s risk or the steps you should be taking to protect your data. Instead, stick to the basics. Do them well. Stay aware, but rational, when the hype cycle spins up and hacks of all sorts are on the front pages of papers and running as headlines at the bottom of TV news channels. Rational responses and analysis are your best defense against whatever comes out of the hacker gathering in the desert, or wherever they happen to meet up in the future.
 
Until next time, stay safe out there, and if you happen to be in Vegas, stay hydrated. The desert winds are like a furnace and they will bake you in no time!

Smart Grid Security is Getting Better – But Still Has Room to Improve

Our testing lab has spent quite a bit of time over the last several years testing smart grid devices. We are very happy to say that we are seeing strong improvement in the general security controls in this space.

Many of the newer smart grid systems we are testing have implemented good basic controls to prevent many of the attacks we used to see in these devices in the early days of the smart grid movement. Today, for example, most of the devices we test have implemented at least basic controls for firmware update signing, which was almost unheard of when we first started testing these systems years ago.
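
No particular vendor’s scheme is described here, but as a generic sketch of what update signing buys you, a device-side check might verify an RSA signature over the image before flashing it (assuming Python and the third-party cryptography package; the file names and signing scheme are illustrative):

    # Generic firmware signature check sketch: refuse to flash an image
    # unless its RSA signature verifies against the vendor's public key.
    # File names and the exact signing scheme are illustrative assumptions.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    pub = serialization.load_pem_public_key(open("vendor_pub.pem", "rb").read())
    firmware = open("firmware.bin", "rb").read()
    sig = open("firmware.sig", "rb").read()

    try:
        pub.verify(sig, firmware, padding.PKCS1v15(), hashes.SHA256())
        print("Signature OK -- image may be flashed.")
    except InvalidSignature:
        print("REJECT: firmware image fails signature verification.")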

Other improvements in smart grid systems are also easily identifiable. Cryptographic protocols and hardened system configurations are two more controls that have become pretty well standard in the space. The days of seeing silly plain-text protocols between the field devices, or between the field deployments and the upstream control systems, are pretty well gone (there are still SOME exceptions, albeit fewer…).
 
Zigbee and the communications of customer premises equipment to the smart grid utility systems are getting somewhat better (still little crypto and a lot of crappy bounds checking), but still have a ways to go. Much of this won’t get fixed until the various protocols are revised and upgraded, but some of the easy, low-hanging vulnerability fruit IS starting to get cleaned up, and as CPU capability increases on customer devices, we are starting to see more folks using SSL overlays and other forms of basic crypto at the application layer. All of this is pretty much a good thing.
 
There are still some strong areas for improvement in the smart grid space. We still have more than a few battles to fight over encryption versus encoding, modern development security, JTAG protection, input validation and the usual application security shortcomings that the web and other application development platforms are still struggling with.
 
Default passwords, crypto keys and configurations still abound. Threat modeling needs to be done in deeper detail, and the threat metrics need to be better socialized among the relevant stakeholders. There is still a plethora of policy/process/procedure development to be done. We need better standards, reporting mechanisms, alerting capabilities, analysis of single points of failure and contingency planning, and a wide variety of devices and applications still need to be thoroughly tested in a security lab. In fact, so many new applications, systems and devices are coming into the smart grid market space that there is a backlog of stuff to test. That work needs to be done to harden these devices while their footprint is still small enough to manage, mitigate and mature.
 
The good news is that things are getting better in the smart grid security world. Changes are coming through the pipeline of government regulation. Standards are being built. Vendors are doing the hard, gut-check work of having devices tested and vulnerabilities mitigated or minimized. All of this culminates in one of the primary goals of MicroSolved for the last two decades – to make the world and the Internet safer for all of you.
 
As always, thanks for reading and stay safe out there!