Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against Windows Terminal Services exposed to the Internet. Many folks on Twitter commented to me about some of the things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best suggestions are to eliminate the exposures altogether by placing Terminal Services behind VPN connections or by implementing tokens/multi-factor authentication.

Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins or that of a specific jump host, etc.) This can go a long way to minimizing the frequency of interaction with the attack surfaces by random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
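
As a concrete illustration, here is a minimal sketch of such a rule on the Windows host itself using the built-in firewall (the rule name and the 203.0.113.x addresses are placeholders for your own admin or jump host IPs; this assumes the default inbound policy is set to block and that any broad built-in "Remote Desktop" allow rule has been disabled):

    netsh advfirewall firewall add rule name="RDP - Admin IPs only" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.10,203.0.113.11

An equivalent filter at the perimeter firewall is preferable where possible, since it keeps the probes off your network entirely.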

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing showed it to be quite useful in determining the configuration of exposed Terminal Services and in charting a path for hardening them wherever they are deployed. (Keep in mind, it is likely worthwhile to harden internal Terminal Services implementations on critical systems as well…)

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful to audit multiple customer implementations, as well as to check RDP exposures during penetration testing engagements. 
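
Usage is straightforward; a hedged sketch follows (the exact option names may vary between versions of the script, so check its built-in help before relying on them):

    perl rdp-sec-check.pl 192.0.2.10
    perl rdp-sec-check.pl --file targets.txt

The output flags items such as which security protocols the service supports (classic RDP encryption vs. SSL/NLA) and the negotiated encryption level, each of which maps directly to a hardening setting on the server.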

Thanks to Portcullis for making this tool available. Hopefully between this tool to harden your deployments and our advice to minimize the exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry-level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low-hanging-fruit attack vectors. This video was randomly pulled from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.


Yandex.ru Indexing Crawler Issues

The yandex.ru crawler is an indexing application that spiders hosts and puts the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds it to the search engine database. Usually, these types of activities cause few issues for those whose sites are being indexed, and in fact, over the years an etiquette system based on rules placed in the robots.txt file of a web site has emerged.

Robots.txt files provide a rule set for search engine behaviors. They indicate what areas of a site a crawler may index and what sections of the site are to be avoided. Usually this is used to protect overly dynamic areas of the site where a crawler could encounter a variety of problems or inputs that can have either bandwidth or application issues for either the crawler, the web host or both. 
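
As a quick illustration, a minimal robots.txt looks like the sketch below (the paths are placeholders; “Yandex” is the user-agent token Yandex documents for its crawler):

    User-agent: *
    Disallow: /search
    Disallow: /cart

    User-agent: Yandex
    Disallow: /

The first block asks all crawlers to skip the dynamic areas of the site; the second asks Yandex’s crawler to stay out entirely. As discussed below, though, compliance is strictly voluntary.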

Sadly, many web crawlers and index bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack reasons. Given the impacts that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about yandex.ru and their aggressive parsing, application interaction and deep site inspection techniques. They clearly have been identified as a search engine that does not seem to respect the honor system of robots.txt. A Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, the crawler either never requested the robots.txt file at all, or it simply ignored the contents of the file altogether. In fact, some of our HITME web applications have experienced the same high traffic costs that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting the scans of yandex.ru represents some 30+% of the total web traffic observed by the HITME end point. From our standpoint, that’s a pain in the pocketbook, and continually parsing their alert traffic out of our metrics is a drain on our attention as well.

Techniques for blocking yandex.ru more forcibly than robots.txt have emerged. You can learn about some of them by searching “blocking yandex.ru”. The easiest approach, and one that has proven effective, is to use .htaccess rules. We’ve also had some more modest success with forcibly returning redirects to requests with known URL parameters associated with yandex.ru, along with some level of success in blocking specific IPs associated with them via an ignore rule in HoneyPoint.
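
A minimal .htaccess sketch follows (Apache 2.2-style directives; adjust for your server version and test before deploying). It denies any request whose User-Agent header identifies the Yandex crawler:

    SetEnvIfNoCase User-Agent "Yandex" block_bot
    Order Allow,Deny
    Allow from all
    Deny from env=block_bot

Keep in mind that User-Agent strings are trivially spoofed, so treat this as traffic hygiene rather than a security control.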

If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!

Exposed Terminal Services Remains High Frequency Threat

Quickly reviewing the HITME data gathered from our global deployment of HoneyPoint continues to show that exposed Terminal Services (RDP) on port 3389 remains a high frequency threat. In terms of general contact with the attack surface of an exposed Terminal Server connection, direct probes and attacker interactions are seen, on average, approximately twice per hour. Given that metric, an organization that uses exposed Terminal Services for remote access or management/support may be experiencing upwards of 48 attacks per day against its exposed remote access tool. In many cases, when we conduct penetration testing of organizations using Terminal Services in this manner, remote compromise of that service is found to lead to high levels of access to the organization’s data, if not complete control of their systems.

Many organizations continue to use Terminal Services without tokens or VPN technologies in play. These organizations are usually solely dependent on the security of login/password combinations (which history shows to be a critical mistake) and the overall security of the Terminal Services code (which, despite a few critical issues, has a pretty fair record given its wide usage and intense scrutiny over the last decade). Clearly, deploying remote access and remote management tools behind VPN implementations or other forms of access control is greatly preferred. Additionally, upping Terminal Services authentication controls by requiring tokens or certificates is also highly suggested. Removing port 3389 exposures to the Internet will go a long way to increasing the security of organizations dependent on RDP technology.
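
Where exposed RDP cannot be removed immediately, one commonly recommended interim hardening step is to require Network Level Authentication and TLS on the listener. A minimal PowerShell sketch follows (the registry path is the standard Terminal Services listener location, but verify the values against documentation for your Windows version; this supplements, not replaces, the VPN and token guidance above):

    # Require Network Level Authentication before a session is established
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'UserAuthentication' -Value 1
    # Require the TLS/SSL security layer rather than classic RDP encryption
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SecurityLayer' -Value 2

These are the same settings that a tool like rdp-sec-check will flag when they are absent.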

If you would like to discuss the metrics around port 3389 attacks in more detail, drop us a line or reach out on Twitter (@microsolved). You can also see some real time metrics gathered from the HITME by following @honeypoint on Twitter. You’ll see lots of 3389 scan and probe sources in the data stream.

Thanks for reading and until next time, stay safe out there!

Raising Your Security Vision

If your security program is still focused on patching, responding to vulnerability scans and mitigating the monthly churn of product updates/hotfixes and the like, then you need to change.

Sure, patching is important, but that should truly NOT be the focus of your information security initiative.

Today, organizations need to raise their vision. They need to be moving to automate as much of prevention, and the baseline processes of detection, as possible. They need to be focused on doing the basics better. Hardening, nuance detection, incident investigation/isolation/mitigation — these are the things they should be getting better at.
 
Their increased vision and maturity should let them move away from vulnerability-focused security and instead, concentrate their efforts on managing risk. They need to know where their assets are, what controls are in place and what can be done to mitigate issues quickly. They also should gain detection capability where needed and know how to respond when something bad happens. 
 
Check out tools like our 80/20 Rule for Information Security for tips on how to get there. Feel free to reach out and engage us in discussion as well. (@lbhuston) We would be happy to set up a call with our security experts to discuss your particular needs and how we can help you get farther faster.
 
As always, thanks for reading and stay safe out there!

Don’t Freak Out, It’s Only Defcon

It’s that time of year again. The time of year when the hype cycle gets its yearly injection of fear and hysteria from overheated, overstimulated, dehydrated journalists baking in the Las Vegas summer heat. It happens every year around this time, the journalists and bloggers flock to the desert to hear stories of emerging hacks, security researcher data, marketing spin and a ton of first person encounters with party goers and the followers of the chaos that has become Defcon.

It is, after all, one of the largest, oldest and most attended events in the hacker community. It mixes technology, business, hacking, marketing, drinking, oddity and a sprinkle of carnival into an extreme-flavored cocktail fed to the public in a biggie-sized martini glass that could only be made in the playground that is Las Vegas.

There are a ton of legitimate researchers there, to be sure. There is an army of folks who represent a large part of the core of the infosec hacker world brain trust. They will be consistently demonstrating their points throughout the events of BlackHat and Defcon. You can tell them apart from the crowd and scene mongers by the rational approaches they take. You can find them throughout the year, presenting, writing, coding and educating the world on information security, risk and other relevant topics. Extending out from them, you can also find all of the extremes that such events attract. These are the “hackers” with green hair, destroying casino equipment, throwing dye and shampoo into the fountains, breaking glass in the pool and otherwise acting as if they have never been outside of the jungle before. These are the ones the journalists LOVE to talk about: the extreme views within the community, the irrational party goers who offer a single tech tidbit along with a smorgasbord of rhetoric. These interviews spin up the hype cycle. These interviews sell subscriptions, papers and advertising. Sadly, they also represent a tiny percentage of the truth and value of the gatherings in Vegas.
 
Over the next week or so, you’ll see many stories aimed at telling you how weak the security is on everything from hotel door locks to the power grid. The press will spin up a bunch of hype about the latest hacks, zero day exploits and other fearsome “cyber stuff”. Then, when the conference is over and the journalists and circus leave Las Vegas, everyone will come back and have to continue to make the same rational, risk based decisions about what to do about this issue and that issue. 
 
I mention this not to disparage the events in Vegas or the participants. I think the world of them and call many my personal friends and partners. However, I do want to prep folks for the press cycle ahead. Take the over-the-top stories and breathless zero-day announcements in the coming weeks with a grain of salt. Regard the tales of drunken hackers menacing Vegas hotels, changing signs and doing social engineering attacks in front of audiences as human interest stories. They are good for amusement and awareness, maybe even for piquing the interest of line management folks to get a first-hand view, but they are NOT really useful as a lens for viewing your organization’s risk or the steps you should be taking to protect your data. Instead, stick to the basics. Do them well. Stay aware, but rational, when the hype cycle spins up and hacks of all sorts are on the front pages of papers and running as headlines at the bottom of TV news channels. Rational responses and analysis are your best defense against whatever comes out of the hacker gathering in the desert, or wherever they happen to meet up in the future.
 
Until next time, stay safe out there, and if you happen to be in Vegas, stay hydrated. The desert winds are like a furnace and they will bake you in no time!

3 Things Security Vendors Wished CIOs Knew

Brent Huston, CEO and Founder of MicroSolved, answered a few questions regarding CIOs and information security. If Brent could speak to a room full of CIOs, these are a few things he’d share:

1)  CIOs are often unaware of what assets their organization has and how they are protected.

One problem we continually run into is that CIOs often don’t know what assets they have, what’s critical and what isn’t. Even when they do, they often don’t have a good feel for the lifecycle of that critical data. Knowing what they have and how they currently protect it is a huge step forward for a CIO.

Does that mean the CIO has to be able to whip out a map? In a perfect world, yes. At a minimum, it means the CIO needs to be able to articulate that information to the vendor, particularly when we’re talking about nuanced protection. And if we’re talking about penetration testing, why not consider this: instead of testing the whole environment, let’s test the stuff that matters. CIOs need to effectively and clearly communicate where that stuff is, the systems it interacts with and what controls are in place today, so we can focus testing there or leverage those controls for detection.

2)  A lot of CIOs don’t have any idea of what their real threat profile looks like.

When you talk to a CIO about the threat, their image of a threat is either script kiddies sitting in the basement of their mom’s house, or they’re so deeply entrenched in the cyber-crime thing that they think of it as credit card theft. They haven’t reached the level where they have any measurement or understanding of the different levels of threats that are focused on them — and how their responses would vary. The problem is they then treat all threats as the same. 

They expend resources at a continual burn rate, so they’re probably using more resources than they need, and then, when something really bad happens (because they’re used to treating every threat like a minor thing), they don’t feel like they need to pay extra attention. I’d love to see a CIO grow their attention to the threat profile and be able to communicate that upwards and to us as a vendor.

3)  Some CIOs don’t understand the organization’s appetite for risk.

This is probably the hardest one. I love to meet with CIOs who already know their organization’s appetite for risk. It seems like many organizations, even those that should be far enough along and mature enough to understand an appetite for risk (I’m talking about critical infrastructures here), don’t understand it. They have no way to quantify or qualify risk and decide what is acceptable and what isn’t. There may be complex policies in place, and there are exceptions, but many CIOs don’t have a clear “line in the sand” to help them determine what to respond to.

These kinds of initiatives are growing, but that’s one of those things that separates mature, security-focused and risk-focused organizations from folks who haven’t moved into more of a risk and threat management mindset. Many folks are still managing at the vulnerability layer, i.e. “If X vendor releases a Y patch, and I need the Z team to apply it, then I’ll do it.” They think that’s the extent of their security effort.

 

To consider your security posture, why not take a look at our “80/20 Rule for Information Security” page? Did you know that 80% of an organization’s real information security comes from only 20% of the assets and effort put into the program? These 13 security projects will give your organization the most effective information security coverage for the least expenditure of time and resources.

Contact us if you have questions! We’ve seen how these projects have helped our clients and would love to help you!

Smart Grid Security is Getting Better – But Still Has Ways to Improve

Our testing lab has spent quite a bit of time over the last several years testing smart grid devices. We are very happy to say that we are seeing strong improvement in the general security controls in this space.

Many of the newer smart grid systems we test have implemented good basic controls to prevent many of the attacks we used to see in these devices in the early days of the smart grid movement. Today, for example, most of the devices we test have implemented at least basic controls for firmware update signing, which was almost unheard of when we first started testing these systems years ago.
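
As a concrete illustration of what firmware update signing buys you, here is a minimal sketch of the sign/verify steps using OpenSSL (file names are placeholders; real devices perform the verification in the bootloader or update agent, often against a key burned into hardware):

    # Vendor signs the image at build time with its private key
    openssl dgst -sha256 -sign vendor_private.pem -out firmware.sig firmware.bin

    # Device verifies the signature before flashing; a tampered image fails here
    openssl dgst -sha256 -verify vendor_public.pem -signature firmware.sig firmware.bin

An unsigned or modified image fails the verification step, which is exactly the class of attack the early, signature-less devices could not stop.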

Other improvements in the smart grid systems are also easily identifiable. Cryptographic protocols and hardened system configurations are two more controls that have become pretty well standard in the space. The days of seeing silly plain-text protocols between the field devices, or between the field deployments and the upstream control systems, are pretty well gone (there are still SOME exceptions, albeit fewer…).
 
ZigBee and the other communications between customer premise equipment and the smart grid utility systems are getting somewhat better (still little crypto and a lot of crappy bounds checking), but still have a ways to go. Much of this won’t get fixed until the various protocols are revised and upgraded, but some of the easy, low-hanging vulnerability fruit IS starting to get cleaned up, and as CPU capability increases on customer devices, we are starting to see more folks using SSL overlays and other forms of basic crypto at the application layer. All of this is pretty much a good thing.
 
There are still some strong areas for improvement in the smart grid space. We still have more than a few battles to fight over encryption versus encoding, modern development security, JTAG protection, input validation and the usual application security shortcomings that the web and other platforms for app development are still struggling with.
 
Default passwords, crypto keys and configurations still abound. Threat modeling needs to be done in deeper detail, and the threat metrics need to be better socialized among the relevant stakeholders. There is still a plethora of policy/process/procedure development to be done. We need better standards, reporting mechanisms, alerting capabilities, analysis of single points of failure and contingency planning, and a wide variety of devices and applications still need to be thoroughly tested in a security lab. In fact, so many new applications, systems and devices are coming into the smart grid market space that there is a backlog of stuff to test. That work needs to be done to harden these devices while their footprint is still small enough to manage, mitigate and mature.
 
The good news is that things are getting better in the smart grid security world. Changes are coming through the pipeline of government regulation. Standards are being built. Vendors are doing the hard, gut-check work of having devices tested and vulnerabilities mitigated or minimized. All of this culminates in one of the primary goals of MicroSolved for the last two decades – to make the world and the Internet safer for all of you.
 
As always, thanks for reading and stay safe out there!

Talking to Your Management Rationally About Malware

Malware drawing comparisons to Stuxnet is all the rage these days. CNN and other popular media outlets now run stories about new Trojans, viruses and exploits. Much of what is in the media is either hysteria, hype, confusion or outright wrong.
 
There are often nuggets of truth scattered about in the stories, but few of the fears and scenarios whipped into a frothy story have a rational bearing on reality, let alone your business. Nonetheless, executives and even end-users take this stuff in and start to talk about information security topics (which is usually a good thing), but without a rational view, they may use that information to make decisions without regard to risk or the exposures that truly matter to the organization.
 
This is where YOU come in. As an infosec practitioner, your job is to explain to folks in a rational way about the trends and topics in the news. You need to be able to discuss the new piece of malware they saw last night on the news and explain carefully, truthfully, and rationally how it might impact your organization.
 
You need to discuss the controls you have in place. You need to explain the recovery and response processes you have been honing over the last few years. You also need to carefully walk them through how attacks like this work, how your team would be able to detect it (or not), and what you need to be able to do in the future.
 
You need to do this without breathlessly going into detail about the newest evasion techniques it uses, how cool the new exploits are that it leverages, or otherwise spreading uncertainty or fear to your management team. Now, I am NOT suggesting you tell them you have everything under control if you don’t. However, I am suggesting that this conversation should be rational, fair and flat — and offer to come by their office later to discuss future enhancement capabilities and projects that could be funded to assist your team with defending against these and other threats in the future. Then, do it at a time when they have intellectual and emotional stability. 
 
You must also learn about these threats. Be ready to discuss them in real-world (non-IT-geek) business language. You have to be able to explain them clearly and concisely, including their rational impacts. If, for example, CNN is running a story about malware that destroys reactors or deletes records of uranium deposits, and your organization doesn’t own a reactor or track uranium, then explain that the impacts of the attack are not likely to be anything more than an annoyance to your organization, and offer to discuss it with them or present on the topic at a later time. Keep them up to date, but whatever you do, keep them rational and make sure that you precisely explain potential impacts clearly. If the worst outcome of a popular malware infection is that your network traffic would rise 12% for a 48-hour period and then drop back to previous levels when the malware doesn’t find what it’s looking for and deletes itself, explain that to them.
 
If the malware is designed to target and exfiltrate the secret sauce to your chicken nuggets, and that’s how your company derives income, then explain that to them in clear, unemotional terms and tell them what you are doing about it and how they can help. 
 
That’s about it. I think the point is clear, but I will repeat it again. Explain new threats rationally to your management when they ask. Share with them realistic impacts, what you are doing about them and how they can help. Offer to give them a deep dive at a later time when they are emotionally and intellectually stable. Avoid the FUD and stick to the facts. You will be doing yourself, your organization, your profession, and maybe even the world a big favor in doing so.
 
Thanks for reading!

Audio Blog Post: Twitter Favorites

We’re kicking off the week by talking about some of our favorite feeds on Twitter!

Brent Huston, CEO and Security Evangelist for MicroSolved, Inc., interviews Chris Lay, Account Executive, and Mary Rose Maguire, Marketing Communication Specialist, about their favorite kinds of tweets.

We like Twitter to keep up with other security professionals to discover what’s trending. It’s a great way to exchange quick information and alert others when a security issue arises. Plus, our #HITME stream through our MSI HoneyPoint Feed Twitter account has already helped other organizations by alerting them to suspicious activity caught on various ports.

If you’d like to follow the MSI crew, here we are: @lbhuston, @microsolved and @honeypoint.

Here are a few of our favorites we mentioned:

Click Here To Listen To The Audio Blog Post!


Hooray! An Open-Source Password Analyzer Tool!

I’m one of the resident “Password Hawks” in our office. Our techs consistently tell people to create stronger passwords because weak passwords are still one of the most common ways a hacker is able to infiltrate a network.

However, we live in an age where it’s not just hackers who are trying to steal an organization’s data. There are also a variety of malcontents who simply want to hack into someone’s account in order to embarrass them, confirm something negative about them, or be a nuisance by sending spam.

This is why it is important to create a strong password: one that will not be easily cracked.

Enter password analyzer tools. Sophos’ “Naked Security” blog posted a great article today about the often misleading security policies of popular online social sites. Developer Cameron Morris discovered that if he followed one social site’s policy, he actually created a more easily “crackable” password than the one they deemed weak.

About three years ago, developer Cameron Morris had a personal epiphany about passwords, he recently told ZDNet’s John Fontana: The time it takes to crack a password is the only true measure of its worth.

Read the rest of the article here.
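
To make the “time to crack” idea concrete, here is a minimal Python sketch of the underlying arithmetic. Note that this is only the naive brute-force model; Passfault’s actual analyzer is pattern-based and far more sophisticated, so treat this strictly as an illustration of the concept:

    # Naive brute-force estimate: total keyspace divided by guess rate.
    def crack_seconds(length: int, alphabet_size: int, guesses_per_second: float) -> float:
        keyspace = alphabet_size ** length        # every possible candidate password
        return keyspace / guesses_per_second      # worst-case time to try them all

    # Assuming an attacker who can test one billion passwords per second:
    print(crack_seconds(8, 26, 1e9))    # 8-char lowercase: ~209 seconds (minutes!)
    print(crack_seconds(12, 62, 1e9))   # 12-char mixed-case + digits: ~3.2e12 seconds (~100,000 years)

The point of the sketch is the one Morris makes: length and character variety matter only insofar as they change the amount of work an attacker must do.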

There is a free analyzer you can use and I strongly suggest you test the strength of your passwords with it.

Passfault Analyzer

Also, Morris created a tool for administrators that would allow them to configure a password policy based on the time to crack, the possible technology that an attacker might be using (from an everyday computer on up to a $180,000 password cracker), and the password protection technology in use (from Microsoft Windows System security on up to 100,000 rounds of the cryptographic hash function SHA-1).

OWASP Password Creation Slide-Tool

This is one of the best articles I’ve read on password security, plus it has tools for both the end-user and the administrator. Test them out yourself to see if you have a password that can resist a hacker! 

As for me, I think I need to do a little more strengthening…

Have a great Memorial Day weekend (for our U.S. readers) and stay safe out there!