Keep Your Hands Off My SSL Traffic

Hey, you, get off my digital lawn and put down my binary flamingos!!!!! 

If you have been living under an online rock these last couple of weeks, then you might have missed all of the news and hype about the threats to your SSL traffic. It seems that some folks, Lenovo and Comodo for example, have been caught with their hands in your cookie jar (or at least your certificate jar, but cookie jars seem like more of a thing…).

First, we had Superfish, then PrivDog. Now researchers are saying that more and more examples of that same code being used are starting to emerge across a plethora of products and software tools.

That’s a LOT of people, organizations and applications playing with my (and your) SSL traffic. What is an aging infosec curmudgeon to do except take to the Twitters to complain? :)

There’s a lot of advice out there, and if you are one of the folks impacted by Superfish and/or PrivDog directly, it is likely a good time to go fix that stuff. It also might be worth keeping an eye on the situation for a while and cleaning up any of the other applications that are being outed for the same bad behaviors.

In the meantime, if you are a privacy or compliance person for a living, feel free to drop us a line on Twitter (@lbhuston, @microsolved) and let us know what your organization is doing about these issues. How is the idea of prevalent man-in-the-middle attacks against your compliance-focused data and applications sitting with your security team? You got this, right? :)

As always, thanks for reading, and we look forward to hearing more about your thoughts on the impacts of SSL tampering on Twitter! 

3 Things I Learned While Responding to Security Incidents

Unfortunately, if you work in IT long enough, you’re likely to encounter a security incident. Having experienced these incidents as a Systems Administrator and as a consultant, I felt that it would benefit others if I shared 3 things that I learned while responding to security issues.

  1. Stay calm – If you’ve noticed malicious activity on your network, your first reaction might be to panic. While time is of the essence, you don’t want stress to negatively impact your decision making. If you need to, give yourself a minute to collect your thoughts prior to proceeding with resolving the issue. Once you’re ready to start working on the problem, begin by attempting to gain an understanding of the type and severity of the attack. This information will go a long way towards mitigating the issue.
  2. Don’t be shortsighted – Whether you’re dealing with a targeted attack or a random malware infection, it’s important to consider the long term effects of your decisions. It is likely that you will receive pressure from various business units to bring systems back online as soon as possible. While it’s important that staff regains access to their applications, it could lead to larger problems down the line if that access is restored prematurely. For example, removing network connectivity or isolating affected systems is obviously going to upset some staff members due to the loss of productivity. However, it’s possible that the malware or attacks could become more widespread if the affected systems are not properly isolated.
  3. Hindsight is 20/20 – I’ve seen individuals waste time during incidents pointing fingers at other team members. I’ve also witnessed individuals procrastinate resolving the issue while they agonize over ways they could have prevented the incident from occurring. After the issue has been resolved, it’s important to have a post mortem meeting to take the proper steps to make sure that history does not repeat itself. However, those conversations can wait until the incident has been fully resolved.

I sincerely hope you don’t have to deal with any security incidents.  However, if you need help resolving an issue involving a malware outbreak or targeted attack, do not hesitate to contact us for assistance.

Telnet!? Really!?

I was recently analyzing data from the HITME project that was collected during the month of January. I noticed a significant spike in the observed attacks against Telnet. I was surprised to see that Telnet was being targeted at such a high rate. After all, there can’t be that many devices left with Telnet exposed to the internet, right?

Wrong. Very wrong. I discovered that there are still MILLIONS of devices with Telnet ports exposed to the internet. Telnet passes everything, including credentials, in cleartext, so be sure to use SSH instead whenever possible. If you absolutely must control a device via Telnet, at least place it behind a firewall. If you need to access the device remotely, leverage a VPN. Finally, be sure to restrict access to the device to the smallest possible IP range.

The map below shows the geographical locations and number of attacks against Telnet that we observed last month. If you need any help isolating Telnet exposures, feel free to contact us by emailing info <at>

[Map image: geographical locations and number of observed Telnet attacks — Screen Shot 2015-02-10 at 11.28.10 AM]


Podcast Episode 1 is Now Available

This episode is about 45 minutes in length and features an interview with Dave Rose (@drose0120) and Helen Patton (@OSUCISOHelen) about ethics in security, women in STEM roles and career advice for young folks considering Infosec as a career. Have feedback? Let me know via Twitter (@lbhuston).

As always, thanks for listening and reading!
Listen here: 
PS – We decided to restart the episode numbers, move to pod as a hosting company and make the podcast available through iTunes. We felt all of those changes, plus the informal date-based episode titles we were using before, made the change a good idea.

Social Media Targeting: A Cautionary Tale

I was recently doing some deep penetration testing against an organization in a red-team, zero-knowledge type exercise. The targets were aware of the test only at the highest levels of management, who had retained me and my team for the engagement. The mission was simple: obtain either a file that listed more than 100 of their key suppliers, or credentials and a successful logon to their internal supply system from an account that could obtain such a file.

Once we laid some basic groundwork, it was clear that we needed to find the key people who would have access to such data. Given the size of this multi-national company and the thousands of employees it had across continents, we faced two choices: either penetrate the network environment and work our way through it to find and obtain the victory data, or find a specific person or set of persons who were likely to have the data (or credentials to it) and hack them as a shortcut to victory.
We quickly decided to try the shortcut for a week or less, preserving time for the hack-the-network approach as a backup should we need it. We had approximately six weeks to accomplish the goal. As it turned out, it took less than six hours…
We turned our TigerTrax intelligence & analytics platform to the task of identifying the likely targets for the shortcut attack. In less than 30 minutes, our intelligence team had identified three likely targets whom we could directly link to the internal systems in question, or to the business processes associated with the victory condition. Of these three people, one was an extensive participant in their local dance club scene. Their social media profile was loaded with pictures of them dancing at various locales and reviews of local dance clubs and DJs.
A plan was quickly developed to use the dance club angle as the approach for the attack, and a quick malware-serving web site was mocked up to look like a new night club in the target’s city. The team then posted a few other sites pointing to a new club opening and created a social media account under the supposed club’s name. The next day, the penetration team tested the exploits and malware against the likely OS installs of the victim (obtained from some of their social media data that was shared publicly). Once the team was sure the exploits and malware were likely to function properly, the club’s social media account sent a tweet to the account of the target and several other people linked to the club scene, inviting them to a private “soft opening” of the club, starring the target’s favorite DJ (obtained from his Twitter data). Each person was sent a unique link, and only the target’s link contained the exploit and malware. Once the hook was delivered, the team sat back and waited a bit. They continued to tweet and interact with people using the club’s account throughout the rest of the day. Within hours, the target followed the club’s account and visited the exploit site. The exploit worked, and our remote access trojan (RAT) was installed and connected back to us.
It took the team about an hour to hoover through the target’s laptop and find the file we needed. At about the same time, an automated search mechanism in the RAT returned a file called passwords.xls with a list of passwords and login information, including credentials for the victory system in question. The team grabbed the victory files, screenshotted all of our metrics and data dashboards and cleaned up after themselves. The target was none the wiser.
When we walked the client through this pen-test and explained how we performed our attack, what controls they lacked and how to improve their defenses, the criticality of social media profiling to attackers became crystal clear. The client asked for examples of real world attackers using such methods, and the team quickly pulled more than a dozen public breach profiles from the last few years from our threat intelligence data.
The bottom line is this – this is a COMMON and EFFECTIVE approach. It is trivial for attackers to accomplish these goals, given the time and will to profile your employees. The bad guys ARE doing it. The bigger question is – ARE YOU?
To learn more about our penetration testing, social engineering and other security testing services, please call your account executive to book a free education session or send us an email. As always, thanks for reading and until next time, stay safe out there!

RansomWeb Attacks Observed in HITME

Unfortunately, ransomware has taken a new turn for the worse.  A new technique called RansomWeb is affecting production web-based applications.  I recently analyzed data from the HITME project and observed several RansomWeb attacks against PHP applications.  I can only assume the frequency of these attacks will increase throughout the year.  As a former Systems Administrator, I can definitively say that bringing an application affected by this variant of ransomware back online would be a nightmare.  Due to RansomWeb’s destructive nature, it is important to ensure that your organization is actively working to prevent it from destroying any critical systems.

The attackers begin the RansomWeb process by exploiting a vulnerability within a web server or web-based application.  Once the server or application has been exploited, the attackers slowly begin encrypting key databases and files.  Once the encryption is complete, the hackers shut down the website/application and demand a ransom in exchange for decrypting the corporation’s files.  Unfortunately, the attackers have even perfected encrypting system-level backups as part of this process.

To prevent RansomWeb from affecting your organization, please be sure to complete the following steps on a regular basis:

  • Perform regular vulnerability assessments and penetration testing against your critical applications and servers.
  • Audit your application and system logs for any irregular entries.
  • Verify that you are performing regular application and system backups.
  • Be sure to test the backup/restore process for your applications and systems on a regular basis.  After all, your backup/DR process is only as effective as your last successful restore.
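That last bullet can be partially automated. Here is a minimal sketch (file paths are hypothetical) that verifies a restored file actually matches the original by comparing SHA-256 digests, which is one simple way to prove a restore worked bit-for-bit:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_verified(original: str, restored: str) -> bool:
    """A restore only 'counts' if the restored copy matches the original exactly."""
    return sha256_of(original) == sha256_of(restored)

# Hypothetical usage after a test restore:
# restore_verified("/data/app/config.php", "/restore-test/app/config.php")
```

A check like this won't tell you the backup is complete, only that what you restored is intact; pair it with a periodic full restore into a scratch environment.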

If you would like to discuss how we can help you prevent RansomWeb from affecting your production applications, do not hesitate to contact us by emailing info <at>

The Need for an Incident Recovery Policy (IRP)

Organizations have been preparing for information security issues for a number of years now and many, if not most, have embraced the need for an incident response policy and process. However, given the recent spate of breaches and compromises that we have analyzed and been involved in over the last year, we have seen an emerging need for organizations to now embrace a new kind of policy – a security incident RECOVERY policy.
This policy should extend from the incident response policy and create a decision framework, methodology and taxonomy for managing the aftermath of a security incident. Once the proverbial “fire has been put out”, how do we clean up the mess, recreate the records we lost, return to business as usual and analyze the impacts all of this had on our operations and long term bottom line? As a part of this process, we need to identify what was stolen, who the likely beneficiaries are, what conversion events have taken place or may occur in the future, how the losses impact our R&D, operational state, market position, etc. We also need to establish a good working model for communicating the fallout, identified issues, mitigations, insurance claims, discoveries and lessons learned to stakeholders, management, customers, business partners and shareholders – in addition to the insurance companies, regulators and law enforcement.
As you can imagine, this can be a very resource-intensive process. Since post-incident pressures are likely to remain high, stress levels can approach critical mass and politics can be rampant, having a decision framework and a pre-developed methodology to work from can be a lifesaver. We suggest following the same policy development process, update timeframes and review/practice schedules as you do for your incident response policy.
If your organization would like assistance developing such a policy, or would like to work through a training exercise/practice session with an experienced team, please feel free to work with your account executive to schedule such an engagement. We also have policy templates, work sheets and other materials available to help with best practice-based approaches and policy creation/reviews.

Recently Observed Attacks By Compromised QNAP Devices

Despite the fact that the Shellshock bug was disclosed last fall, it appears that a wide variety of systems are still falling victim to the exploit.  For example, in the last 30 days, our HoneyPoint Internet Threat Monitoring Environment has observed attacks from almost 1,000 compromised QNAP devices.  If you have QNAP devices deployed, please be sure to check for the indicators of a compromised system.  If your device has not been affected, be sure to patch it immediately.

Once compromised via the Shellshock bug, the QNAP system downloads a payload that contains a shell script designed specifically for QNAP devices.  The script acts as a dropper and downloads additional malicious components prior to installing the worm and making a variety of changes to the system.  These changes include: adding a user account, changing the device’s DNS server to, creating an SSH server on port 26 and downloading/installing a patch from QNAP against the Shellshock bug.

The map below shows the locations of compromised QNAP systems that we observed to be scanning for other unpatched QNAP systems.  If you have any questions regarding this exploit, feel free to contact us by emailing info <at>

[Map image: locations of compromised QNAP systems observed scanning for other unpatched QNAP systems — Screen Shot 2015-01-27 at 1.41.31 PM]

The Devil You Think You Know: Risks from Third Party Infrastructure

All modern information infrastructure tends to be an amalgam of stuff you built, other people’s stuff you know you use, and hidden stuff that you are unwittingly dependent on (yours or someone else’s).

This blog entry is about part of that middle ground – on-premise services that you pay for and are integral to your operation but are in fact built and managed by a third party with whom you have a contractual relationship.

It’s based on my actual experience of late.

Here’s the nut: Any vendor who has a presence within your infrastructure that they manage may have connectivity into that infrastructure that you are unaware of.

The vendor’s infrastructure and yours may effectively be one thing.

The example:

Unplanned Connectivity

The company knew that the vendor had used the provided contractor VPN access at one time.  They had used it to set up their equipment initially.  What they did not know was that part of that set-up routinely involved the establishment of outbound site-to-site VPN tunnels from the vendor’s equipment to the vendor’s datacenter.  Those connections were built at boot time and maintained.  Vendor staff used that VPN access to get to their equipment and, if needed, from their equipment to the company equipment that they provided services for.  There was no company-accessible audit trail.  No log.

Under these conditions, the vendor and the company infrastructure are effectively one.  A compromise of the vendor is a compromise of the company.

What to do?

  • doveryai no proveryai: It’s an old saw at this point, but always true. “Trust, but verify”. That trust may not just be of the vendor. It may be of your own upper management who engineered the deal. That can be tough, particularly if you are calling into question arrangements that have already been made. But you didn’t choose this career because it was easy. Read the doc, ask the questions, get the answers in writing.
  • Egress Filtering: You should control what traffic leaves your enterprise. Strict egress filtering rules would have denied that outbound VPN connection described above.  A daily report of such denies would alert staff to the attempt – and start the necessary round of questions.
  • Monitor your outbound traffic:  Know what’s normal.  You should be generating daily reports from your network logs and from all other intermediary devices (e.g. proxies) about outbound communication sessions – particularly ones of long duration and consistent external IP address targets.  Know what radiates out!
  • Watch your VPN logs:  The vendor stopped using company VPN once it was no longer needed.  VPN access logs would have recorded that cessation. That was an anomaly that should have been called out.  The implication is the company VPN logs were not being analyzed and reported on.  You need to know what normal traffic is for your front door.
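The outbound-monitoring bullets above can be sketched as a small log-review script. Assuming a simple CSV-style flow log (the field layout and addresses here are hypothetical), it flags long-lived sessions to a consistent external destination, which is exactly the shape a persistent site-to-site tunnel would have:

```python
from collections import defaultdict

# Hypothetical flow-log lines: src_ip,dst_ip,dst_port,duration_seconds
SAMPLE_LOG = [
    ",,443,12",
    ",,500,86400",   # day-long tunnel to one IP
    ",,500,86400",
]

def flag_persistent_outbound(lines, min_duration=3600):
    """Sum session time per external (destination, port) and flag long-lived ones."""
    totals = defaultdict(int)
    for line in lines:
        src, dst, port, duration = line.split(",")
        totals[(dst, port)] += int(duration)
    return {key: secs for key, secs in totals.items() if secs >= min_duration}

suspects = flag_persistent_outbound(SAMPLE_LOG)
# The VPN-like flows to stand out; the short HTTPS session does not.
```

A daily report built on logic like this, plus strict egress deny rules, would have surfaced the vendor tunnel described above on day one.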

Finally: There was nothing intentionally malicious about the vendor’s actions in the example cited.  The vendor techs were just doing their job.

It’s your job to question theirs.



How I leveraged HoneyPoint during Corporate Acquisitions

Throughout my career, I have worked for organizations that have purchased and integrated 4 companies.  The acquired companies ranged from an organization with revenues of less than $3 million per year to a publicly traded company with annualized revenues of almost $1 billion.  While the acquisitions all carried their own set of challenges, they remain among the highlights of my career.

When I pictured corporate acquisitions, I always envisioned purchasing the next big startup or buying out your leading competitor.  I didn’t realize that a majority of corporate acquisitions are an attempt to leverage existing infrastructure and shared services to turn a failing company into a profitable organization.  When I was informed that my company was about to purchase another organization, I instantly realized I was going to be working with a lot of old hardware, disgruntled employees and vulnerable systems.  Fortunately, I was able to leverage HoneyPoint to address several of the aforementioned challenges.

Completing an acquisition can be overwhelming at times.  It’s important to take a step back and look at systems from a bird’s-eye view.  I always found it extremely helpful to deploy HoneyPoint Agent at the start of an acquisition.  I worked diligently to create an Agent deployment that mimicked the infrastructure of the acquired company.  This allowed me to have a centralized view of their network from one HoneyPoint console.  On more than one occasion, HoneyPoint Agent helped me to identify infected machines on the network of a recently acquired company.

Having worked for a company that has been acquired on two separate occasions, I always empathize with the employees of an acquired organization.  While it can be a scary time, it can also be looked at as an opportunity to demonstrate your talent to a new company.  I have met several talented IT professionals throughout the 4 acquisitions that I have had the privilege of completing.  I was frequently amazed at their ability to keep a critical infrastructure running on a nonexistent budget.  Unfortunately, for every talented and cooperative professional, I have encountered a few disgruntled employees.

HoneyPoint has several great features that can help identify a disgruntled employee.  For example, I was able to place documents throughout our network that would log an alert to my HoneyPoint console each time they were opened.  This would have allowed me to easily identify any disgruntled employee that was searching a file server for confidential information.  Deploying these trojanized documents throughout our network taught me a valuable lesson about HoneyPoint…it should be considered a good thing when a deployment does not generate any alerts.  In this instance, it meant that I did not identify any employees that were digging through our file shares for confidential information.
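This is not HoneyPoint's actual mechanism (that is proprietary), but the general honeytoken idea behind those trojanized documents can be sketched: each planted decoy carries a unique token, and any access to that token raises an alert. A minimal, hypothetical example:

```python
import uuid
from datetime import datetime, timezone

PLANTED_DOCS = {}  # token -> description of the decoy document
ALERTS = []        # alert records for any decoy that gets touched

def plant_document(description: str) -> str:
    """Register a decoy document and return its unique tracking token."""
    token = uuid.uuid4().hex
    PLANTED_DOCS[token] = description
    return token

def record_access(token: str, source: str) -> bool:
    """Raise an alert if a known decoy token is ever accessed."""
    if token in PLANTED_DOCS:
        ALERTS.append({
            "doc": PLANTED_DOCS[token],
            "source": source,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        return True
    return False

# Hypothetical usage: plant a tempting decoy on a file share, then any
# open of that document phones the token home and lands in ALERTS.
t = plant_document("Q3_salaries.xlsx on the finance file share")
record_access(t, "workstation-42")
```

As noted above, the healthiest outcome for a deployment like this is silence: an empty alert list means nobody is rummaging through your shares.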

Unfortunately, I have been a part of acquisitions where the IT staff of the acquired organization were not retained.  While it was purely a business decision, the layoffs posed a serious risk of creating disgruntled employees.  This could lead an employee of the acquired company to attempt to cause harm to systems owned and operated by the acquiring organization.  During each acquisition, I deployed HoneyPoint Agents that mimicked the Infrastructure of my company.  This allowed me to identify any instance of an individual attempting to scan systems that were owned by the parent organization.  While I did not catch any individuals in the act, I was able to rest assured knowing that I had the capability to do so.

I highly recommend leveraging HoneyPoint during your next M&A.  It will help you address several of the challenges that are associated with the M&A process.  If you have any questions about HoneyPoint and how it can help your organization, please contact us at info <at>