How do you “Identify”?

Recently, we posted the Business Email Compromise (BEC) checklist. We’ve gotten a lot of great feedback on the checklist…as well as a few questions. What if you’re new to security? What if your organization’s security program is newer, and still maturing? How can you leverage this list?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

Let’s begin with the first category – Identify. The first action item under Identify is:

  • Identify and catalog all of the authentication portals where attackers could test stolen credentials or leverage them for access.

What does that mean from a real-world perspective? The first step is understanding the external exposure of the organization. How can you do this? A few items that you might leverage to catalog the exposure would be:

  • Asset tracking – what do you know about?
  • Firewall audits (both ingress and egress) – what are you allowing in and out, and how could an attacker proceed past the firewall protections?
  • Network maps – do you have them? Are they up-to-date? These should be a living document, and reviewed regularly with staff who are not maintaining them as a sanity check.
  • Vulnerability assessments – when was the last time you had one performed? These are valuable not only from a catalog perspective, but for detecting old, defunct, and unintentional exposure.
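To make the cataloging step concrete, here is a minimal Python sketch of the idea – it checks a handful of ports where authentication portals commonly listen. The hostnames are hypothetical, and the port list is only a starting point; substitute your own targets.

```python
import socket

# Ports where authentication portals commonly listen; extend as needed.
AUTH_PORTS = {22: "SSH", 443: "HTTPS portal", 3389: "RDP", 8443: "admin UI"}

def find_open_auth_ports(host, ports=None, timeout=1.0):
    """Return 'port/label' strings for likely authentication services on a host."""
    ports = ports or AUTH_PORTS
    exposed = []
    for port, label in sorted(ports.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                exposed.append(f"{port}/{label}")
    return exposed

if __name__ == "__main__":
    # Hypothetical externally facing hosts -- substitute your own.
    for host in ("vpn.example.com", "mail.example.com"):
        try:
            print(host, find_open_auth_ports(host))
        except OSError as err:  # DNS failure, unreachable network, etc.
            print(host, "lookup failed:", err)
```

This is a discovery aid, not a replacement for a proper vulnerability assessment – and of course, only scan hosts you own or have permission to test.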

Don’t forget to consider access points that you may not directly control. Remote systems, such as cloud-based email servers that rely on network authentication, or vendor systems that leverage SSO (single sign-on), are also attack vectors.

The next step would be to catalog your exposed surfaces and their authentication mechanisms. Do they leverage Active Directory (AD), SSO, or another method? Is multi-factor authentication available, and if so, is it employed? What is the crossover between these exposures – what could one compromised password potentially access in other systems?
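One lightweight way to reason about that crossover is to record each portal’s authentication source and MFA status in a small table. The entries below are hypothetical, but the two queries show the kind of questions the catalog lets you answer. A sketch in Python:

```python
# Hypothetical catalog: each exposed portal, its authentication source,
# and whether multi-factor authentication is enforced.
PORTALS = {
    "owa.example.com": {"auth": "Active Directory", "mfa": False},
    "vpn.example.com": {"auth": "Active Directory", "mfa": True},
    "hr-saas.example": {"auth": "SSO", "mfa": True},
    "legacy-ftp": {"auth": "local accounts", "mfa": False},
}

def crossover(portal):
    """Other portals a password stolen from `portal` could also unlock."""
    source = PORTALS[portal]["auth"]
    return sorted(p for p in PORTALS if p != portal and PORTALS[p]["auth"] == source)

def password_only():
    """Portals where a stolen password alone is enough to get in."""
    return sorted(p for p in PORTALS if not PORTALS[p]["mfa"])
```

Even a spreadsheet with these columns works; the point is that a single compromised AD password here reaches both the mail portal and the VPN.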

Once you feel you have an adequate catalog, it’s time to consider the individual exposures, and review the information you have available from each.

  • Does the device support logging, and is logging enabled?
  • Are the logs being preserved? A minimum standard would be your organization’s records retention policy.
  • Are the logs configured properly, so that success and failure are logged, and the specifics of the attempt are captured?
  • Are the logs capturing information sufficient for investigation? For example, if the logs are capturing IP information, but you are using a DHCP scenario where IPs are leased for a short period of time, are the logs configured to capture hostname and user ID to properly identify any problematic systems?
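Those log-quality questions lend themselves to automation. Below is a minimal Python sketch that audits parsed log records against the fields an investigation needs; the field names are hypothetical and should be mapped onto your own log schema.

```python
# Fields an investigator needs; map these names onto your own log schema.
REQUIRED_FIELDS = {"outcome", "src_ip", "hostname", "user_id"}

def missing_fields(record):
    """Return the investigation-relevant fields absent from one log record."""
    return REQUIRED_FIELDS - set(record)

def audit_logs(records):
    """Summarize how many records lack each required field."""
    gaps = {}
    for record in records:
        for field in missing_fields(record):
            gaps[field] = gaps.get(field, 0) + 1
    return gaps
```

Running something like this against a day’s worth of authentication logs quickly shows whether, for example, hostnames and user IDs are actually being captured or only IP addresses.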

This is a pretty big step for step 1 – so we’ll pause here for now. How does your organization compare against these suggestions?

Questions? Comments? I’d love to hear from you – reach out to @TheTokenFemale on Twitter!



Inventory Control a Must for Effective System Security Maintenance & Config Control

Some security controls can’t reach maximum effectiveness unless other, related controls are also in place. This is the case with system security maintenance and configuration control. If you don’t tie these controls to well maintained and updated inventories of all network assets you are bound to see vulnerabilities cropping up on your systems.

We have done many vulnerability assessments and penetration tests over the years, and we notice the same things again and again. We find that most of our clients do a good job of keeping up security maintenance on their Windows-based systems, and that most network assets are well configured and hardened. But even among the best of networks, there always seem to be hosts that are running out-of-date firmware, that were configured with their default admin passwords in place, or that have some other anomalies present that give cyber-criminals attack surfaces to work with. And almost universally it is because these assets were simply forgotten. This is where comprehensive inventory control comes into the picture.

The first job is to ensure that every network entity is included in the inventory. This means each piece of hardware, software, firmware and operating system. If it is addressable, it needs to be included. The next job is to ensure that the inventory is kept as current as possible. It should be automatic that when assets are added, dropped or change status, the inventory is updated. Unless your network is particularly small and simple, asset tracking software packages are recommended for this task.
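To illustrate the reconciliation half of this job, here is a minimal Python sketch: compare what the inventory says you own against what a discovery sweep actually finds. The addresses are made up, and a real implementation would pull from your asset tracking tool and scanner output.

```python
def reconcile(inventory, discovered):
    """Compare inventoried assets against addresses seen on the network.
    Anything discovered but not inventoried is a likely forgotten asset."""
    return {
        "unknown_on_network": discovered - inventory,    # forgotten assets
        "missing_from_network": inventory - discovered,  # stale records
    }

# Hypothetical data: inventory records vs. a network discovery sweep.
inventory = {"10.0.0.1", "10.0.0.2", "10.0.0.7"}
discovered = {"10.0.0.1", "10.0.0.2", "10.0.0.9"}
report = reconcile(inventory, discovered)
```

Both buckets matter: unknown hosts are the forgotten assets described above, while inventory entries that never appear on the network are stale records that erode trust in the inventory itself.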

The next task is to ensure that each network asset in the inventory is included in the configuration control process. Before the asset was deployed on the network, was it configured according to a best practices-based onboarding checklist? Were default passwords changed? Were unnecessary services disabled? It is also important for some devices such as routers and firewalls to have their configurations checked and updated on a regular basis. Have all such devices in the inventory been identified and included in the system?

The next and probably the most difficult thing to accomplish is to ensure that each asset in the inventory is included in the security maintenance program. Have vendor and security web sites providing vulnerability and updating/patching information been identified and notated for each kind of software, operating system, firmware and hardware asset in the inventory? Have network assets in the inventory that are not automatically updated via WSUS or some other updating service been identified? Once they have been identified, is there a system in place to ensure that they are manually updated? Are there any licenses that need to be kept current? Ensuring that questions such as those above are addressed and that all inventory assets are properly handled will help to keep your networks as secure as possible.
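As a sketch of how those maintenance questions can be tracked per asset, a simple structure that flags manual-update and untracked assets might look like the following; the records here are hypothetical examples, not a prescribed format.

```python
# Hypothetical asset records: each entry notes its patch source and whether
# updates arrive automatically (e.g. via WSUS) or must be applied by hand.
ASSETS = [
    {"name": "fileserver01", "patch_source": "WSUS", "auto_update": True},
    {"name": "edge-router", "patch_source": "vendor", "auto_update": False},
    {"name": "hr-app", "patch_source": None, "auto_update": False},
]

def maintenance_gaps(assets):
    """Flag assets needing manual patching, and assets with no patch source at all."""
    manual = [a["name"] for a in assets if not a["auto_update"] and a["patch_source"]]
    untracked = [a["name"] for a in assets if a["patch_source"] is None]
    return manual, untracked
```

The "untracked" list is the dangerous one: an asset with no identified source of vulnerability and patch information is exactly the kind that gets forgotten.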

The Magic of Hash

Hi, all –

Time for a bedtime story? A little light reading? Something to listen to on the treadmill?

Come listen to our CEO, Brent Huston, riff on blockchain, trust models, and ancillary bits.

The audio is HERE. And the accompanying slides are HERE.

Until next time, stay safe out there…take care of earth, it’s the only planet with chocolate!

Positive Train Control: Skating away on the thin ice of a new day?


From the movie “The Polar Express”

That line: “Skating away on the thin ice of a new day” is from a Jethro Tull song by the same name. (Yes – I am that old 😉 ).

It came to me as I was reflecting on the reading I’ve been doing on the topic of Positive Train Control (PTC).

PTC is an idea rather than any specific technology or architecture. 

Its intent is to minimize the chance that railroad operator error (e.g. missed communication) can lead to loss of life or a major train-to-train collision. It’s “positive” in that a positive action (e.g. stop the train) is automatically done if an operator does not respond appropriately.

Think of a pinball machine with a “smart,” controllable pinball.

In the US it is mandated for a large subset of railroads by the Rail Safety Improvement Act of 2008. The current target date is December 31, 2018, moved from the original date of December 31, 2015 when it became clear that a number of railroads would not be able to make the original date. Even now there are provisions for moving the target to December 2020 for those railroads that have shown real progress in implementation. Financial penalties for non-compliance eventually kick in.

Several highly-publicized rail accidents were the motivation for the law, notably the 2008 Chatsworth collision in which a commuter and freight train collided, causing 25 deaths. The commuter train engineer missed a stop signal. He had worked a split shift, was likely fatigued, and was using a cell phone to text repeatedly prior to the collision.

So – all good, right?  Government acting responsively to protect citizens?

Well… yes.   But…

PTC is not a prescribed architecture. It’s a desired outcome. Industry groups, often dominated by the big Class 1 freight railroads, are adopting various technologies to accomplish the goal. Interoperability between railroads is an issue, with some railroads having to install 2 or more PTC solutions on their infrastructure to accommodate the different PTC environments that may be encountered on the various track they run on.

PTC also introduces a dependency on software and network communication in an arena in which local human control was traditionally paramount.

And there’s the issue for me.

Given the urgency of the federal mandate and the associated hurried activity, the apparent lack of defined software security standards, and the historic preference for availability over information security that understandably pervades ICS environments, it seems very likely that the solutions being implemented will have exploitable software flaws.

Europe has been down this path much longer than the US – and the problem appears to be very real there, as revealed by the SCADA Strangelove researchers in a “sanitized” presentation on the security of European train control systems a few years ago.

Note: These same researchers published a collection of default logins that are found hardcoded in some of the equipment used by PTC equipment vendors here in the US.

In their presentation, this slide appears as part of the discussion of what was found when analyzing European train control software.

Not a pretty picture.

My take is that “nation-state” attackers will be going after the suppliers of the tech intended to be used for PTC in the US. Going after an industry target by way of its vendors, some of whom can always be relied on to have inadequate cyber-defenses, has a well established history.

Just ask Target.

Suppliers to the railroad industry, particularly those supplying tech for PTC, will have to start seeing themselves as just that sort of “target” and ensure that their own information infrastructure is secure, monitored, and – most importantly – that their employees are trained to detect and report the inevitable phishing attacks that have become the fast path to compromise.

PTC, on the surface, appears to be an inarguably good idea and has achieved considerable momentum in the US.

But without coordinated consideration of the new cyber attack surfaces that may be opened to America’s enemies, that momentum could well put a critical part of US infrastructure out on to a new, and very dangerous, patch of “thin ice”.

References:

  • SCADA Strangelove presentation:
  • Freight Rail & Positive Train Control

Encrypt That Drive

Promise me you’ll return to this blog piece, but go ahead and open a new tab and search for “stolen laptop.” Filter the search results for a specific year, or refine the search within an industry, e.g. healthcare or financial. Too many results. Too many incidents. The U.S. Department of Health and Human Services, Office for Civil Rights, has a breach portal – only incidents involving more than 500 PHI records are in the database. Search for theft of laptop.

Stolen laptops from a car, home or office. Lost, misplaced, theft or burglary. All industries have been affected – healthcare systems, clinics and labs, state and city agencies, universities and schools, accounting firms, financial and insurance firms, energy and gas companies, the largest soda company in the world…

After a laptop is reported stolen, one of the first defensive actions is to disable the laptop’s and the employee’s access to the corporate domain. Removing user access from the corporate domain, however, does not disable local access to the stolen laptop’s hard drive. Bypassing the desktop logon can be as easy as a Google search, or as simple as mounting the hard drive under another operating system. Files and data can then be accessed in clear text.

Access to the desktop can lead to access to the laptop owner’s personal email or other saved logins. Logins to a 3rd party vendor that may have sensitive information on clients and patients. Or the vendor site may have programmatic or API access for the thief to pivot to another site for additional information and access.

Laptops can contain local databases containing PII or PHI. Or downloaded lab reports for a patient. Or email attachments of tax documents for a mortgage refinancing application. Or credentials to other database portals.

More companies are encrypting the mobile devices they provide their employees, but many still do not. Furthermore, too many employees are accessing work email or downloading client documents on their own personal devices.

An enterprise security program should include the encryption of its hard drives, particularly laptop drives. The policy should include encryption of data-at-rest: the additional layer of security where ALL the data – the entire physical hard drive – is encrypted.

All the files in the drive remain encrypted when the laptop is powered off. Upon powering on, the user is prompted for a password to decrypt the drive, which will then continue to boot up to the logon into the operating system desktop. Without the first (encryption) password, the drive and all its data – system and data files – remain encrypted.
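Auditing whether drives are actually encrypted can be partly automated. Below is a hedged, Linux-only Python sketch that parses `lsblk` output and looks for a LUKS container (on Windows, `manage-bde -status` reports the equivalent BitLocker state). It is an illustration of the idea, not a complete audit tool.

```python
import subprocess
import sys

def has_luks(lsblk_output):
    """Return True if any device in `lsblk -rno NAME,FSTYPE` output
    is a LUKS (full-disk encryption) container."""
    for line in lsblk_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "crypto_LUKS":
            return True
    return False

if __name__ == "__main__" and sys.platform.startswith("linux"):
    listing = subprocess.run(["lsblk", "-rno", "NAME,FSTYPE"],
                             capture_output=True, text=True).stdout
    print("LUKS container present:", has_luks(listing))
```

Run across a fleet (via your management tooling), a check like this flags laptops where whole-drive encryption was never turned on.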

The requirement for encrypted hard drives may vary by industry – whether the vendor is under a military or government contract, or operating under PCI or HIPAA compliance. But if one of your corporate laptops gets stolen and the first thought that crosses your mind is, “I hope it doesn’t contain any of **MY** sensitive information,” then that laptop needs to be encrypted.

It should be company policy to encrypt all company-issued laptop drives. Even if the drive does not have any PII or PHI, work documents, e-mail and browser history access alone through the stolen laptop can be used to obtain further access into sensitive corporate, staff, customer or client information. Whole volume encryption will secure the data-at-rest. Well, it’s a start.



Business E-mail Compromise (BEC) Checklist

MSI has recently received requests from a variety of sources for guidance around the configuration and management of business e-mail.

In response, our CEO Brent Huston has created a checklist (link is below) that:

  • Enumerates attack vectors
  • Briefly reviews impacts
  • Lists control suggestions mapped back to the NIST framework model

This is a must-read for Security and IT practitioners, as it helps to make sure you have your bases covered! As always, if you have questions or want to know more, please reach out!

IoT Smart Devices: The Honeymoon is Over!

What isn’t an Internet of Things device these days?! Companies are literally flooding the consumer market with smart chip-equipped devices you can control with your iPhone or Android (which themselves are equipped with smart chips – sigh!). Smart bike locks, smart egg trays, smart water bottles, smart dental floss dispensers, smart baby-changing pads!! These are all real devices.

What’s next?! Smart bicycle seats? Smart toenail clippers? Smart butter boxes? Smart golf balls? Actually, I kinda like that one. After I hit a smart ball it could tell me where it was, how fast the club was going when I hit it, which way it was spinning, all kinds of little things that really don’t matter much, but that you love to know.

The rub is that almost any of these devices may be a conduit that lets attackers hack their way into your network… your computer… your bank accounts… your life! The same is true for businesses. Maybe it’s the smart coffee pot in the break room, or the smart TV monitor in the Boardroom, or the smart device controller on the CEO’s desk. You may scoff at the danger, and I don’t blame you really. But I see things every day on our threat intelligence engine that challenge one’s credulity.

Here is an item from this week’s feed about how it’s possible to exfiltrate user data covertly using smart light bulbs. Researchers Anindya Maiti and Murtuza Jadliwala from the University of Texas studied how LIFX and Philips Hue smart bulbs receive their commands for playing visualizations into a room and developed a model to interpret brightness and color modulations occurring when listening to music or watching a video. They can use this to exfiltrate data from personal devices. They can also use this to determine multi-media preferences by recording luminance patterns.

In another article this week, it was reported that Mirai botnet variants are increasingly being developed to take advantage of IoT devices, with authors upping the malware’s ability to run on different architectures and platforms. At the end of July, a live remote server was identified hosting multi-platform malware that sequentially tries downloading and executing binaries until one compliant with the current architecture is found.

One of the reasons attackers are targeting IoT devices so vigorously is because of the industry itself. Manufacturers are developing products and shoving them into the marketplace so quickly that little proper security planning is being done. Most of the products on the market are not receiving patches and updates, even against well-known exploits that exist in the wild right now.

In my opinion, it’s time to wake up to the reality of what we are doing and apply proper security mechanisms to these devices; they should be treated like any other network device with an IP address. First, don’t connect your devices to the Internet unless you need to. The fact that your coffee pot and washing machine are capable of being run over the Internet is no reason to actually do so.

Next, if you really want to control a device remotely, make sure you are the only one that can access it. Change any default access credentials you can. Use strong passwords, and if you are really with it, apply multi-factor authentication to devices.

Ensure that you keep track of any updates that are available for device firmware and software and apply them to your own devices. Also make sure you keep yourself aware of any known vulnerabilities that can affect your devices. In addition, ensure that the device is configured as securely as possible. The rule of thumb is to turn off everything you can, and then only enable those features that you actually want.

Monitor these devices. Apply security monitoring software to them if you can. If not, monitor the devices yourself. Check the logs and see who/what is trying to touch your devices and if there has been any success. Also, consider making a special network segment just for IoT devices that has no direct connection to your other networks.
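If you have connection logs from the IoT segment, even a few lines of Python can surface who is touching your devices most often. The log format below is hypothetical – adapt the parsing to whatever your firewall or router actually emits.

```python
from collections import Counter

def top_talkers(log_lines, n=3):
    """Tally source IPs from connection-log lines shaped like
    '<timestamp> <src_ip> -> <dst_ip>:<port>' and return the n most frequent."""
    sources = (line.split()[1] for line in log_lines if " -> " in line)
    return Counter(sources).most_common(n)

# Hypothetical log excerpt from an IoT network segment.
sample = [
    "2018-08-20T10:00 192.0.2.50 -> 10.0.9.20:80",
    "2018-08-20T10:01 192.0.2.50 -> 10.0.9.20:80",
    "2018-08-20T10:02 10.0.9.4 -> 10.0.9.21:443",
]
```

A repeated external address hammering a smart TV or coffee pot is exactly the kind of anomaly this quick tally makes visible.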

I know that most of you are groaning right now. I’m sure you already have plenty of tasks and considerations to occupy your time. But if you want the conveniences and the bells and whistles provided by smart devices, you need to pay the bill.

Do You Have Production Data in your Test Environment?

We’ve talked about development servers, and the perils of internet-facing development environments. Now, let’s talk about what is IN your development environment.

Another issue we run into fairly often with dev environments is that they are set up to use production data, and sometimes this data is piped in directly at night with no modification. This not only introduces the risk of exposing that data through vulnerabilities within the development environment, but could also allow a contractor or unauthorized employee to view sensitive information.

On many assessments in the past we have found and accessed data through various mechanisms. In some cases the production data remained, but “test” users with weak passwords were able to authenticate. In other cases, system misconfiguration or missing patches allowed access to the application and the real data inside of it. Developers also might be leaving production data, or fragments of it, on their laptops, which offers another way for that data to be exposed.

So if you are currently using production data in development environments, what can be done? Encrypting the database and the fields containing PII certainly helps. There’s no one-size-fits-all solution, but here are a few suggestions that can be used depending on the nature of the data and the application requirements. Care must be taken to make sure that data that needs to pass checksum tests (such as credit card numbers) will still pass, without modifying the application code.

Examples of what can be done so there isn’t sensitive data in test environments:

  • Apply data masking to the real data. Data masking changes data so that it keeps the structure of production data, but isn’t real. If you use Oracle, the Enterprise edition has a built-in feature for this, the “Oracle Data Masking Pack”. SQL Server 2016 introduced a similar feature named “Dynamic Data Masking”.
  • Use scripts to generate fake data
  • Maintain a curated database of invalid data
  • If tests require real data, ensure that at least all PII or other equally sensitive data is masked and encrypted
  • Don’t forget to use different environment variables, such as a different database password

These are just some examples of what can be done to reduce the risk of sensitive data being leaked. Data masking is often the most viable solution, as it is now built into many databases. You can also look at tools such as Mockaroo, which help generate test data.
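As a concrete illustration of the checksum point, here is a Python sketch of structure-preserving masking for card numbers: it keeps the length and the issuer prefix, randomizes the middle, and recomputes the Luhn check digit so masked values still pass validation. It is an example of the technique, not production masking code.

```python
import random

def luhn_remainder(number):
    """Luhn sum of a digit string, modulo 10 (0 means the number is valid)."""
    digits = [int(d) for d in number]
    for i in range(len(digits) - 2, -1, -2):  # double every 2nd digit from the right
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10

def mask_card(pan):
    """Mask a card number: keep the 6-digit issuer prefix and the length,
    randomize the middle, and pick a check digit that keeps it Luhn-valid."""
    middle = "".join(str(random.randint(0, 9)) for _ in range(len(pan) - 7))
    partial = pan[:6] + middle
    for check in "0123456789":
        if luhn_remainder(partial + check) == 0:
            return partial + check
```

Because the masked value still has a valid check digit and a realistic prefix, application-side validation keeps working while the real account number never reaches the test environment.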


It’s Dev, not Diva – Don’t set the “stage” for failure

Development: the act, process, or result of developing, the development of new ideas. This is one of the Merriam-Webster definitions of development.

It doesn’t really matter what you call it…dev, development, stage, test. Software applications tend to be in flux, and the developers, programmers, testers, and ancillary staff need a place to work on them.

Should that place be out on the internet? Let’s think about that for a minute. By their very nature, dev environments aren’t complete. Do you want a work in progress, with unknown holes, to be externally facing? This doesn’t strike me as the best idea.

But, security peeps, we HAVE to have it facing the internet – because REASONS! (Development types…tell me what your valid reasons are?)

And it will be fine – no one will find it, we won’t give it a domain name!

Security through obscurity will not be your friend here…with the advent of Shodan and other venues, they WILL find it. Ideally, you should only allow access via VPN or another secure connection.

What could possibly go wrong? Well, here’s a short list of SOME of the things that MSI has found or used to compromise a system from an internet-facing development server:

  • A test.txt file with sensitive information about the application, configuration, and credentials.
  • Log files with similar sensitive information.
  • .git directories that exposed keys, passwords, and other key development information.
  • A development application that had weak credentials was compromised – the compromise allowed inspection of the application, and revealed an access control issue. This issue was also present in the production application, and allowed the team to compromise the production environment.
  • An unprotected directory that contained a number of files including a network config file. The plain text credentials in the file allowed the team to compromise the internet facing network devices.

And the list keeps going.
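You can check your own servers for the kinds of leftovers in that list. Here is a hedged Python sketch using only the standard library; the path list is hypothetical and should be extended, and as always, only point it at servers you own.

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Paths commonly left exposed on dev servers; extend with your own suspects.
COMMON_LEAKS = ["/test.txt", "/.git/config", "/debug.log", "/config.bak"]

def check_exposures(base_url):
    """Return which well-known leak paths respond with HTTP 200 on a server you own."""
    found = []
    for path in COMMON_LEAKS:
        try:
            with urlopen(base_url.rstrip("/") + path, timeout=3) as resp:
                if resp.status == 200:
                    found.append(path)
        except (HTTPError, URLError, OSError):
            pass  # 404s, refused connections, timeouts: not exposed
    return found
```

A non-empty result means an attacker’s first casual probe would already be turning up configuration files, logs, or repository metadata.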

But, security peeps – our developers are better than that. This won’t happen to us!

The HealthCare.Gov breach in 2014 was the result of a development server that was improperly connected to the internet. “Exact details on how the breach occurred were not shared with the public, but sources close to the investigation said that the development server was poorly configured and used default credentials.”

Another notable breach occurred in 2016 – an outsourcing company named Capgemini exposed the personal information of millions of job seekers when their IT provider connected a development server to the internet.

The State of Vermont also saw their health care exchange – Vermont Connected – compromised in 2014 when a development server was accessed. The state indicates this was not a breach, because the development server didn’t contain any production data.

So, the case is pretty strongly on the side of: internet-facing development servers are a bad idea.

Questions? Comments? What’s your take from the development side? I’d love to hear from you – reach out to @TheTokenFemale on Twitter!