Positive Train Control: Skating away on the thin ice of a new day?

From the movie “The Polar Express.”


That line, “Skating away on the thin ice of a new day,” is from a Jethro Tull song of the same name. (Yes – I am that old 😉 ).

It came to me as I was reflecting on the reading I’ve been doing on the topic of Positive Train Control (PTC).

PTC is an idea rather than any specific technology or architecture. 

Its intent is to minimize the chance that railroad operator error (e.g. a missed signal or communication) leads to loss of life or a major train-to-train collision. It’s “positive” in that a positive action (e.g. stopping the train) is taken automatically if an operator does not respond appropriately.

Think of a pinball machine with a “smart,” controllable pinball.

In the US, PTC is mandated for a large subset of railroads by the Rail Safety Improvement Act of 2008. The current target date is December 31, 2018, moved from the original deadline of December 31, 2015, when it became clear that a number of railroads would not be able to meet it. Even now there are provisions for extending the deadline to December 2020 for railroads that have shown real progress in implementation. Financial penalties for non-compliance eventually kick in.

Several highly publicized rail accidents were the motivation for the law, notably the 2008 Chatsworth collision, in which a commuter train and a freight train collided, causing 25 deaths. The commuter train engineer missed a stop signal. He had worked a split shift, was likely fatigued, and had been texting repeatedly on his cell phone prior to the collision.

So – all good, right?  Government acting responsively to protect citizens?

Well… yes.   But…

PTC is not a prescribed architecture. It’s a desired outcome. Industry groups, often dominated by the big Class 1 freight railroads, are adopting various technologies to accomplish the goal. Interoperability between railroads is an issue, with some railroads having to install 2 or more PTC solutions on their infrastructure to accommodate the different PTC environments that may be encountered on the various track they run on.

PTC also introduces a dependency on software and network communication in an arena in which local human control was traditionally paramount.

And there’s the issue for me.

Given the urgency of the federal mandate and the associated hurried activity, the apparent lack of defined software security standards, and the historic preference for availability over information security that understandably pervades ICS environments, it seems very likely that the solutions being implemented will have exploitable software flaws.

Europe has been down this path much longer than the US – and the problem appears to be very real there, as revealed by the SCADA Strangelove researchers in a “sanitized” presentation on the security of European train control systems a few years ago.

Note: These same researchers published a collection of default logins found hardcoded in some of the equipment sold by PTC vendors here in the US.

Their presentation includes a slide, part of the discussion of what was found when analyzing European train control software, that sums up the findings.

Not a pretty picture.

My take is that “nation-state” attackers will be going after the suppliers of the tech intended to be used for PTC in the US. Going after an industry target by way of its vendors, some of whom can always be relied on to have inadequate cyber-defenses, has a well-established history.

Just ask Target.

Suppliers to the railroad industry, particularly those supplying tech for PTC, will have to start seeing themselves as just that sort of “target” and ensure that their own information infrastructure is secure and monitored, and – most importantly – that their employees are trained to detect and report the inevitable phishing attacks that have become the fast path to compromise.

PTC, on the surface, appears to be an inarguably good idea and has achieved considerable momentum in the US.

But without coordinated consideration of the new cyber attack surfaces that may be opened to America’s enemies, that momentum could well put a critical part of US infrastructure out onto a new, and very dangerous, patch of “thin ice”.

See:

  • SCADA Strangelove presentation:

Videos:

Freight Rail & Positive Train Control

Encrypt That Drive

Promise me you’ll return to this blog piece, but go ahead and open a new tab and search for “stolen laptop.” Filter the search results for a specific year, or narrow the search to a specific industry, e.g. healthcare or financial. Too many results. Too many incidents. The U.S. Department of Health and Human Services, Office for Civil Rights, has a breach portal – https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf – where only incidents involving more than 500 PHI records appear in the database. Search it for theft of laptop.

Laptops stolen from a car, home, or office. Lost, misplaced, or taken in a theft or burglary. All industries have been affected – healthcare systems, clinics and labs, state and city agencies, universities and schools, accounting firms, financial and insurance firms, energy and gas companies, even the largest soda company in the world…

After a laptop is reported stolen, one of the first defensive actions is to disable the laptop’s and the employee’s access to the corporate domain. But removing access to the corporate domain does not disable local access to the stolen laptop’s hard drive. Bypassing the desktop logon can be as easy as a Google search, or simply mounting the hard drive under another operating system. Files and data can then be read in clear text.

Access to the desktop can lead to access to the laptop owner’s personal email or other saved logins. Logins to a 3rd party vendor that may have sensitive information on clients and patients. Or the vendor site may have programmatic or API access for the thief to pivot to another site for additional information and access.

Laptops can contain local databases containing PII or PHI. Or downloaded lab reports for a patient. Or email attachments of tax documents for a mortgage refinancing application. Or credentials to other database portals.

More companies are encrypting the mobile devices they provide their employees, but many still do not. Furthermore, too many employees are accessing work email or downloading client documents on their own personal devices.

An enterprise security program should include encryption of its hard drives, particularly laptop drives. The policy should require encryption of data-at-rest: the additional layer of security in which ALL of the data, the entire physical hard drive, is encrypted.

All the files on the drive remain encrypted when the laptop is powered off. Upon powering on, the user is prompted for a password to decrypt the drive, after which the system continues booting to the operating system logon. Without that first (encryption) password, the drive and all its data – system and data files alike – remain encrypted.
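
To make the data-at-rest idea concrete, here is a minimal sketch of file-level encryption using the open-source Python cryptography package. It only illustrates the concept; whole-volume products (BitLocker, FileVault, LUKS and the like) work below the filesystem and protect everything on the disk, including the operating system, behind that pre-boot password.

```python
# A minimal sketch of data-at-rest encryption at the *file* level, using the
# open-source "cryptography" package's Fernet recipe. Illustration only --
# whole-volume encryption (BitLocker, FileVault, LUKS, etc.) operates below the
# filesystem and protects every file, including the OS, behind a pre-boot password.
from cryptography.fernet import Fernet

# In practice the key would be derived from a passphrase and stored safely,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"DOB 01/01/1990, SSN 000-00-0000 (example sensitive data)"
ciphertext = fernet.encrypt(record)   # what actually sits on disk
print(ciphertext)                     # unreadable without the key

# Only a holder of the key can recover the original data.
print(fernet.decrypt(ciphertext).decode())
```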

The requirement for encrypted hard drives may vary by industry, or by whether the vendor is under a military or government contract or operating under PCI or HIPAA compliance. But if one of your corporate laptops gets stolen and the first thought that crosses your mind is, “I hope it doesn’t contain any of MY sensitive information,” then that laptop needed to be encrypted.

It should be company policy to encrypt all company-issued laptop drives. Even if the drive does not hold any PII or PHI, the work documents, e-mail, and browser history accessible through a stolen laptop can be used to gain further access to sensitive corporate, staff, customer, or client information. Whole-volume encryption will secure the data-at-rest. Well, it’s a start.

Cartoon courtesy https://xkcd.com/

Resources:

https://www.dataev.com/it-experts-blog/why-laptop-encryption-is-a-must-for-all-businesses-not-just-big-ones

https://www.businessnewsdaily.com/9391-computer-encryption-guide.html

Business E-mail Compromise (BEC) Checklist

MSI has recently received requests from a variety of sources for guidance around the configuration and management of business e-mail.

In response, our CEO Brent Huston has created a checklist (link is below) that:

  • Enumerates attack vectors
  • Briefly reviews impacts
  • Lists control suggestions mapped back to the NIST framework model

This is a must read for Security and IT practitioners as it helps to make sure you have your bases covered! As always, if you have questions or want to know more please reach out to Info@microsolved.com!

https://s3.amazonaws.com/MSIMedia/BECChecklist082918.pdf

IoT Smart Devices: The Honeymoon is Over!

What isn’t an Internet of Things device these days?! Companies are literally flooding the consumer market with smart chip-equipped devices you can control with your iPhone or Android (which themselves are equipped with smart chips – sigh!). Smart bike locks, smart egg trays, smart water bottles, smart dental floss dispensers, smart baby-changing pads!! These are all real devices.

What’s next?! Smart bicycle seats? Smart toenail clippers? Smart butter boxes? Smart golf balls? Actually, I kinda like that one. After I hit a smart ball it could tell me where it was, how fast the club was going when I hit it, which way it was spinning, all kinds of little things that really don’t matter much, but that you love to know.

The rub is that almost any of these devices may be a conduit that lets attackers hack their way into your network… your computer… your bank accounts… your life! The same is true for businesses. Maybe it’s the smart coffee pot in the break room, or the smart TV monitor in the boardroom, or the smart device controller on the CEO’s desk. You may scoff at the danger, and I don’t blame you really. But I see things every day on our threat intelligence engine that challenge one’s credulity.

Here is an item from this week’s feed about how it’s possible to covertly exfiltrate user data using smart light bulbs. Researchers Anindya Maiti and Murtuza Jadliwala from the University of Texas studied how LIFX and Philips Hue smart bulbs receive commands for playing visualizations into a room, and developed a model for interpreting the brightness and color modulations that occur while music or video is playing. This can be used to exfiltrate data from personal devices, and to determine multimedia preferences by recording luminance patterns.

In another article this week, it was reported that Mirai botnet variants are increasingly being developed to take advantage of IoT devices, upping the malware’s ability to run on different architectures and platforms. At the end of July, a live remote server was identified hosting multi-platform malware that sequentially tries downloading and executing binaries until one compliant with the current architecture is found.

One of the reasons attackers are targeting IoT devices so vigorously is because of the industry itself. Manufacturers are developing products and shoving them into the marketplace so quickly that little proper security planning is being done. Most of the products on the market are not receiving patches and updates, even against well-known exploits that exist in the wild right now.

In my opinion, it’s time to wake up to the reality of what we are doing and apply proper security mechanisms to these devices; they should be treated like any other network device with an IP address. First, don’t connect your devices to the Internet unless you need to. The fact that your coffee pot and washing machine are capable of being run over the Internet is no reason to actually do so.

Next, if you really want to control a device remotely, make sure you are the only one that can access it. Change any default access credentials you can. Use strong passwords, and if you are really with it, apply multi-factor authentication to devices.

Ensure that you keep track of any updates that are available for device firmware and software and apply them to your own devices. Also make sure you keep yourself aware of any known vulnerabilities that can affect your devices. In addition, ensure that the device is configured as securely as possible. The rule of thumb is to turn off everything you can, and then only enable those features that you actually want.

Monitor these devices. Apply security monitoring software to them if you can. If not, monitor the devices yourself. Check the logs and see who/what is trying to touch your devices and if there has been any success. Also, consider making a special network segment just for IoT devices that has no direct connection to your other networks.
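
As a simple starting point, here is a minimal sketch (the addresses and port list are hypothetical) of the kind of quick check you can script yourself: walk your IoT segment and see whether anything answers on services you never intended to expose. It is not a substitute for real security monitoring.

```python
# A minimal sketch: probe the smart devices on an IoT segment for a handful of
# commonly exposed services. Device IPs and the port list are examples only.
import socket

IOT_DEVICES = ["192.168.50.10", "192.168.50.11"]      # example device IPs
COMMON_PORTS = {23: "telnet", 80: "http", 443: "https",
                1883: "mqtt", 8080: "alt-http"}

for host in IOT_DEVICES:
    for port, service in COMMON_PORTS.items():
        try:
            # create_connection raises OSError if the port is closed/filtered.
            with socket.create_connection((host, port), timeout=2):
                print(f"[!] {host}:{port} ({service}) is open")
        except OSError:
            pass
```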

I know that most of you are groaning right now. I’m sure you already have plenty of tasks and considerations to occupy your time. But if you want the conveniences and the bells and whistles provided by smart devices, you need to pay the bill.

Do You Have Production Data in your Test Environment?

We’ve talked about development servers, and the perils of internet facing development environments.  Now, let’s talk about what is IN your development environment.

Another issue we run into fairly often with dev environments: they are set up to use production data, and sometimes that data is piped in directly each night with no modification. This not only risks exposing the data through vulnerabilities in the development environment, but could also allow a contractor or unauthorized employee to view sensitive information.

On many assessments in the past we have found and accessed such data through various mechanisms. In some cases the production data remained but “test” users with weak passwords were able to authenticate. In other cases, system misconfiguration or missing patches allowed access to the application and the real data inside it. Developers might also leave production data, or fragments of it, on their laptops, which offers yet another way for that data to be exposed.

So if you are currently using production data in development environments, what can be done? Encrypting the database and the fields containing PII certainly helps. There’s no one-size-fits-all solution, but here are a few suggestions that can be used depending on the nature of the data and the application’s requirements. Care must be taken to ensure that data that needs to pass checksum tests (such as credit card numbers) will still pass, without modifying the application code.

Examples of what can be done so there isn’t sensitive data in test environments:

  • Apply data masking to the real data. Data masking changes the data so that it keeps the structure of production data but isn’t real. If you use Oracle, the Enterprise edition has a built-in feature for this, the “Oracle Data Masking Pack”. SQL Server 2016 introduced a similar feature named “Dynamic Data Masking”.
  • Use scripts to generate fake data (a minimal sketch appears below).
  • Maintain a curated database of invalid data.
  • If tests require real data, ensure that at least all PII or other equally sensitive data is masked and encrypted.
  • Don’t forget to use different environment variables, such as a different database password.

These are just some examples of what can be done to reduce the risk of sensitive data being leaked. Data masking is often the most viable solution, as it is now built into many databases. You can also look at tools such as Mockaroo, which help generate test data.
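
As an illustration of the “generate fake data” approach, here is a minimal sketch that assumes the third-party Python faker package and invents its own field names. It also hand-rolls a Luhn check digit so synthetic card numbers still pass the checksum tests mentioned above.

```python
# A minimal sketch of generating fake-but-plausible test records with the
# "faker" library, plus a Luhn check digit so synthetic card numbers still
# pass checksum validation. Field names are invented for illustration.
from faker import Faker

fake = Faker()

def luhn_check_digit(payload: str) -> str:
    """Return the Luhn check digit for a numeric payload string."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:        # double every second digit, starting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def fake_customer_record() -> dict:
    payload = "4" + "".join(str(fake.random_digit()) for _ in range(14))
    return {
        "name": fake.name(),
        "email": fake.email(),
        "card_number": payload + luhn_check_digit(payload),  # 16 digits, Luhn-valid
    }

if __name__ == "__main__":
    for _ in range(3):
        print(fake_customer_record())
```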

 

It’s Dev, not Diva – Don’t set the “stage” for failure

Development: the act, process, or result of developing, the development of new ideas. This is one of the Merriam-Webster definitions of development.

It doesn’t really matter what you call it…dev, development, stage, test. Software applications tend to be in flux, and the developers, programmers, testers, and ancillary staff need a place to work on them.

Should that place be out on the internet? Let’s think about that for a minute. By their very nature, dev environments aren’t complete. Do you want a work in progress, with unknown holes, to be externally facing? This doesn’t strike me as the best idea.

But, security peeps, we HAVE to have it facing the internet – because REASONS! (Development types…tell me what your valid reasons are?)

And it will be fine – no one will find it, we won’t give it a domain name!

Security through obscurity will not be your friend here…with the advent of Shodan, Censys.io, and other venues…they WILL find it. Ideally, you should only allow access via VPN or other secure connection.

What could possibly go wrong? Well, here’s a short list of SOME of the things that MSI has found or used to compromise a system, from an internet facing development server:

  • A test.txt file with sensitive information about the application, configuration, and credentials.
  • Log files with similar sensitive information.
  • .git directories that exposed keys, passwords, and other key development information.
  • A development application that had weak credentials was compromised – the compromise allowed inspection of the application, and revealed an access control issue. This issue was also present in the production application, and allowed the team to compromise the production environment.
  • An unprotected directory that contained a number of files including a network config file. The plain text credentials in the file allowed the team to compromise the internet facing network devices.

And the list keeps going.
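
As a rough illustration (the host name and file paths below are hypothetical), this is the kind of trivial probe an attacker, or your own team during a review, can run against an internet-facing dev server to spot leftovers like those above:

```python
# A minimal sketch of checking a dev host for commonly forgotten artifacts.
# The target URL and paths are examples only; this is not a full scanner.
import requests

DEV_HOST = "https://dev.example.com"   # hypothetical internet-facing dev server
LEFTOVERS = [
    "/test.txt",       # stray notes with configs or credentials
    "/debug.log",      # verbose log files
    "/.git/HEAD",      # exposed repository metadata
    "/.env",           # environment files with secrets
    "/backup.zip",     # forgotten archives
]

for path in LEFTOVERS:
    try:
        resp = requests.get(DEV_HOST + path, timeout=5, allow_redirects=False)
    except requests.RequestException as exc:
        print(f"{path}: request failed ({exc})")
        continue
    if resp.status_code == 200:
        print(f"[!] {path} is exposed ({len(resp.content)} bytes)")
    else:
        print(f"    {path}: HTTP {resp.status_code}")
```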

But, security peeps – our developers are better than that. This won’t happen to us!

The HealthCare.Gov breach https://www.csoonline.com/article/2602964/data-protection/configuration-errors-lead-to-healthcare-gov-breach.html in 2014 was the result of a development server that was improperly connected to the internet. “Exact details on how the breach occurred were not shared with the public, but sources close to the investigation said that the development server was poorly configured and used default credentials.”

Another notable breach occurred in 2016 – the IT outsourcing company Capgemini https://motherboard.vice.com/en_us/article/vv7qp8/open-database-exposes-millions-of-job-seekers-personal-information exposed the personal information of millions of job seekers when a development server it managed for a client was connected to the internet.

The State of Vermont also saw its health care exchange – Vermont Health Connect – compromised in 2014 https://www.databreachtoday.asia/hackers-are-targeting-health-data-a-7024 when a development server was accessed. The state indicated this was not a breach, because the development server didn’t contain any production data.

So the case lands pretty strongly on one side: internet-facing development servers are a bad idea.

Questions? Comments? What’s your take from the development side? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!


Stopping the Flow of Business: EDI as a Natural Gas Pipeline Attack Vector

In the not-too-distant past I was involved in helping secure the information infrastructure of a major EDI “VAN”.

How’s that  for gibberish?   Some definitions are in order:

EDI = “Electronic Data Interchange”.  Effectively, a collection of standards for the encoding of documents such as invoices, purchase orders, bills of lading, medical information, and – it seems – information pertaining to the business of buying, selling and moving natural gas.

EDI dates from the 1970s. It took advantage of pre-Internet communication mechanisms but was quickly adapted to the Internet, and likely will be to blockchain.

EDI “trading partners” can communicate directly, but often they rely on  third-party EDI specialists to handle communication with their various trading partners.  These are the EDI “Value Added Networks” (VAN).

EDI is the unsung hero of modern commerce.

Everything we buy or sell has a secret life as an EDI document. Usually a number of them.

Not surprisingly, natural gas pipeline companies use EDI in the running of their business, communicating information about availability and pricing to their customers and government.  A few months ago,  the business of some natural gas pipeline companies was disrupted by the sudden unavailability of those EDI services.

The attack, in March 2018, was directed against a central provider of EDI services to several major natural gas pipeline operators. Although it did not affect actual in-field operations, it did stop all normal business traffic for several days, causing confusion and a fall-back to alternate communication mechanisms.

Of greater concern was the loss of potentially sensitive information about internal business structure, all of which can be inferred from the ebb and flow of EDI data.  Such information can be invaluable to an attacker and in this case can be an aid in eventually attacking actual pipeline operations.

The point here is that it is easy to view such operations as strictly an ICS security concern, and to assume that with proper segmentation of business from ICS infrastructure all will be well.

I’ve had some experience in that ICS world over the last few years and know that segmentation is often incomplete at best. Even when segmentation is present, your business can still be vulnerable to attacks on exposed business systems that have process flow links to ICS.

What to do?

  • Know how you use EDI and what your supporting infrastructure is.
  • Know who your EDI providers are and what security measures they employ.
  • Do a business impact analysis of your EDI environment. What happens if it goes away?
  • Ensure you really do have segmentation of your business and ICS worlds. Make sure the places they touch are known, secured, and monitored.

See:

EDI defined: 

https://www.edibasics.com/what-is-edi 

https://en.wikipedia.org/wiki/Electronic_data_interchange

https://www.edibasics.com/edi-resources/document-standards

Natural Gas Industry Usage of EDI:

http://latitudestatus.com/

https://www.naesb.org/pdf4/update031413w4.docx

Quote: “The NAESB wholesale natural gas cybersecurity standards facilitate an infrastructure of secure electronic communications under which the electronic transmission of data via EDI or browser based transactions is protected. There are more than fifty separate transactions identified for nominations, confirmations, scheduling of natural gas; flowing gas transactions including measurement, allocations, and imbalances; invoicing related transactions including invoices, remittances, statement of account; and capacity release transactions.”

https://www.edigas.org/faq/

http://www.rrc.texas.gov/oil-gas/applications-and-permits/oil-gas-edi-filing-deadlines/

The Attack:

https://www.eenews.net/stories/1060078327

http://securityaffairs.co/wordpress/71040/hacking/gas-pipeline-operators-hack.html

https://www.bloomberg.com/news/articles/2018-04-03/day-after-cyber-attack-a-third-gas-pipeline-data-system-shuts

EDI Security:

https://www.acsac.org/secshelf/book001/18.pdf

Quote:  “EDI security appears at several interrelated stages:

  • The user/application interface,
  • EDI applications and value added services,
  • The processing (both batch and interactive) and storage of EDI messages,
  • The communication of these messages in an open systems environment”

 

They Price It Right! Come on down…

Healthcare from the United States, come on down! Welcome to “They Price It Right!” There goes the industry, high-fiving all the other industries in the studio as it rushes towards Drew Carey and the stage. And pays the ransom.

In 2017, healthcare organizations accounted for 15% of all security incidents and data breaches, second only to financial institutions (from Verizon’s 2017 DBIR). 66% of malware was installed through either email links or attachments. The healthcare industry has also been hard hit with ransomware in recent years.

* The above images captured from Verizon’s 2017 Data Breach Investigations Report

The last several years have seen a dramatic increase in ransomware within healthcare. To quote the CEO of an organization that DID pay out the ransom demand, “These folks have an interesting business model. They make it just easy enough. They price it right.” Symantec’s ISTR on Ransomware 2017 reports the average ransom demand “appears to have stabilized at US$544 indicating attackers may have found their sweet spot.” Ahhh…can just picture the blackmailer getting a notification that their target had succumbed and paid up…that hit the sweet spot.

However, a reminder: a $500 ransom may not seem like much to an organization with millions or billions in revenue, but that’s per infection (sorry, pun not intended as we’re discussing the healthcare industry). Dozens or hundreds of infections can easily push the total into the tens or hundreds of thousands of dollars – at the $544 average, 200 infected machines works out to roughly $109,000.

Furthermore, paying the sweet-spot ransom does not guarantee even a bittersweet outcome. SentinelOne’s 2018 Ransomware Study shows that 42% of ransom payments did not result in data recovery, and that in 58% of cases a second payment was demanded.

* The above image captured from SentinelOne’s Global Ransomware Study 2018

Most ransomware is delivered through email. Phishing. Spearphishing. Targeted attacks. Email addresses for an organization can easily be harvested using readily available open source tools, and it takes maybe 15 minutes to build a phishing campaign against the newly found targets, with a link or a malicious attachment. The pretext can be anything: something social-media related, a password that needs to be reset, a package that couldn’t be delivered, a memo from the CEO addressed to all staff. The recent Russian indictments – regardless of the reader’s political leanings – are proof that PHISHING WORKS! (Also blogged about here on stateofsecurity.com.)

Technology has come a long way – email filters, domain verification, Sender Policy Framework, malware and link scanners, plus many more – and helps filter out the 50-70% of email traffic that is spam. But messages still get through. I know my own inbox is neither spam-free nor devoid of phishing messages.

Since technology is not at the point where it can stop all phishing email, it is up to the user NOT to click on that link or attachment. Sure, there are technologies that prevent bad things from happening if a user DOES click on a phishing link or malicious attachment. But then again, technology is not at the point where it is 100% effective.

Businesses with big budgets buy all kinds of hardware and software solutions to try to counter phishing. But they ignore a big piece of the phishing attack model: the end user. And here, education and training are imperative.

Repeated phishing exercises should be conducted against all employees, or selected groups of them. These campaigns should run at not-too-regular intervals, so as not to create anticipation among employees – alright, here comes the vaguely suspicious email on the first day of each quarter; I’ll just delete it – while they blithely open, view, and click any and all email links the rest of the year. The simulated campaigns should be randomized and as unexpected as possible.

These campaigns should also be followed up with some education, whether static web pages, a training video, or a live in-person session. Phishers are always coming up with new tricks and methods, and end users should be brought up to speed on them. A couple of academic research papers on the efficacy of phishing training demonstrate that EDUCATION WORKS! (links under Resources below)

Then there needs to be a culture of non-retribution. Phishing exercises should be conducted with learning as the objective. Employees should come away with a heightened awareness of phishing and the social engineering tricks used by phishers that make you just want to click that link/attachment.

Employees should be encouraged to report any suspicious email so that word gets around. Homeland Security’s “See something, say something” campaign applies here too; someone is perhaps targeting your firm, alert your fellow colleagues.

Resources:

https://www.verizonenterprise.com/resources/reports/2017_dbir_en_xg.pdf

https://go.sentinelone.com/rs/327-MNM-087/images/Ransomware%20Research%20Data%20Summary%202018.pdf

https://www.healthcaredive.com/news/must-know-healthcare-cybersecurity-statistics/435983/

https://www.symantec.com/content/dam/symantec/docs/security-center/white-papers/istr-ransomware-2017-en.pdf

https://blog.barkly.com/phishing-statistics-2016

http://www.cs.cmu.edu/~jasonh/publications/apwg-ecrime2007-johnny.pdf

https://www.usenix.org/system/files/conference/soups2017/soups2017-lastdrager.pdf

https://www.dhs.gov/see-something-say-something/about-campaign

Lighting up BEC, not Bic – Business Email Compromise…

What’s a bit of spam and a bit of phishing, right? It’s all the cost of doing business…until you look at what it really CAN cost your business.

The latest statistics from the Internet Crime Complaint Center (IC3) are enlightening – taken directly from the IC3 site:

The following BEC/EAC statistics were reported to the IC3 and are derived from multiple sources, including IC3 and international law enforcement complaint data and filings from financial institutions between October 2013 and May 2018:

Domestic and international incidents: 78,617
Domestic and international exposed dollar loss: $12,536,948,299

The following BEC/EAC statistics were reported in victim complaints where a country was identified to the IC3 from October 2013 to May 2018:

Total U.S. victims: 41,058
Total U.S. exposed dollar loss: $2,935,161,457
Total non-U.S. victims: 2,565
Total non-U.S. exposed dollar loss: $671,915,009

The following BEC/EAC statistics were reported by victims via the financial transaction component of the IC3 complaint form, which became available in June 2016. The following statistics were reported in victim complaints to the IC3 from June 2016 to May 2018:

Total U.S. financial recipients: 19,335
Total U.S. financial recipients exposed dollar loss: $1,629,975,562
Total non-U.S. financial recipients: 11,452
Total non-U.S. financial recipients exposed dollar loss: $1,690,788,278

That’s billions with a B…and the dollars and cents cannot measure the intangible costs like reputation, consumer confidence, etc.

What are the growing targets and vectors of compromise? Financial transactions of all kinds tend to be the low-hanging fruit: real estate transactions, wire transfers, anything with a routine, methodical process, where information requests are constant and a change of source or target would not be unusual. What’s another call from the bank asking to verify your account information for payment? Another wire transfer request from the CFO?

There are also information breaches to consider. Let’s look at DocuSign for a moment – their own statement admits that email addresses were compromised, but indicates that additional personal information was not at risk. That statement is a bit misleading. A threat actor could collate additional information from other sources to make an attack appear legitimate – and the fact that these addresses came from DocuSign means their owners would legitimately expect to receive email FROM DocuSign! In sales, that’s a pre-qualified lead, and it’s no less valuable to an attacker.

Another high-profile incident is the indictment of Russian operatives in the DCCC and DNC compromise – MSI has written about that here.

Add the preponderance of mobile devices, webmail, and online portals of all kinds to your business… it’s a risk. And any breach of your business data, client/customer data, and/or employee data is a high-profile risk to YOU. MSI has had a number of clients this year with compromises of Office 365 email accounts, externally facing administrative accounts, wire transfer issues, etc. On a personal level, individuals have had fraudulent tax returns filed under their SSNs, etc. Size is irrelevant when it’s your data (and money) at risk.

So, what can you do to protect yourself, and your company? Email filtering, mobile device management, and other security measures can help – but the one measure that is consistently most effective against these attacks is MFA – multi-factor authentication. MFA is, at its core, something you know and something you have.

Often, this is an SMS code, or something physical like an RSA hard or soft token. However, do not rule out MFA for less technical transactions. In a situation where the CFO emails in a wire transfer, also add a vocal component: the individual must call and answer a challenge-response question.
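
For the soft-token flavor, here is a minimal sketch of a time-based one-time password (TOTP) check using the open-source pyotp Python library – the scheme behind many authenticator apps. It is illustrative only, not a description of any particular vendor’s product; a real deployment stores the secret server-side, enrolls users via a QR code, and rate-limits attempts.

```python
# A minimal TOTP "soft token" sketch using the pyotp library. Names and the
# issuer string are examples; this is not production enrollment code.
import pyotp

# Generated once at enrollment and shared with the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI (encode as a QR code for the user):")
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login, the user supplies the 6-digit code from their app ("something you
# have") in addition to their password ("something you know").
submitted_code = totp.now()  # stand-in for user input in this sketch
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted.")
else:
    print("Second factor rejected.")
```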

Are there challenges to implementing MFA? Of course. One of the primary challenges is user resistance – one of my favorite sayings is…change is inevitable, except in vending machines. But humans are wired to see their consistent patterns as a comfort, and you’re asking them to leave their comfort zone.

Another challenge is the technology gap. NIST is no longer recommending SMS as a component of MFA – but if that is all your organization is capable of leveraging, is it better than nothing? That’s a question for your technical and risk staff to consider.

The solution you choose will always NOT work for someone or something in your organization – someone will have a device that is too old or incompatible, and they’ll be high enough up the corporate ladder that allowances will be considered. If you use a hardware token, someone will break it at a critical moment – or the USB token won’t work with their new whiz-bang device.

And once you begin implementation, your organization won’t go from zero to 100% compliant immediately – in addition to dealing with the outliers, you’ll need a transition plan while implementation is underway.

Documented policies and procedures will need to be in place – create these as you go; it will be a less onerous task than doing it after the fact. In the case of our verbal challenge-and-response wire transfer example: where will those procedures be kept, and how will they be protected? They should be safe from easy compromise, yet not invalidate the solution when the primary person is out of the office.

Then there’s the issue of critical software that may need to be externally facing, but doesn’t support MFA. What do you do when the developers cannot implement this in a manner to protect your company? “The program wouldn’t do it” will be of little comfort when you’ve been compromised.

Are the challenges overwhelming? We cannot LET them be, folks. Scroll back up to those numbers – that’s billions with a B. Consider the challenges as things to rise up and meet, in the best way for your organization – rather than mountains that you simply cannot climb.

Questions, comments? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!