Move over Intel – here comes AMD…

Following close behind Spectre, Meltdown, et al., CTS-Labs announced on Tuesday, March 13th that its researchers had discovered 13 new critical security vulnerabilities in AMD’s Ryzen and EPYC processors. The Israel-based company presents the vulnerabilities as allowing attackers not only to access data stored on the processors, but also to install malware.

Of some note is the fact that CTS-Labs appears to have given AMD less than 24 hours to respond to the vulnerabilities, rather than the customary 90-day notice for standard vulnerability disclosure. As such, there is no readily available information from AMD.

Another item of note is that the domain name “amdflaws.com” was registered February 22, 2018. Presumably this belongs to CTS-Labs or an associate.

Ryzen chips typically power desktop and laptop computers, while EPYC processors are generally found in servers. A quick rundown of the vulnerabilities as presented as of this writing:

RYZENFALL – four variants, affects the Ryzen family of processors: This vulnerability purports to allow malicious software to take full control of the AMD Secure Processor. The resulting Secure Processor privileges could allow read and write in protected memory areas, such as SMRAM and the Windows Credential Guard isolated memory. This could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Attackers could also theoretically use this vulnerability in conjunction with MasterKey to install persistent malware on the Secure Processor.

FALLOUT – three variants, affects the EPYC family of processors: This vulnerability purports to allow attackers to read from and write to protected memory areas, such as SMRAM and Windows Credential Guard isolated memory (VTL-1).

Attackers could theoretically leverage these vulnerabilities to steal network credentials protected by Windows Credential Guard, as well as to bypass BIOS flashing protections implemented in SMM.

CHIMERA – two variants, affects the Ryzen family of processors: CTS-Labs purports to have discovered two sets of manufacturer backdoors: one implemented in firmware, the other in hardware (ASIC). The backdoors allow malicious code to be injected into the AMD Ryzen chipset.

The chipset links the CPU to USB, SATA, and PCI-E devices. Network, WiFi and Bluetooth traffic often flows through the chipset as well. The attack potential for this vector is significant, and malware could evade virtually all endpoint security solutions on the market.

Malware running on the chipset could leverage the latter’s Direct Memory Access (DMA) engine to attack the operating system. This kind of attack has been demonstrated.

MASTERKEY – three variants, affects both the Ryzen and EPYC families of processors: Multiple vulnerabilities in AMD Secure Processor firmware allow attackers to infiltrate the Secure Processor.

This vulnerability purports to allow the deployment of stealthy and persistent malware, resilient against virtually all security solutions on the market. It also appears to allow tampering with AMD’s firmware-based security features such as Secure Encrypted Virtualization (SEV) and Firmware Trusted Platform Module (fTPM).

As in RyzenFall, this could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Another consideration is potential physical damage and bricking of hardware. It could also potentially be leveraged by attackers in hardware-based “ransomware” scenarios.

The full whitepaper is here.

Given the continued impact of the Intel patches on performance and stability, and conflicts with other vendor products – hardware and software – hang on, folks. We’re going to see some chaos in this space.

What are your thoughts? Do you feel the responsible disclosure path is to give manufacturers the customary 90 day window, or is immediate disclosure of risk preferable to you?

Let me know what you think. I can be reached at lwallace@microsolved.com, or on Twitter as @TheTokenFemale

Enter the game master….disaster recovery tabletops!

I snagged this line from the most excellent Lesley Carhart the other day, and it’s been resonating ever since.

“You put your important stuff in a fire safe, have fire drills, maintain fire insurance, and install smoke detectors even though your building doesn’t burn down every year.”

When’s the last time you got out your business continuity/disaster recovery plan, dusted it off, and actually READ it? You have one, so you can check that compliance box…but is it a living document?

It should be.

All of the box checking in the world isn’t going to help you if Step #2 of the plan says to notify Fred in Operations…and Fred retired in 2011. Step #3 is to contact Jason in Physical Security to discuss placement of security resources…and Jason has changed his cell phone number three times since your document was written.

I’ve also seen a disaster recovery plan, fairly recently, that discussed the retrieval and handling of some backup….floppy disks. That’s current and up-to-date?

Now, I am an active tabletop gamer. Once a week I get together with like-minded people to roll the dice and play various board games.

For checking the validity of your disaster recovery plan there is an excellent analog to the tabletop gaming world:

Tabletop DR exercises!

Get BACK here….I see you in the third row, trying to sneak out. I’ll admit, I LOVE doing tabletops. Hello? I get to play game master, throw in all kinds of random real life events, and help people in the process – that’s the trifecta of awesome, right there. If it’s a really good day, I get to use dice, as well!

The bare minimum requirements for an effective tabletop:

  • A copy of your most recent DR/BC plan
  • Your staff – preferably cooperative. Buy ’em a pizza or three, will you? The good kind. Not the cheap ones.
  • An observer. This person’s job is to review your plan in advance, and observe the tabletop exercise while taking notes. They will note WHAT happens, and what actions your team takes during the exercise. This role is silent, but detail oriented.
  • And the game master. The game master will present the scenario to the team. They will interact with the team during the exercise, and will also be the one who generates the random events that may throw the plan off track. It’s always shocking to me how many people would rather be the observer….to me, game master is where the fun is.

Your scenario, and the random event happenings, should fit your business. I tend to collect these for fun….and class them accordingly. A random event where all credit card transactions are doubling due to an error in the point-of-sale process is perfect for a retail establishment…but an attorney’s office is going to look at me like I have three heads.

Once the exercise is over, the game master and observer should go over all notes, and generate a report. What did the team do well, what fell off track, what updates does the plan need, and what is missing from the plan entirely?

Get the team together again. Buy ’em donuts – again, the good ones. Good coffee. Or lunch. Never underestimate the power of decent food on technical resources.

Try to start on a high note, and end on a high note. Make plans, as you review – what are the action items, and who owns them? When and how will the updates be done? When will you reconvene to review the updates and make sure they’re clear and correct?

Do this, do it regularly, and do NOT punish for the outcome. It’s an exercise in improvement, always…not something that your staff should dread.

Have a great DR exercise story? Have a REALLY great random event for my collection? I’d love to hear it – reach out. I’m on Twitter @TheTokenFemale, or lwallace@microsolved.com

Because you know it’s all about them apps, ’bout them apps…

Know thyself – Socrates

I ran across this link last week, from SANS, and it’s one of the better basic checklists I’ve seen for application security. With all due respect to OWASP, their information is more technical, and useful for practitioners – their testing guide is here. For the CIO level crowd, I’d highly recommend a look at their top 10 for 2017. And a serious nod to Bill Sempf – if you haven’t heard his talk about care and feeding of developers in the security space, go find it!

Since this missive was designed to have pretty pictures and convince you to send your developers to the SANS courses listed, it’s a nice start for security practitioners who may need to work with developers, but aren’t 100% versed in application security. Some of this info is more basic than OWASP’s, as well – which does not diminish its importance. Let’s talk about what they list here, and why it’s important.

Error handling and logging:

Don’t display the specific error messages generated by your programs/architecture, and don’t allow unhandled exceptions – both of these items can display information about the underlying architecture of your application. Attackers can leverage this information and any associated vulnerabilities to compromise the application. If the user creates a condition that generates an error, offer them enough information to fix the problem – nothing more, nothing less.

Don’t allow specific framework errors…”the X program says you broke Y variable” – suppress them. Allowing these errors discloses potentially useful information about the framework and architecture to attackers.
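To make that concrete, here’s a minimal Python sketch (mine, not from the SANS checklist) of the pattern: the full exception detail goes to the server-side log, and the user only ever sees a generic message with a reference ID.

```python
import logging
import uuid

logger = logging.getLogger("app")

def handle_request(process, request):
    """Run a request handler; never leak stack traces or framework details to the user."""
    try:
        return process(request)
    except Exception:
        # Generate a reference ID so support can correlate the user's report
        # with the full detail that stays in the server-side log.
        error_id = uuid.uuid4()
        logger.exception("Unhandled exception (ref %s)", error_id)
        # The user sees only a generic message and the reference ID.
        return {"status": 500,
                "message": f"Something went wrong. Reference: {error_id}"}
```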

Log all the things! Log authentication attempts – successful or not. Log privilege changes – successful or not. Log all administrative activity, or administrative attempts. Log any and all access and access attempts to sensitive information.

Log all the things….except when you don’t. Don’t log sensitive information. Log the admin attempts, but not admin passwords. Don’t log any information that falls under HIPAA, PCI, or other regulatory spheres.
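Here’s a rough illustration of that balance in Python – the field names are made up for the example, but the idea is to record the attempt while refusing to write the sensitive bits:

```python
import logging

audit_log = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO)

# Fields that must never appear in a log entry (illustrative list).
SENSITIVE_FIELDS = {"password", "ssn", "card_number"}

def log_auth_attempt(username: str, success: bool, source_ip: str, **details):
    """Record every authentication attempt, success or failure, minus sensitive data."""
    safe_details = {k: v for k, v in details.items() if k not in SENSITIVE_FIELDS}
    audit_log.info("auth_attempt user=%s success=%s ip=%s extra=%s",
                   username, success, source_ip, safe_details)

# Example: the password the user typed is passed in but never written out.
log_auth_attempt("fred", False, "203.0.113.7", password="hunter2", mfa_used=False)
```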

Store logs securely. Plain text on an internet-facing share? Not the world’s best idea. Encrypt, secure, and protect against data loss and tampering. If you have a data retention policy, make sure that logs are included and the policy is followed.

Data Protection:

Turn ON HTTPS, turn OFF HTTP. The same URL should not be accessible via HTTP. Get your HTTPS certificates from a respectable CA – no self-signed certificates. Accepting them is bad practice, and you risk giving the impression that you haven’t done your due diligence, AND conditioning your users to bypass this simple security measure.
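If you’re wondering what “the same URL should not be accessible via HTTP” looks like in practice, here’s one hedged example using Flask – illustrative only; behind a load balancer or proxy you’d typically check the forwarded-protocol header instead of request.is_secure:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Any request that arrives over plain HTTP is permanently redirected
    # to the same URL over HTTPS before it reaches application code.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.route("/")
def index():
    return "Hello over HTTPS"
```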

Disable weak ciphers. Don’t wait for the 4,732nd vulnerability, and don’t argue that these vulnerabilities are difficult to exploit. The NEXT one might not be. Get your SSL/TLS house in order.

Don’t allow auto-complete. Yes, some browsers will ignore things – their bad practices shouldn’t be used to justify your bad practices.

Avoid storing user info. Tokenize when possible. If you have to store passwords, hash, salt, spindle, mutilate, and fold. There’s no such thing as TOO safe here.
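For the “salt, spindle, mutilate and fold” crowd, here’s a small Python sketch using the standard library’s PBKDF2 – the parameters are illustrative, not a recommendation tuned for your environment:

```python
import hashlib
import hmac
import secrets

# Iteration count is illustrative; tune to your hardware and current guidance.
ITERATIONS = 310_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) for storage; never store the plaintext password."""
    salt = secrets.token_bytes(16)               # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```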

Operations:

Have a consistent, repeatable process for…application development, testing, change control. Include security requirements at the beginning of the design – don’t try to shoehorn them in after the fact.

Review, review, review. Code reviews. Design reviews. Security testing – as you go, not at the end. Harden the environment per best practices.

Train your developers on security! Work as partners, not as the guys who make stuff and those security guys that always say no.

Have an incident response plan. TEST your plan, evaluate your plan, use your plan. Do not wait til something DOES happen to discover the holes in your plan. Keep your plan updated as staff contacts and responsibilities change. Do disaster recovery exercises.

Authentication:

Hard coded credentials. Don’t. Just don’t. But I need to because….no. You do not. There are safer ways to do this.

Have a strong password policy. Have a strong password reset – do not accidentally disclose things like the validity of an account via the password reset mechanism. Do have a password lockout policy – unlimited attempts is an invitation to a brute force attack.
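A toy Python example of the lockout-plus-vague-error idea – the counter store and messages here are placeholders, but notice that the failure message never confirms whether the account exists:

```python
from collections import defaultdict

MAX_ATTEMPTS = 5
failed_attempts = defaultdict(int)   # in production, persist this per account

def check_login(username: str, password_ok: bool) -> str:
    """Lock an account after repeated failures instead of allowing endless guessing."""
    if failed_attempts[username] >= MAX_ATTEMPTS:
        return "Account locked. Contact support to unlock."
    if password_ok:
        failed_attempts[username] = 0
        return "Login successful."
    failed_attempts[username] += 1
    # Same message whether the account exists or not, so the response
    # doesn't confirm valid usernames to an attacker.
    return "Invalid username or password."
```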

Again, make sure your error messages aren’t handing valuable information to attackers.

Run applications and middleware with the least privilege required. Database passwords are gold – do not put them in code. Guard them. But I need to because…again, you do not. Do it right, don’t do it over.

Session management:

Put a logout button on every page. Every. Page. Then, invalidate the session once they’ve logged out – no back button resumption of the session.

Randomize your session tokens, so that they are not vulnerable to predictive attacks. Regenerate them as user permissions change. Unless the application legitimately requires multiple simultaneous connections – and you have a real need to DO this – destroy tokens that show up in more than one session. Don’t leave yourself open to session cloning.
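Here’s a bare-bones sketch of those session token rules in Python – random tokens, rotation when privileges change, and server-side invalidation on logout so the back button can’t resume the session. The in-memory store is a stand-in for whatever session store you actually use:

```python
import secrets

SESSIONS = {}   # token -> session data; in production use a server-side store

def create_session(user_id: str, role: str) -> str:
    """Issue a cryptographically random, unpredictable session token."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user_id": user_id, "role": role}
    return token

def regenerate_session(old_token: str) -> str:
    """Rotate the token whenever privileges change (e.g., after login or role change)."""
    data = SESSIONS.pop(old_token)          # the old token is destroyed immediately
    return create_session(data["user_id"], data["role"])

def logout(token: str) -> None:
    """Invalidate the session server-side; a stale token can no longer resume anything."""
    SESSIONS.pop(token, None)
```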

Cookies. And not the chocolate chip kind. Set the domain and path correctly. Use secure cookie attributes, and expire cookies as appropriate.
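As an illustration (Flask is used here only as an example framework, and the domain and values are placeholders), setting those cookie attributes might look like this:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    response = make_response("Logged in")
    response.set_cookie(
        "session",
        "token-value-goes-here",   # placeholder; use a random token in practice
        domain="app.example.com",  # scope the cookie to the host that needs it
        path="/",
        secure=True,               # only sent over HTTPS
        httponly=True,             # not readable from JavaScript
        samesite="Lax",            # limits cross-site sending
        max_age=1800,              # expire after 30 minutes
    )
    return response
```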

Log users out automatically after reasonable idle periods. Implement an absolute logout – there are few, if any, legitimate reasons to be logged in forever.

Input & Output handling:

Whitelist over blacklist. Only accept data that meets the criteria for your application.

Validate, validate, validate. Validate uploaded files – consider all uploads as suspect, and sandbox accordingly. Validate input sources.
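A quick whitelist-style sketch in Python – the patterns and extensions are examples only; yours should match what your application genuinely needs to accept:

```python
import re

# Whitelist patterns: only input matching these shapes is accepted at all.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")
ALLOWED_UPLOAD_EXTENSIONS = {".png", ".jpg", ".pdf"}

def validate_username(value: str) -> str:
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("Username does not match the allowed pattern")
    return value

def validate_upload(filename: str) -> str:
    # Treat every upload as suspect: check the extension against a whitelist
    # here, then scan and sandbox the file before it is ever processed.
    lowered = filename.lower()
    if not any(lowered.endswith(ext) for ext in ALLOWED_UPLOAD_EXTENSIONS):
        raise ValueError("File type not on the accepted list")
    return filename
```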

Follow the OWASP recommendations, many detailed in the link above, for input, output, and safe transport.

Access control:

Apply access controls consistently. Use “gate keeper” technology, so that all requests are validated and verified, whether the user is logged in or not.
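One way to picture the “gate keeper” idea is a single decorator every handler must pass through – this is a sketch with a placeholder permission lookup, not a drop-in implementation:

```python
from functools import wraps

class Forbidden(Exception):
    pass

def current_user_permissions(request) -> set:
    # Placeholder lookup: in a real application this comes from the
    # authenticated session, never from anything the client supplies.
    return request.get("permissions", set())

def gate_keeper(required: str):
    """Every handler passes through the same access check; there is no 'trusted' path around it."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(request, *args, **kwargs):
            if required not in current_user_permissions(request):
                raise Forbidden(f"Missing permission: {required}")
            return handler(request, *args, **kwargs)
        return wrapper
    return decorator

@gate_keeper("reports:read")
def view_report(request, report_id):
    return f"report {report_id}"
```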

Don’t allow unvalidated forwards or redirects. These give an attacker a potential path to content without authentication.

Least privilege rules. Make access control mandatory, don’t elevate rights when you don’t absolutely need to. Don’t use direct object references to validate access.
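And for the direct object reference point, a small sketch of indirect references – the client only ever sees random, session-scoped handles, never your real record IDs:

```python
import secrets

def build_reference_map(real_ids: list[str]) -> dict[str, str]:
    """Give the client random per-session handles instead of raw database IDs."""
    return {secrets.token_urlsafe(8): real_id for real_id in real_ids}

# Stored server-side in the user's session: the handles map only to records
# this user is allowed to see, so a guessed or tampered ID goes nowhere.
session_refs = build_reference_map(["invoice-1001", "invoice-1002"])

def fetch_invoice(refs: dict[str, str], handle: str) -> str:
    real_id = refs.get(handle)
    if real_id is None:
        raise PermissionError("Unknown reference")
    return f"loading {real_id}"
```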

There’s a lot more than I’ve included here….don’t understand these? Need more info? Talk to your developers. Buy ’em a burger. Buy ’em a beer. Become the guy who listens, and attempts to understand….not the jerk who always says no. If you make an honest effort to understand them, and to help them understand you, you’ll both be better for the attempt.

Got a development war story? Got a good development story? Please reach out – @TheTokenFemale on Twitter. Let’s keep the conversation going.

Scope….or, why can’t you just send me a form?

Scoping….the process of gathering data to put together a statement of work for a client.

To be 100% honest, I love scoping. And MSI doesn’t scope via form letter, although I’ve seen a variety of companies take this approach.

Is it because I want to talk to you? Well, partially – I do enjoy the vast majority of our clients. But here’s where I think the “fill in the form” plan fails.

First, when you’re not engaged in conversation, you’re viewing the client requirements with an eye towards putting a peg in the hole of one of your offerings. Even if that ends up being a square peg in a round hole.

Second, the conversation often takes many twists and turns. As we talk about MSI, and our capabilities…it will happen that what a client asks for isn’t precisely what they need. We can offer a different service, and help them get to their end goal in a different way. And this isn’t always more services…it’s equally likely that it will be less, or a custom variation on a service we already have. The majority of clients don’t fall into “canned” services….and it’s refreshing for them to talk with us when the other vendors they’re engaging are simply dropping them into a slot.

So the first question of any scoping conversation is – what is the purpose? What problem are you trying to solve? Is it regulatory – you have to have X assessment? Is something broken? Or are you trying to become aware of some security gaps – whatever they may be.

That’s the springboard of the conversation, helps us get to know you, and helps you to get the right mix of services. It’s personalized, customized, and based on individual attention from our sales and technical staff.

The next piece of serious hands-on attention comes when we’ve gathered the details for the engagement. Does the information provided make sense? If you’re a financial services firm, and you’ve chosen to be measured against HIPAA, is that really the right choice for you? The push-button approach may miss that.

Another item that’s fairly common is typos or inaccurate information in the network space provided. So we’ll do passive recon on the information provided. Does the IP space really belong to your company? Are you using hosting via AWS, which requires an additional penetration test form? Are you using a host like Rackspace that has additional contract stipulations on penetration testing?

Throughout the engagement, there are more personal touches. Via our project management portal, the engineers working on your engagement touch base every day or every other day as work progresses. If a highly critical issue is discovered, all work stops, and the engineers will get on the phone with you. We don’t believe in a situation where a critical vulnerability is only shared in a report, weeks after the discovery.

Now the reports are in your hands. We keep those reports for ~90 days – after that, all reports are purged from the system. During that 90 days, we can supply replacement copies – we can also supply the password used for encryption, if you’ve misplaced it. Sanitized copies of reports can be produced as well, for dissemination to vendors, clients, regulatory bodies, or any interested parties that you need to share this information with – a small fee may apply.

At the end of the day, the question is – who did you help today? It’s rare for MSI to end the day unable to answer that question in multiple ways. It’s one of my favorite things, and we’d love to help you!

Spectre and Meltdown and Tigers, Oh my….well, maybe not tigers….

On January 3rd, three new vulnerabilities were disclosed. These vulnerabilities take advantage of how various CPUs use speculative and out-of-order execution to return results faster.

The technical details for Spectre and Meltdown are addressed by the papers linked to their names above, along with some POCs from the Project Zero team.

A few observations on how the industry is addressing this issue…and a few points of interest that I’ve found along the way. First, let’s note that the CVEs for these are from 2017…when in 2017? We don’t know. But the catchy domain names were registered around the third week of December, 2017.

The full vendor matrix at CERT – this is always worth watching, and there are some useful tips for cloud implementations via Amazon and Microsoft Azure:

Operating system manufacturers:

Apple

  • Will release updates for Safari and iOS in coming days. Some speculation that macOS 10.13.2 or higher has some protection from one or more variants – not verified
  • https://support.apple.com/en-us/HT208394

Windows

Linux

Some antivirus solutions are causing blue screens after application of these patches:

This is particularly interesting to me – the browsers. I did not expect the browser patch bandwagon to be as rapid as it has been:

Firefox

Internet Explorer

Safari

  • Will be addressed in approximately the same timeframe as Apple iOS patches – current ETA unknown

Chrome

The long and short. Is the sky falling? Probably not. If you have solutions that are hosted with a cloud provider, check in with them. What are their recommended mitigations, and have you implemented them? In an enterprise environment, do your due diligence on patches. Patch in your test environment first, and research your antivirus solution for potential impact.

And I believe I’m paraphrasing the excellent Graham Cluley. Calm down, make a cup of tea – although mine is salted caramel coffee. Patch during your normal cadence for critical patches, and keep the ship afloat!

Office 365 – all your stuff belongs to…who?

We’ve had a surprising number of incident response engagements involving Office 365 lately, and I’d like to discuss some best practices to keep you from an incident. There are also some actions that should be taken to allow effective investigation if you should suspect a user or resource is compromised.

The single most important thing that would have kept most of these incidents from occurring? Enable multi-factor authentication. Period.

Yes, I know. But our users complain! It’s a hassle! It’s an extra step!

Let’s consider carefully. Look at each user in the organization. Consider what they have access to, if their credentials are compromised. Look at these resources in your organization:

  • Exchange
  • All Office 365 documents
  • Sharepoint

Weigh out the user inconvenience vs. the loss of any and all data that they have access to….and discuss with your risk and compliance staff.

Still with me? Here’s how to enable multifactor authentication in Office 365.

If you are on a hosted Office 365 infrastructure, your service provider should be ready and willing to help with this if you do not have access in place to enable the option.

Next, let’s talk about investigating an actual compromise. Microsoft has some fairly robust mailbox audit capability for user access, etc. And…it’s not turned on by default.

Crazy, you say? Just a bit!

First, you need to turn the options on – instructions to do that are here.

Then you need to enable it for mailboxes. Instructions to do that are here.

Please note that this second step requires PowerShell access – so if you are in a managed Office 365 environment, your service provider will likely need to assist. (And don’t take no for an answer!)

There are a number of other options that are useful for fine-tuning the spam and malware settings, enabling DLP, and other useful things that are not on by default – or not configured for the most optimal settings.

Would you like an audit of your Office 365 environment? Our engineers can help you fine tune your settings to optimize the available options. Reach out, we’d love to help.

Questions, comments? Twitter – @TheTokenFemale, or lwallace@microsolved.com. I’d love to hear from you!

We’re all on….Krack?

Yesterday, the media exploded with news of the new Krack attack against WPA2. What’s Krack? While it sounds like a new designer drug, it’s short for – Key Reinstallation Attacks.

What’s that mean in English? Due to a flaw in the WPA2 protocol, a determined attacker can target and break the 4-way handshake and compromise the unique session keys. They can force reuse of a key, and intercept (as well as potentially modify) traffic that was assumed to be safely encrypted. The full whitepaper is here.

The good news, such as it is:

  • This is not a remote attack – the attacker needs to be in range of the affected WiFi connection
  • At this point, there is no evidence of these attacks in the wild. Caveat, at this point.

There have been a couple of reports suggesting that WPA2 should be disabled in favor of WPA or (ack) WEP. These are not better options, as a general rule. WEP in particular is vulnerable far beyond the WPA2 protocol issues.

What can we do, and what should our user base do?

  • As soon as there’s a patch available, take advantage of it.
  • Watch your vendors – who is patching quickly, and who is not? If your vendor is months behind the curve, why? It may be time to look at other options.
  • Wired connections are immune. This should go without saying – this includes tethering directly to mobile devices, not tethering via WiFi.
  • You’re already horribly suspicious of free public WiFi, right? Double that paranoia.
  • If you must use a public WiFi, do not do so without using a VPN or other traffic encryption mechanism. Split tunnel VPN is not your friend here – configure a full tunnel VPN connection
  • Bring your physical security staff into the conversation. They will be integral in observation of potential attackers in the sphere of your physical presence. (Have you checked how far your WiFi signal is traveling lately? Consider checking…)

CERT has a complete list of vendors and their known status here.

Questions, comments, things I haven’t thought about? Reach out, I’d love to hear from you. Twitter @TheTokenFemale at any time.

Time Warner – 320,000 passwords compromised

Knock knock! Who’s there? The FBI….

This is never the way you’d like your day to play out. Last week, Time Warner was notified by the FBI that a cache of stolen credentials that appear to belong to Time Warner customers had been discovered.

At this point, the origin of the usernames and passwords is a bit of a mystery. Time Warner states:

“We have not yet determined how the information was obtained, but there are no indications that TWC’s systems were breached.

The emails and passwords were likely previously stolen either through malware downloaded during phishing attacks or indirectly through data breaches of other companies that stored TWC customer information, including email addresses.

For those customers whose account information was stolen, we are contacting them individually to make them aware and to help them reset their passwords.”

Time Warner customers who have not yet been contacted should still consider changing their passwords – there is no indication at this point whether this is new or previously compromised password data, and a new password is never a bad idea.

Please share with anyone who is using Time Warner systems – friends, co-workers, weird relatives and neighbors as well. Remember that any password that is used twice isn’t a safe password – unique passwords are always the best practice. Password managers (LastPass, KeePass, etc.) are often a good idea to help maintain unique, difficult to decipher passwords.