Let’s Talk… But First “Let’s Encrypt”!

One of the common findings security assessment services encounter when they evaluate the Internet presence of an organization is sites where login credentials travel over the Internet unencrypted.

So: HTTP not HTTPS.

You’ve all seen the “not secure” warnings:

The intent of the browser’s warning is to make the user aware that there is a risk of credential loss with this “http-only” unencrypted login.

The solution is HTTPS of course, but that requires that a “certificate” be created, signed by some “certificate authority” (CA), and that the certificate be installed on the website.

If the user connects via HTTPS (not HTTP), the browser and the server negotiate an encrypted communication channel based on the SSL/TLS protocol and all is well… almost.

The traffic will be encrypted, but if the particular CA that created the website’s certificate is not itself known to the browser (and those trusts are either built-in or added by an IT department) the user may see something like the following:

The browser will give the user the option to trust the certificate and proceed – but this is universally regarded as a bad practice, as it makes the end user more inclined to disregard all such security warnings and thus more susceptible to phishing attacks that intentionally direct them to sites that are not what they claim to be.

The usual solution is to purchase certificates from a known and accepted CA (e.g. DigiCert, Comodo, GoDaddy, Entrust, etc.).

The CAs that issue these certificates will perform some checks to ensure the requester is legitimate and that the domain is in fact actually controlled by the requester. The extent to which identity is verified and the breadth of coverage largely determine the cost – which can be anywhere from tens to thousands of dollars. SSLShopper provides some examples.

Users who browse to such sites will see the coveted “green lock” indicating an encrypted HTTPS connection to a validated website:

The problem is the cost and administrative delay associated with purchasing certificates from accepted CAs, keeping track of those certificates, and ensuring they are renewed when they expire.

As a result, some sites either dispense with HTTPS altogether or rely on “self-signed” certificates, with the attendant risk.

This is where  “Let’s Encrypt” comes in.

“Let’s Encrypt” is an effort led by some of the major players in the industry to lower (read: eliminate) the cost of legitimate HTTPS communications and move the Internet as much as possible to HTTPS (and other SSL/TLS encrypted traffic) as the norm.

The service is entirely free and the administrative costs are minimal. Open-source software (Certbot) exists to automate installation and maintenance.

Quoting from the Certbot site:

Certbot is an easy-to-use automatic client that fetches and deploys SSL/TLS certificates for your webserver. Certbot was developed by EFF and others as a client for Let’s Encrypt … Certbot will also work with any other CAs that support the ACME protocol.

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship.

The software allows “Let’s Encrypt” – acting as a CA – to dynamically validate that the requester has control of the domain associated with the website and issue the appropriate cert.  The certificates are good for 90 days – but the entire process of revalidation can be easily automated.
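The renewal logic that Certbot automates is easy to reason about: with a 90-day certificate and Certbot’s default behavior of renewing once a certificate is within 30 days of expiry, the decision is a simple date comparison. A minimal sketch (dates here are invented for illustration):

```python
from datetime import datetime, timedelta

# Let's Encrypt certificates are valid for 90 days; Certbot's default
# is to attempt renewal once a cert is within 30 days of expiry.
RENEW_WINDOW = timedelta(days=30)

def renewal_due(not_after: datetime, now: datetime) -> bool:
    """Return True if the certificate should be renewed."""
    return now >= not_after - RENEW_WINDOW

# Hypothetical example: a certificate issued on April 1 expires 90 days later.
issued = datetime(2018, 4, 1)
expires = issued + timedelta(days=90)

print(renewal_due(expires, issued + timedelta(days=45)))  # False - 45 days left
print(renewal_due(expires, issued + timedelta(days=65)))  # True - only 25 days left
```

In practice you never write this yourself – a scheduled `certbot renew` run (e.g. from cron) performs exactly this check and re-validates the domain automatically.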

The upshot is that the barriers that have prevented universal adoption of HTTPS have largely been removed. The CAs’ economic model is changing, and some have left the business entirely.

Do you still have HTTP-only sites and are they a chronic finding in your security assessments?

Has cost and administrative overhead been a reason for that?

Take a look at “Let’s Encrypt” now!

One caveat:

Just because a site has a legitimate certificate and is using HTTPS does NOT mean it is necessarily safe. You may find your users are under the impression that it is. As always, user education is a required part of your security program.

See:

HTTPS:

 

“Let’s Encrypt”:

User Education:

Make Smart Devices Your Business Friend, Not Your Business Foe

One good way to improve your information security posture and save resources at the same time is to strictly limit the attack surfaces and attack vectors present on your network. It’s like having a wall with a thousand doors in it. The more of those doors you close off, the easier it is to guard the ones that remain. However, we collectively continue to let personnel use business assets and networks for high-risk activities such as web surfing, shopping, checking social media sites and a plethora of other activities that have nothing to do with business.

Most organizations to this day still allow their personnel to access the Internet at will, download and upload programs, use computer ports like USB, etc. But the thing is, this is now, not ten years ago. Virtually everyone in the working world has a smartphone with them at all times. Why not just let folks use these devices for all their ancillary online activities and save the business systems for business purposes?

And for those employees and job types that truly need access to the Internet there are other protections you can employ. The best is to whitelist sites available to these personnel while ensuring that even this access is properly monitored. Another way is to stand up a separate network for approved Internet access with no (or strictly filtered) access to the production network. In addition, it is important to make sure employees use different passwords for business access and everything else; business passwords should only be used for that particular access alone.
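The whitelisting approach described above boils down to a default-deny decision at the proxy or gateway: a request is allowed only if its destination is explicitly approved. A minimal sketch of that check (domain names here are invented placeholders):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of business-approved destinations.
ALLOWED_DOMAINS = {"vendor-portal.example.com", "benefits.example.org"}

def is_allowed(url: str) -> bool:
    """Allow a request only if its host is explicitly whitelisted."""
    host = urlparse(url).hostname or ""
    return host.lower() in ALLOWED_DOMAINS

print(is_allowed("https://vendor-portal.example.com/login"))  # True
print(is_allowed("https://socialmedia.example.net/feed"))     # False - not approved
```

Real proxies and secure web gateways implement the same logic (plus logging, which gives you the monitoring this section calls for).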

Another attack vector that should be addressed is allowing employees local administration rights to their computers. Very few employees in most organizations actually need USB ports, DVD drives and the like to perform their business tasks. This goes double for the ability to upload and download applications to their computers. Any application code present on an organization’s production network should be authorized, approved and inventoried. Applications not on this list that are detected should be immediately researched and dealt with.

Imagine how limiting attack vectors and surfaces in these ways would help ease the load on your system security and administrative personnel. It would give them much less to keep track of and, consequently, more time to properly deal with the pure business assets that remain.

Cloudy With a Chance of Misconfigurations

Many organizations have now embraced cloud platforms like Amazon AWS or Microsoft Azure, whether they use them for just a few services or have moved part or all of their infrastructure there. No matter the service, though, configuration isn’t foolproof, and they all require specific knowledge to configure in a secure way.

In some cases we have seen these services configured in ways that aren’t best practice, which led to exposure of sensitive information, or to compromises through services that should not have been exposed. In many instances there are at least some areas that can be hardened, or features that can be enabled, to reduce risk and improve monitoring capabilities.

So, what should you be doing? We’ll take a look at Amazon AWS today, and some of the top issues.

One issue that is seemingly pervasive is inappropriate permissions on S3 buckets. Searches on S3 incidents will turn up numerous stories about companies exposing sensitive data due to improper configuration. How can you prevent that?

Firstly, when creating your buckets, consider your permissions very carefully. If you want to publicly share data from a bucket, consider granting ‘Everyone’ read permissions on the specific resources, instead of the entire bucket. Never allow the ‘Everyone’ group to have write permissions, on the bucket or on individual resources. The ‘Everyone’ group applies literally to everyone – your employees and any attackers alike.
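As a sketch of what “specific resources, not the whole bucket” looks like in practice, here is a hypothetical bucket policy (bucket name and prefix invented) that grants anonymous read only under a `public/` prefix, plus a sanity check that nothing grants public write:

```python
import json

# Hypothetical policy: anonymous READ only for objects under "public/",
# never for the whole bucket, and never WRITE for anonymous principals.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadSpecificPrefixOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/public/*",
    }],
}

# Sanity check before applying: no statement should grant write/delete to everyone.
unsafe = {"s3:PutObject", "s3:DeleteObject", "s3:*"}
for stmt in policy["Statement"]:
    actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
    assert not (stmt["Principal"] == "*" and unsafe.intersection(actions))

print(json.dumps(policy, indent=2))
```

The real policy would be attached through the S3 console or API; the point is the shape – public access scoped to one prefix, read-only.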

Secondly, take advantage of the logging capability of S3, and monitor the logs. This will help identify any inappropriately accessed resources, whether through inadvertently exposed buckets, or through misuse of authorization internally.

Another common issue is ports unnecessarily exposed on EC2 resources. This happens through misconfigurations in VPC NACLs or Security Groups, which act as a firewall, and which we sometimes find configured with inbound traffic allowed to any port from any IP. NACLs and Security Groups should be configured to allow as little traffic to the destination as possible, restricting by port and by IP. Don’t forget about restricting outbound traffic as well. For example, your database server probably only needs to talk to the web server and system update servers.
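The least-privilege idea above can be expressed as a simple audit check over your rule set. A sketch (the rules and addresses are invented; a real audit would pull Security Group rules via the AWS API):

```python
# Hypothetical inbound rules for a database server's Security Group:
# only the web server's address, and only the database port.
inbound_rules = [
    {"port": 3306, "cidr": "10.0.1.10/32"},  # web server -> MySQL
]

def overly_permissive(rules):
    """Flag rules that allow any source IP, or any port."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" or r["port"] in (-1, "all")]

print(overly_permissive(inbound_rules))                        # [] - nothing to flag
print(overly_permissive([{"port": 22, "cidr": "0.0.0.0/0"}]))  # flags SSH open to the world
```

Running a check like this regularly catches the “any port from any IP” rules before an attacker does.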

The last issue we’ll discuss today is IAM, the Identity and Access Management interface. Firstly, you should be using IAM to configure users and access, instead of sharing the root account among everyone. Secondly, make sure IAM users and keys are configured correctly, with the least amount of privileges necessary for that particular user. I also recommend requiring multifactor authentication, particularly on the root account, on any users in the PowerUsers or Admins groups, and on any groups you have with similar permissions.
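That MFA recommendation is easy to audit. A sketch, against a hypothetical user inventory (names and groups invented; real data would come from IAM credential reports):

```python
# Hypothetical IAM inventory; flag privileged users lacking MFA.
users = [
    {"name": "root",    "groups": ["Admins"],     "mfa": True},
    {"name": "alice",   "groups": ["PowerUsers"], "mfa": False},
    {"name": "reports", "groups": ["ReadOnly"],   "mfa": False},
]

PRIVILEGED = {"Admins", "PowerUsers"}

def missing_mfa(users):
    """Return privileged users (or root) with no MFA device enabled."""
    return [u["name"] for u in users
            if not u["mfa"] and (u["name"] == "root"
                                 or PRIVILEGED.intersection(u["groups"]))]

print(missing_mfa(users))  # ['alice'] - read-only users aren't flagged
```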

That’s all for today. And remember, the good news here is that you can configure these systems and services to be as secure as what is sitting on your local network.

Weekly Threat Brief 4/20

Starting today MSI will begin publishing our Weekly Threat Brief! This is a light compilation of cyber articles and highlights. Some articles have been broken down by verticals, which include Financial, Business and Retail, SCADA, Healthcare and Government/Military. Please take a moment to click on the link and read! Hope you enjoy it!

WTB Apr 16-20 2018

Please provide any feedback to info@microsolved.com

State Of Security Micro-Podcast Episode 14

This is my first foray into the world of podcasts! I am the new Chief Operating Officer here at MicroSolved and we are getting ready to launch a new Log Analysis product. This micro-episode features members of the staff at MSI and will talk through some of the highlights of our new offering. This is our first attempt as a team, so be gentle – we will get better! Thanks for listening!

 

Business Impact Analysis: A Wealth of Information at Your Fingertips

Are you a Chief Information Security Officer or IT Manager stuck with the unenviable task of bringing your information security program into the 21st Century? Why not start the ball rolling with a business impact analysis (BIA)? It will provide you with a wealth of useful information, and it takes a good deal of the weight from your shoulders by involving personnel from every business department in the organization.

BIA is traditionally seen as part of the business continuity process. It helps organizations recognize and prioritize which information, hardware and personnel assets are crucial to the business so that proper planning for contingency situations can be undertaken. This is very useful in and of itself, and is indeed crucial for proper business continuity and disaster recovery planning. But what other information security tasks can BIA information help you with?

When MSI does a BIA, the first thing we do is issue a questionnaire to every business department in the organization. These questionnaires are completed by the “power users” in each department, who are typically the most experienced and knowledgeable personnel in the business. This means that not only do you get the most reliable information possible, but one person or one small group is not burdened with doing all of the information gathering. Typical responses include (but are not limited to):

  • A list of every business function each department undertakes
  • All of the hardware assets needed to perform each business function
  • All of the software assets needed to perform each business function
  • Inputs needed to perform each business function and where they come from
  • Outputs of each business function and where they are sent
  • Personnel needed to perform each business function
  • Knowledge and skills needed to perform each business function

So how does this knowledge help jumpstart your information security program as a whole? First, in order to properly protect information assets, you must know what you have and how it moves. In cutting-edge information security guidance such as the CIS Critical Security Controls, the first controls recommended are inventories of the devices and software applications present on company networks. The BIA lists all of the hardware and software assets needed to perform each business function, so in effect you have your starting inventories. This not only tells you what you need, but is useful in exposing unnecessary assets wasting time and effort on your network; if it’s not on the critical lists, you probably don’t need it.

In MSI’s own 80/20 Rule of Information Security, the first requirement is not only producing inventories of software and hardware assets, but mapping data flows and trust relationships. The inputs and outputs listed by each business department include these data flows and trust relationships. All you have to do is compile them and put them into a graphical map. And I can tell you from experience: this is a great savings in time and effort. If you have ever tried to map data flows and trust relationships as a stand-alone task, you know what I mean!
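The compilation step itself is mechanical once the questionnaires are in. A sketch, using invented department names and systems, of turning per-department inputs/outputs into the edges of a data-flow map:

```python
# Hypothetical BIA responses: each department lists where its inputs come
# from and where its outputs go. Compiling them yields the data-flow map.
bia = {
    "Lending":    {"inputs": ["Applications portal"], "outputs": ["Core banking"]},
    "Operations": {"inputs": ["Core banking"],        "outputs": ["Reporting DB"]},
}

def data_flows(bia):
    """Flatten BIA responses into (source, destination) edges."""
    edges = []
    for dept, io in bia.items():
        edges += [(src, dept) for src in io["inputs"]]
        edges += [(dept, dst) for dst in io["outputs"]]
    return edges

for src, dst in data_flows(bia):
    print(f"{src} -> {dst}")
```

Feed those edges to any graphing tool and the data-flow map draws itself – far easier than reconstructing flows from scratch.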

Another security control a BIA can help you implement is network segmentation and enclaving. The MSI 80/20 Rule has network enclaving as its #6 control. The information from a good BIA makes it easy to see how assets are naturally grouped, and therefore makes it easy to see the best places to segment the network.

How about egress filtering? Egress filtering is widely recognized as one of the most effective security controls for preventing large-scale data loss, and the most effective type of egress filtering employs whitelisting. Whitelisting is typically much harder to tune and implement than blacklisting, but it is much more effective. With the information a BIA provides, it is much easier to construct a useful whitelist; you have what each department needs to perform each business function at your fingertips.

Then there is security and skill gap training. The BIA tells you what information users need to know to perform their jobs, so that helps you make sure that personnel are trained correctly and with enough depth to deal with contingency situations. Also, knowing where all your critical assets lie and how they move helps you make sure you provide the right people with the right kind of security training.

And there are other crucial information security mechanisms that a BIA can help you with. What about access control? Wouldn’t knowing the relative importance of assets and their nexus points help you structure AD more effectively? In addition, there is physical security to consider. Knowing where the most crucial information lies and what departments process it would help you set up internal secure areas and physical safeguards, wouldn’t it?

The upshot of all of this is that where information security is concerned, you can’t possibly know too much about how your business actually works. Maintain a detailed BIA and it will pay you back for the effort every time.

I’m running out of Post-Its to write down my passwords

We all know to use non-dictionary, complex passwords for our email, online banking or online shopping accounts; whether we put that into practice is another issue. Even less common in practice is using a different password for each of our accounts – that is, never using the same password twice.

Why? The online gaming site that you log onto to crush candy may not be as prudent about its security as the financial advisor site that is managing your 401K. The gaming site may store your password in cleartext in its database, or use a weak encryption algorithm. It may not be subject to regulations and policies that require a regular vulnerability assessment. Using the same password for both sites puts both of your accounts at risk.

If a breach occurs and a site’s user data and passwords are unscrambled – as with 3.3 million users of a popular gaming site (article here) – then the hacker can try the discovered password on the user’s other accounts – email, bank, company site logon. And if the user uses the same password across the board, bingo.

You might think this unlikely, improbable – how would the hacker know which website to try the discovered credentials on? If the email harvested from the gaming site is myemailaddress@gmail.com, they could try the credentials to log into Gmail. If the email is @mycompany.com, the hacker would look for a login portal into mycompany.com. The attacker could look for social media accounts registered with that email address, or any other website that may have an account registered with it. The last estimate, in 2017, is that there are over 300 million Amazon.com users. The attacker could try the discovered credentials on this popular site; if your favorite password is your birthdate – 12250000 – and you use it for all your logons, the attacker would be on an Amazon shopping spree as you read this blog.

This cross-site password use is not a security issue only through an online data breach. You may have misplaced your trust and shared your password, entered your credentials on someone else’s computer that had a key logger, accidentally saved your logon, browsed the Internet over an open wireless hotspot where someone was sniffing the traffic, or hit any other instance where your password finds its way to the wrong eyes.

OK, so I need a different password for each different account that I have. I’m gonna need a bigger keyboard to stick all the Post-It notes with the passwords to every account I have underneath it. Or, maybe I could use a password manager.

A password manager is a database program that you can use to store information for each of your online accounts, website, username, password, security questions, etc. They are encrypted, requiring one master password to unlock its contents, all your saved passwords; “Ash nazg durbatulûk” – one ring to rule them all.

By remembering one long, strong, complex, impossible-to-brute-force-or-guess password, you can then gain access to all your other impossible-to-guess passwords. Almost all password managers also have a feature to generate random, complex passwords that you can use for each of your accounts.
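That generator feature is conceptually simple: pick characters at random from a large alphabet using a cryptographically secure source. A minimal sketch of the idea using Python’s `secrets` module (real password managers add options like length, character-class requirements, and pronounceability):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random, complex password from a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)        # different every run, e.g. something like 'kT#9v!...'
print(len(pw))   # 20
```

Note the use of `secrets` rather than `random` – ordinary pseudo-random generators are predictable and unsuitable for credentials.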

There are many password managers out there – some commercial paid-for programs, some free and open-source – with varying features. Some store your data in the cloud, some automatically fill in the login form in the browser with your account credentials, and some let you copy and paste the credentials from the program, erasing the clipboard after a specified time period… You should choose a password manager that is both secure and usable.

Secure in that the encryption used to store the saved credentials and data is effectively impossible to crack. Research what level of encryption your organization requires data to be stored with. When the password manager is in use, is the data self-contained, or is it exposed or available to other programs, and how? Does the password manager run in secure memory space, or is it written to a pagefile or swap memory that can be dumped by an attacker?

The password manager should be usable so that the user will be more likely to use it on a daily basis. If it slows the user down too much, it will be ignored; old habits die hard, and the user will revert to poor password behaviors.

An example real-world use of a password manager: Desktop and mobile versions of an open-source password manager can be installed on the Mac, Windows, Linux, Android and iOS operating systems with the one database file containing the credentials data saved in a cloud service. The user can access, view and edit the credentials from any of the devices with the installed program.

Password managers can be an essential tool in securing your credentials. Do your research: review specifications, read reviews, compare functionality and usability. Also look up which managers have had bugs or vulnerabilities, how quickly patches were released, and how the vendor responded to the flaws.

Using the same password for even only two websites should be a no-no. And forget trying to remember unique passwords for over 20 online accounts (recent research found the average US user has 130 online accounts). Plus, many sites force you to change passwords (rightfully so) on a regular basis. What is my current password for xyz.com, which I last logged onto 18 months ago?

Password managers can help you use a unique, strong password for each account. A data breach at one website (which seems to be reported on a weekly basis now) should not force you to change your password for any other websites. But protect that ONE master password. It is the one ring that rules them all.

Resources:
https://expandedramblings.com/index.php/amazon-statistics/
https://blog.dashlane.com/infographic-online-overload-its-worse-than-you-thought/

Move over Intel – here comes AMD…

Following close behind Spectre, Meltdown, et al., CTS-Labs announced on Tuesday, March 13th that its researchers had discovered 13 new critical security vulnerabilities in AMD’s Ryzen and EPYC processors. The Israel-based company presents the vulnerabilities as allowing attackers not only to access data stored on the processors, but also to install malware.

Of some note is the fact that CTS-Labs appears to have given AMD less than 24 hours to respond to the vulnerabilities, rather than the customary 90-day notice for standard vulnerability disclosure. As such, there is no readily available information from AMD.

Another item of note is that the domain name “amdflaws.com” was registered February 22, 2018. Presumably this belongs to CTS-Labs or an associate.

Ryzen chips typically power desktop and laptop computers, while EPYC processors are generally found in servers. A quick rundown of the vulnerabilities as presented as of this writing:

RYZENFALL – four variants, affects the Ryzen family of processors: This vulnerability purports to allow malicious software to take full control of the AMD Secure Processor. The resulting Secure Processor privileges could allow read and write in protected memory areas, such as SMRAM and the Windows Credential Guard isolated memory. This could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Attackers could also theoretically use this vulnerability in conjunction with MasterKey to install persistent malware on the Secure Processor.

FALLOUT – three variants, affects the EPYC family of processors: This vulnerability purports to allow attackers to read from and write to protected memory areas, such as SMRAM and Windows Credential Guard isolated memory (VTL-1).

Attackers could theoretically leverage these vulnerabilities to steal network credentials protected by Windows Credential Guard, as well as to bypass BIOS flashing protections implemented in SMM.

CHIMERA – two variants, affects the Ryzen family of processors: This vulnerability purports to have discovered two sets of manufacturer backdoors: One implemented in firmware, the other in hardware (ASIC). The backdoors allow malicious code to be injected into the AMD Ryzen chipset.

The chipset links the CPU to USB, SATA, and PCI-E devices. Network, WiFi and Bluetooth traffic often flows through the chipset as well. The attack potential for this vector is significant, and malware could evade virtually all endpoint security solutions on the market.

Malware running on the chipset could leverage the latter’s Direct Memory Access (DMA) engine to attack the operating system. This kind of attack has been demonstrated.

MASTERKEY – three variants, affects both the Ryzen and EPYC families of processors: Multiple vulnerabilities in AMD Secure Processor firmware allow attackers to infiltrate the Secure Processor.

This vulnerability purports to allow the deployment of stealthy and persistent malware, resilient against virtually all security solutions on the market. It also appears to allow tampering with AMD’s firmware-based security features such as Secure Encrypted Virtualization (SEV) and Firmware Trusted Platform Module (fTPM).

As in RyzenFall, this could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Another consideration is potential physical damage and bricking of hardware. It could also potentially be leveraged by attackers in hardware-based “ransomware” scenarios.

The full whitepaper is here.

Given the continued impact of the Intel patches on performance and stability, and conflicts with other vendor products – hardware and software – hang on, folks. We’re going to see some chaos in this space.

What are your thoughts? Do you feel the responsible disclosure path is to give manufacturers the customary 90 day window, or is immediate disclosure of risk preferable to you?

Let me know what you think. I can be reached at lwallace@microsolved.com, or on Twitter as @TheTokenFemale

Enter the game master….disaster recovery tabletops!

I snagged this line from the most excellent Lesley Carhart the other day, and it’s been resonating ever since.

“You put your important stuff in a fire safe, have fire drills, maintain fire insurance, and install smoke detectors even though your building doesn’t burn down every year.”

When’s the last time you got out your business continuity/disaster recovery plan, dusted it off, and actually READ it? You have one, so you can check that compliance box…but is it a living document?

It should be.

All of the box checking in the world isn’t going to help you if Step #2 of the plan says to notify Fred in Operations…and Fred retired in 2011. Step #3 is to contact Jason in Physical Security to discuss placement of security resources…and Jason has changed his cell phone number three times since your document was written.

I’ve also seen a disaster recovery plan, fairly recently, that discussed the retrieval and handling of some backup… floppy disks. That’s current and up-to-date?

Now, I am an active tabletop gamer. Once a week I get together with like-minded people to roll the dice and play various board games.

For checking the validity of your disaster recovery plan there is an excellent analog to the tabletop gaming world:

Tabletop DR exercises!

Get BACK here….I see you in the third row, trying to sneak out. I’ll admit, I LOVE doing tabletops. Hello? I get to play game master, throw in all kinds of random real life events, and help people in the process – that’s the trifecta of awesome, right there. If it’s a really good day, I get to use dice, as well!

The bare minimum requirements for an effective tabletop:

  • A copy of  your most recent DR/BC plan
  • Your staff – preferably cooperative. Buy ’em a pizza or three, will you? The good kind. Not the cheap ones.
  • An observer. This person’s job is to review your plan in advance and observe the tabletop exercise while taking notes. They will note WHAT happens and what actions your team takes during the exercise. This role is silent, but detail-oriented.
  • And the game master. The game master will present the scenario to the team. They will interact with the team during the exercise, and will also be the one who generates the random events that may throw the plan off track. It’s always shocking to me how many people would rather be the observer….to me, game master is where the fun is.

Your scenario, and the random event happenings, should fit your business. I tend to collect these for fun… and class them accordingly. A random happening where all credit card transactions are doubled due to an error in the point-of-sale process is perfect for a retail establishment… but an attorney’s office is going to look at me like I have three heads.
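For fellow game masters, the event collection really can live in a few lines of code and a dice roll. A playful sketch (the events and verticals here are invented examples):

```python
import random

# A hypothetical pool of random events, tagged by the business vertical
# they make sense for. "any" events fit every scenario.
events = [
    ("retail", "All card transactions are posting twice at the point of sale."),
    ("retail", "The backup generator fails forty minutes into the outage."),
    ("legal",  "The document management system is encrypted by ransomware."),
    ("any",    "Your primary contact in step 2 of the plan is unreachable."),
]

def roll_event(vertical: str, rng=random) -> str:
    """Roll the dice: pick a random event that fits this business."""
    pool = [text for tag, text in events if tag in (vertical, "any")]
    return rng.choice(pool)

print(roll_event("retail"))
```

Mid-exercise, one roll per act keeps the team honest – nobody can rehearse for an event even the game master hasn’t picked yet.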

Once the exercise is over, the game master and observer should go over all notes, and generate a report. What did the team do well, what fell off track, what updates does the plan need, and what is missing from the plan entirely?

Get the team together again. Buy ’em donuts – again, the good ones. Good coffee. Or lunch. Never underestimate the power of decent food on technical resources.

Try to start on a high note, and end on a high note. Make plans, as you review – what are the action items, and who owns them? When and how will the updates be done? When will you reconvene to review the updates and make sure they’re clear and correct?

Do this, do it regularly, and do NOT punish for the outcome. It’s an exercise in improvement, always…not something that your staff should dread.

Have a great DR exercise story? Have a REALLY great random event for my collection? I’d love to hear it – reach out. I’m on Twitter @TheTokenFemale, or lwallace@microsolved.com

Is your website in a “bad” neighborhood?

If, when you wake up in the morning, you look outside and view something like the image below, you probably understand that you are not in the best of all possible worlds.

So, what “neighborhood” does your website see when it “wakes up”?

It could be just as disquieting.


It is not uncommon for MSI to do an analysis of the Internet services offered by an organization and find that those services are being delivered from a “shared service” environment.

The nature of those shared services can vary.

VM Hosting:

Often they are simply the services of a virtual machine hosting provider such as Amazon AWS. Sometimes we find the entire computing infrastructure of a customer within such an environment.

The IP addressing is all private – the actual location is all “cloud”.

The provider in this case is running a “hypervisor” on its own hardware to host the many virtual machines used by its clients.

Application Hosting:

Another common occurrence is to find third-party “under the covers” core application services being linked to from a customer’s website. An example of such a service is that provided by commercial providers of mortgage loan origination software to much of the mortgage industry.

For example, see: https://en.wikipedia.org/wiki/Ellie_Mae

A quick Google of “site:mortgage-application.net” will give you an idea of the extent to which the service is used by mortgage companies. The landing sites are branded to the customer, but they are all using common shared infrastructure and applications.

Website Hosting:

Most often the shared service is simply that provided by a website hosting company. Typically many unique websites are hosted by such companies. Although each website will have a unique name (e.g. mywebsite.com) the underlying infrastructure is common. Often many websites will share a common IP address.
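Discovering your “neighbors” is mostly a matter of resolving hostnames (e.g. with the `host` command shown below) and grouping the results by IP. A sketch of the grouping step, using the IP from this article plus invented hostnames:

```python
from collections import defaultdict

# Hypothetical resolution data: hostnames mapped to the IPs they resolve to.
resolved = {
    "mywebsite.com":     "143.95.152.29",
    "iwantporn.net":     "143.95.152.29",
    "somestore.example": "143.95.152.29",
    "dedicated.example": "203.0.113.7",
}

def neighbors(resolved):
    """Group hostnames that share an IP address - your 'neighborhood'."""
    by_ip = defaultdict(list)
    for host, ip in resolved.items():
        by_ip[ip].append(host)
    return {ip: hosts for ip, hosts in by_ip.items() if len(hosts) > 1}

print(neighbors(resolved))  # only the shared IP shows up
```

Reverse-IP search engines (like the Bing search mentioned below) do this at Internet scale, but the principle is the same.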

It is in this particular “shared service” space we most often see potential issues.

Often it’s simply a reputation concern. For instance:

host www.iwantporn.net
www.iwantporn.net is an alias for iwantporn.net.
iwantporn.net has address 143.95.152.29

These are some of the sites that are (or have recently been) on that same IP address according to Microsoft’s Bing search engine:

My guess is some of the website owners would be uncomfortable knowing they are being hosted via the same IP address and same infrastructure as www.iwantporn.net.

They might also be concerned about this:

https://www.virustotal.com/#/ip-address/143.95.152.29

VirusTotal is reporting that a known malicious program was seen communicating with a listening service running on some site at the IP address 143.95.152.29.

The implication is that some site hosted at 143.95.152.29 had in the past been compromised and was being used for communications in what may have been a ransomware attack.

The IP address associated with such a compromised system can ultimately be blacklisted as a known suspicious site.

All websites hosted on the IP address can be affected.

Website traffic and the delivery of email can suffer as a result of the misfortune of sharing an IP address with a suspect site.

“Backplaning”

When such a compromise of the information space used by a client in a shared service occurs, all other users of that service can be at risk. Although the initial compromise may simply be the result of misuse of the website owner’s credentials (e.g. stolen login/password), the hosting provider needs to ensure that such a compromise of one site does not allow the attacker to compromise other websites hosted in the same environment – an attack pattern sometimes referred to as backplaning.

The term comes from electronics and refers to a common piece of circuitry (e.g. a motherboard, an I/O bus) that separate “plug-in” components use to access shared infrastructure.

See: https://en.wikipedia.org/wiki/Backplane

Example:

The idea is that a compromised environment becomes the doorway into the “backplane” of underlying shared services (e.g. possibly shared database infrastructure).

If the provider has not taken adequate precautions such an attack can affect all hosted websites using the shared service.

Such things really can happen.

In 2015 a vulnerability in commonly used hypervisor software was announced. See:  http://venom.crowdstrike.com/

An attacker who had already gained administrative rights on a hosted virtual machine could directly attack the hypervisor and – by extension – all other virtual machines hosted in the same environment. Maybe yours?

What to do?

Be aware of your hosted environment’s neighborhood. Use the techniques described above to find out who else is being hosted by your provider. If the neighborhood looks bad, consider a dedicated IP address to help isolate you from the poor administrative practices of other hosted sites.

Contact your vendor and find out what steps they have in place to protect you from “backplane” attacks, and what contractual protections you have if such an attack occurs.

Questions?  info@microsolved.com