Business Impact Analysis: A Wealth of Information at Your Fingertips

Are you a Chief Information Security Officer or IT Manager stuck with the unenviable task of bringing your information security program into the 21st Century? Why not start the ball rolling with a business impact analysis (BIA)? It will provide you with a wealth of useful information, and it takes a good deal of the weight from your shoulders by involving personnel from every business department in the organization.

BIA is traditionally seen as part of the business continuity process. It helps organizations recognize and prioritize which information, hardware and personnel assets are crucial to the business so that proper planning for contingency situations can be undertaken. This is very useful in and of itself, and is indeed crucial for proper business continuity and disaster recovery planning. But what other information security tasks can BIA information help you with?

When MSI does a BIA, the first thing we do is issue a questionnaire to every business department in the organization. These questionnaires are completed by the “power users” in each department, who are typically the most experienced and knowledgeable personnel in the business. This means that not only do you get the most reliable information possible, but no one person or small group is burdened with doing all of the information gathering. Typical responses include (but are not limited to):

  • A list of every business function each department undertakes
  • All of the hardware assets needed to perform each business function
  • All of the software assets needed to perform each business function
  • Inputs needed to perform each business function and where they come from
  • Outputs of each business function and where they are sent
  • Personnel needed to perform each business function
  • Knowledge and skills needed to perform each business function

So how does this knowledge help jumpstart your information security program as a whole? First, in order to properly protect information assets, you must know what you have and how it moves. Cutting-edge information security guidance consistently recommends, as the first controls to institute, inventories of the devices and software applications present on company networks. The BIA lists all of the hardware and software assets needed to perform each business function, so in effect you have your starting inventories. This not only tells you what you need, but also exposes unnecessary assets that are wasting time and effort on your network; if it’s not on the critical lists, you probably don’t need it.
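As a rough illustration, here is a minimal Python sketch – with hypothetical questionnaire records – of compiling BIA responses into starting inventories and flagging network assets that appear on no critical list:

from collections import defaultdict

# Hypothetical questionnaire responses, one record per business function
responses = [
    {"department": "Accounting", "function": "Payroll",
     "hardware": ["payroll-srv-01", "wkstn-acct-07"],
     "software": ["PayrollApp 9.2", "Windows Server 2016"]},
    {"department": "Operations", "function": "Order fulfillment",
     "hardware": ["erp-srv-01"],
     "software": ["ERP Suite 4.1"]},
]

hardware_inventory = defaultdict(set)   # asset -> functions that need it
software_inventory = defaultdict(set)

for r in responses:
    for hw in r["hardware"]:
        hardware_inventory[hw].add(r["function"])
    for sw in r["software"]:
        software_inventory[sw].add(r["function"])

# Anything discovered on the network but absent from the critical lists
# is a candidate for removal
discovered_on_network = {"payroll-srv-01", "erp-srv-01", "legacy-ftp-01"}
print("Candidates for removal:", discovered_on_network - set(hardware_inventory))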

In MSI’s own 80/20 Rule of Information Security, the first requirement is not only producing inventories of software and hardware assets, but also mapping data flows and trust relationships. The inputs and outputs listed by each business department include these data flows and trust relationships; all you have to do is compile them and put them into a graphical map. And I can tell you from experience: this is a great savings in time and effort. If you have ever tried to map data flows and trust relationships as a stand-alone task, you know what I mean!
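For example, a minimal sketch – the flow records are invented – that compiles departmental inputs and outputs into a Graphviz DOT map might look like this:

# Each tuple is (source, data description, destination) from the BIA
flows = [
    ("Sales", "order data", "Operations"),
    ("Operations", "shipment records", "Accounting"),
    ("Accounting", "invoices", "Customers"),
]

lines = ["digraph dataflows {"]
for source, data, destination in flows:
    lines.append(f'  "{source}" -> "{destination}" [label="{data}"];')
lines.append("}")

# Render with Graphviz, e.g.: dot -Tpng dataflows.dot -o map.png
with open("dataflows.dot", "w") as f:
    f.write("\n".join(lines))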

Another security control a BIA can help you implement is network segmentation and enclaving. The MSI 80/20 Rule has network enclaving as its #6 control. The information from a good BIA makes it easy to see how assets are naturally grouped, and therefore makes it easy to see the best places to segment the network.

How about egress filtering? Egress filtering is widely recognized as one of the most effective security controls for preventing large-scale data loss, and the most effective type of egress filtering employs white listing. White listing is typically much harder to tune and implement than black listing, but it is far more effective. With the information a BIA provides, it is much easier to construct a useful white list; you have what each department needs to perform each business function at your fingertips.
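As a simple illustration – the hosts, destinations and flow format are hypothetical – a default-deny egress allowlist built from BIA-documented flows could be as small as this:

# Flows documented in the BIA: (source host, destination, port)
ALLOWED_EGRESS = {
    ("payroll-srv-01", "bank.example.com", 443),      # payroll upload
    ("erp-srv-01", "supplier.example.com", 443),      # order transmission
}

def egress_permitted(src_host, dst_host, dst_port):
    # Default deny: only flows the BIA documents are allowed out
    return (src_host, dst_host, dst_port) in ALLOWED_EGRESS

print(egress_permitted("payroll-srv-01", "bank.example.com", 443))   # True
print(egress_permitted("wkstn-07", "paste-site.example.net", 443))   # False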

Then there is security and skill gap training. The BIA tells you what information users need to know to perform their jobs, so that helps you make sure that personnel are trained correctly and with enough depth to deal with contingency situations. Also, knowing where all your critical assets lie and how they move helps you make sure you provide the right people with the right kind of security training.

And there are other crucial information security mechanisms that a BIA can help you with. What about access control? Wouldn’t knowing the relative importance of assets and their nexus points help you structure AD more effectively? In addition, there is physical security to consider. Knowing where the most crucial information lies and what departments process it would help you set up internal secure areas and physical safeguards, wouldn’t it?

The upshot of all of this is that where information security is concerned, you can’t possibly know too much about how your business actually works. Maintain a detailed BIA, and it will pay you back for the effort every time.

I’m running out of Post-Its to write down my passwords

We all know to use non-dictionary, complex passwords for our email, online banking and online shopping accounts; whether we put that into practice is another issue. Even less common in practice is using a different password for each of our accounts; that is, never using the same password twice.

Why? The online gaming site that you log on to in order to crush candy may not be as prudent in its security as the financial advisor site that is managing your 401K. The gaming site may store your password in cleartext in its database, or use a weak encryption algorithm. It may not be subject to regulations and policies that require a regular vulnerability assessment. Using the same password for both sites leaves both of your accounts vulnerable and at risk.

If a breach occurs and a site’s user data and passwords are unscrambled – as with 3.3 million users of a popular gaming site (article here) – then the hacker can try the discovered password on the user’s other accounts – email, bank, company site logon. And if the user uses the same password across the board, bingo.

You might think this unlikely, improbable – how would the hacker know which websites to try the discovered credentials on? If the email harvested from the gaming site is myemailaddress@gmail.com, they could try the credentials to log into Gmail. If the email is @mycompany.com, the hacker would look for a login portal into mycompany.com. The attacker could look for social media accounts registered with that email address, or any other website that may have an account registered with it. The last estimate, in 2017, was that there are over 300 million Amazon.com users. The attacker could try the discovered credentials on that popular site; if your favorite password is your birthdate – 12250000 – and you use it for all your logons, the attacker could be on an Amazon shopping spree as you read this blog.

This cross-site password use is not a security issue only through an online data breach; you may have misplaced your trust and shared your password, entered your credentials on someone else’s computer that had a key logger, accidentally saved your logon, browsed the internet on an open wireless hotspot where someone was sniffing the traffic, or hit any other instance where your password finds its way to the wrong eyes.

OK, so I need a different password for each different account that I have. I’m gonna need a bigger keyboard to stick all the Post-It notes with the passwords to every account I have underneath it. Or, maybe I could use a password manager.

A password manager is a database program that you can use to store information for each of your online accounts – website, username, password, security questions, etc. The database is encrypted, requiring one master password to unlock its contents, all your saved passwords; “Ash nazg durbatulûk” – one ring to rule them all.

By remembering one long, strong, complex, impossible-to-brute-force-or-guess password, you can gain access to all your other impossible-to-guess passwords. Almost all password managers also have a feature to generate random, complex passwords that you can use for each of your accounts.
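To illustrate the core idea, here is a minimal Python sketch – a toy, not a real password manager – assuming the third-party cryptography package; the master password is stretched into a key that encrypts every stored credential, and a CSPRNG generates the per-site passwords:

import base64, os, secrets, string
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master_password, salt):
    # Stretch the one master password into a 32-byte Fernet key
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

def random_password(length=20):
    # The generator feature: random, complex, per-account passwords
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

salt = os.urandom(16)                    # stored alongside the database
vault = Fernet(key_from_master("Ash nazg durbatulûk", salt))

entry = vault.encrypt(b"xyz.com : myuser : " + random_password().encode())
print(vault.decrypt(entry))              # one ring unlocks them all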

There are many password managers out there – some commercial, paid-for programs, some free and open-source – with varying features. Some store your data in the cloud; some automatically fill in the login form in the browser with your account credentials; some let you copy and paste credentials from the program and then erase the clipboard after a specified time period. You should choose a password manager that is both secure and usable.

Secure in that the encryption used to store the saved credentials and data is practically impossible to crack. Research what level of encryption your organization requires data to be stored with. When the password manager is in use, is the data self-contained, or is it exposed or available to other programs, and how? Does the program run in secure memory space, or is data written to a pagefile or swap memory that can be dumped by an attacker?

The password manager should be usable, so that the user will be more likely to use it on a daily basis. If it slows the user down too much, it will be ignored; old habits die hard, and the user will revert to poor password behaviors.

An example real-world use of a password manager: Desktop and mobile versions of an open-source password manager can be installed on the Mac, Windows, Linux, Android and iOS operating systems with the one database file containing the credentials data saved in a cloud service. The user can access, view and edit the credentials from any of the devices with the installed program.

Password managers can be an essential tool in securing your credentials. Do your research: review specifications, read reviews, and compare functionality and usability. Also look up which managers have had bugs or vulnerabilities, how quickly the patches were released, and how the vendor responded to the flaws.

Using the same password for even just two websites should be a no-no. And forget trying to remember unique passwords for over 20 online accounts (recent research found the average US user has 130 online accounts). Plus, many sites force you to change passwords (rightfully so) on a regular basis. What is my current password for xyz.com, which I last logged on to 18 months ago?

Password managers can help you use a unique, strong password for each account. A data breach at one website (which seems to be reported on a weekly basis now) should not force you to change your password for any other websites. But protect that ONE master password. It is the one ring that rules them all.

Resources:
https://expandedramblings.com/index.php/amazon-statistics/
https://blog.dashlane.com/infographic-online-overload-its-worse-than-you-thought/

Move over Intel – here comes AMD…

Following close behind Spectre, Meltdown, et al., CTS-Labs announced on Tuesday, March 13th that its researchers had discovered 13 new critical security vulnerabilities in AMD’s Ryzen and EPYC processors. The Israel-based company presents the vulnerabilities as allowing attackers not only to access data stored on the processors, but also to install malware.

Of some note is the fact that CTS-Labs appears to have given AMD less than 24 hours to respond to the vulnerabilities, rather than the customary 90-day notice for standard vulnerability disclosure. As such, there is no readily available information from AMD.

Another item of note is that the domain name “amdflaws.com” was registered February 22, 2018. Presumably this belongs to CTS-Labs or an associate.

Ryzen chips typically power desktop and laptop computers, while EPYC processors are generally found in servers. A quick rundown of the vulnerabilities as presented as of this writing:

RYZENFALL – four variants, affects the Ryzen family of processors: This vulnerability purports to allow malicious software to take full control of the AMD Secure Processor. The resulting Secure Processor privileges could allow read and write in protected memory areas, such as SMRAM and the Windows Credential Guard isolated memory. This could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Attackers could also theoretically use this vulnerability in conjunction with MasterKey to install persistent malware on the Secure Processor.

FALLOUT – three variants, affects the EPYC family of processors: This vulnerability purports to allow attackers to read from and write to protected memory areas, such as SMRAM and Windows Credential Guard isolated memory (VTL-1).

Attackers could theoretically leverage these vulnerabilities to steal network credentials protected by Windows Credential Guard, as well as to bypass BIOS flashing protections implemented in SMM.

CHIMERA – two variants, affects the Ryzen family of processors: Here the researchers purport to have discovered two sets of manufacturer backdoors: one implemented in firmware, the other in hardware (ASIC). The backdoors allow malicious code to be injected into the AMD Ryzen chipset.

The chipset links the CPU to USB, SATA, and PCI-E devices. Network, WiFi and Bluetooth traffic often flows through the chipset as well. The attack potential for this vector is significant, and malware could evade virtually all endpoint security solutions on the market.

Malware running on the chipset could leverage the latter’s Direct Memory Access (DMA) engine to attack the operating system. This kind of attack has been demonstrated.

MASTERKEY – three variants, affects both the Ryzen and EPYC families of processors: Multiple vulnerabilities in AMD Secure Processor firmware allow attackers to infiltrate the Secure Processor.

This vulnerability purports to allow the deployment of stealthy and persistent malware, resilient against virtually all security solutions on the market. It also appears to allow tampering with AMD’s firmware-based security features such as Secure Encrypted Virtualization (SEV) and Firmware Trusted Platform Module (fTPM).

As in RyzenFall, this could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Another consideration is potential physical damage and bricking of hardware. It could also potentially be leveraged by attackers in hardware-based “ransomware” scenarios.

The full whitepaper is here.

Given the continued impact of the Intel patches on performance and stability, and conflicts with other vendor products – hardware and software – hang on, folks. We’re going to see some chaos in this space.

What are your thoughts? Do you feel the responsible disclosure path is to give manufacturers the customary 90 day window, or is immediate disclosure of risk preferable to you?

Let me know what you think. I can be reached at lwallace@microsolved.com, or on Twitter as @TheTokenFemale

Enter the game master….disaster recovery tabletops!

I snagged this line from the most excellent Lesley Carhart the other day, and it’s been resonating ever since.

“You put your important stuff in a fire safe, have fire drills, maintain fire insurance, and install smoke detectors even though your building doesn’t burn down every year.”

When’s the last time you got out your business continuity/disaster recovery plan, dusted it off, and actually READ it? You have one, so you can check that compliance box…but is it a living document?

It should be.

All of the box checking in the world isn’t going to help you if Step #2 of the plan says to notify Fred in Operations…and Fred retired in 2011. Step #3 is to contact Jason in Physical Security to discuss placement of security resources…and Jason has changed his cell phone number three times since your document was written.

I’ve also seen a disaster recovery plan, fairly recently, that discussed the retrieval and handling of some backup….floppy disks. That’s current and up-to-date?

Now, I am an active tabletop gamer. Once a week I get together with like-minded people to roll the dice and play various board games.

For checking the validity of your disaster recovery plan, there is an excellent analog in the tabletop gaming world:

Tabletop DR exercises!

Get BACK here….I see you in the third row, trying to sneak out. I’ll admit, I LOVE doing tabletops. Hello? I get to play game master, throw in all kinds of random real life events, and help people in the process – that’s the trifecta of awesome, right there. If it’s a really good day, I get to use dice, as well!

The bare minimum requirements for an effective tabletop:

  • A copy of your most recent DR/BC plan
  • Your staff – preferably cooperative. Buy ’em a pizza or three, will you? The good kind. Not the cheap ones.
  • An observer. This person’s job is to review your plan in advance, and observe the tabletop exercise while taking notes. They will note WHAT happens, and what actions your team takes during the exercise. This role is silent, but detail oriented.
  • And the game master. The game master will present the scenario to the team. They will interact with the team during the exercise, and will also be the one who generates the random events that may throw the plan off track. It’s always shocking to me how many people would rather be the observer….to me, game master is where the fun is.

Your scenario, and the random event happenings, should fit your business. I tend to collect these for fun….and class them accordingly. A random happening where all credit card processing is doubling due to an error in the point-of-sale process is perfect for a retail establishment…but an attorney’s office is going to look at me like I have three heads.
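And because every game master needs a dice roller, here is a minimal sketch of a random-event generator – the event lists are invented samples, so collect your own to fit the business being exercised:

import random

EVENTS = {
    "retail": ["All credit card charges are processing twice",
               "Point-of-sale terminals lose connectivity",
               "The backup generator fails after 20 minutes"],
    "law_office": ["The document management system is encrypting files",
                   "The lead partner is unreachable on a flight",
                   "The phone system fails over to a dead circuit"],
}

def roll_event(business):
    events = EVENTS[business]
    roll = random.randint(1, len(events))   # yes, you may use real dice
    return f"(rolled {roll}) {events[roll - 1]}"

print(roll_event("retail"))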

Once the exercise is over, the game master and observer should go over all notes, and generate a report. What did the team do well, what fell off track, what updates does the plan need, and what is missing from the plan entirely?

Get the team together again. Buy ’em donuts – again, the good ones. Good coffee. Or lunch. Never underestimate the power of decent food on technical resources.

Try to start on a high note, and end on a high note. Make plans, as you review – what are the action items, and who owns them? When and how will the updates be done? When will you reconvene to review the updates and make sure they’re clear and correct?

Do this, do it regularly, and do NOT punish for the outcome. It’s an exercise in improvement, always…not something that your staff should dread.

Have a great DR exercise story? Have a REALLY great random event for my collection? I’d love to hear it – reach out. I’m on Twitter @TheTokenFemale, or lwallace@microsolved.com

Is your website in a “bad” neighborhood?

If, when you wake up in the morning, you look outside and see a rough neighborhood, you probably understand that you are not in the best of all possible worlds.

So, what “neighborhood” does your website see when it “wakes up”?

It could be just as disquieting.


It is not uncommon for MSI to do an analysis of the Internet services offered by an organization and find that those services are being delivered from a “shared service” environment.

The nature of those shared services can vary.

VM Hosting:

Often they are simply the services of a virtual machine hosting provider such as Amazon AWS. Sometimes we find the entire computing infrastructure of a customer within such an environment.

The IP addressing is all private – the actual location is all “cloud”.

The provider in this case is running a “hypervisor” on its own hardware to host the many virtual machines used by its clients.

Application Hosting:

Another common occurrence is to find third-party “under the covers” core application services being linked to from a customer’s website. An example of such a service is that provided by commercial providers of mortgage loan origination software to much of the mortgage industry.

For example, see: https://en.wikipedia.org/wiki/Ellie_Mae

A quick google of “site:mortgage-application.net” will give you an idea of the extent to which the service is used by mortgage companies. The landing sites are branded to the customer, but they are all using common shared infrastructure and applications.

Web Site hosting:

Most often the shared service is simply that provided by a website hosting company. Typically many unique websites are hosted by such companies. Although each website will have a unique name (e.g. mywebsite.com) the underlying infrastructure is common. Often many websites will share a common IP address.

It is in this particular “shared service” space we most often see potential issues.

Often it’s simply a reputation concern. For instance:

host www.iwantporn.net
www.iwantporn.net is an alias for iwantporn.net.
iwantporn.net has address 143.95.152.29
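If you want to survey your own neighborhood, a quick sketch like this – the domain names are placeholders – resolves a list of sites and flags any that share an IP address:

import socket
from collections import defaultdict

domains = ["iwantporn.net", "example.com", "example.org"]

by_ip = defaultdict(list)
for d in domains:
    try:
        by_ip[socket.gethostbyname(d)].append(d)
    except socket.gaierror:
        print(f"could not resolve {d}")

for ip, names in by_ip.items():
    if len(names) > 1:
        print(f"{ip} is shared by: {', '.join(names)}")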

According to Microsoft’s Bing search engine, a number of other websites are (or have recently been) hosted on that same IP address.

My guess is that some of those website owners would be uncomfortable knowing they are being hosted via the same IP address and the same infrastructure as www.iwantporn.net.

They might also be concerned about this:

https://www.virustotal.com/#/ip-address/143.95.152.29

VirusTotal is reporting that a known malicious program was seen communicating with a listening service running on some site with the IP address 143.95.152.29.

The implication is that some site hosted at 143.95.152.29 had in the past been compromised and was being used for communications in what may have been a ransomware attack.
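You can check an address yourself; here is a minimal sketch assuming VirusTotal’s v3 API and a placeholder API key:

import requests

API_KEY = "YOUR-VT-API-KEY"   # placeholder: substitute your own key
ip = "143.95.152.29"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"{ip}: {stats.get('malicious', 0)} engines flag this address")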

The IP address associated with such a compromised system can ultimately be blacklisted as a known suspicious site.

All websites hosted on the IP address can be affected.

Website traffic and the delivery of emails can both suffer as a result of the misfortune of sharing an IP address with a suspect site.

“Backplaning”

When such a compromise of the information space used by a client in a shared service occurs, all other users of that service can be at risk. Although the initial compromise may simply be the result of misuse of the website owner’s credentials (e.g. stolen login/password), the hosting provider needs to ensure that such a compromise of one site does not allow the attacker to compromise other websites hosted in the same environment – an attack pattern sometimes referred to as backplaning.

The term comes from electronics and refers to a common piece of electronic circuitry (e.g., a motherboard, an I/O bus, etc.) that separate “plug-in” components use to access shared infrastructure.

See: https://en.wikipedia.org/wiki/Backplane

Example:

The idea is that a compromised environment becomes the doorway into the “backplane” of underlying shared services (e.g., possibly shared database infrastructure).

If the provider has not taken adequate precautions such an attack can affect all hosted websites using the shared service.

Such things really can happen.

In 2015, a vulnerability in commonly used hypervisor software was announced. See: http://venom.crowdstrike.com/

An attacker who had already gained administrative rights on a hosted virtual machine could directly attack the hypervisor and – by extension – all other virtual machines hosted in the same environment. Maybe yours?

What to do?

Be aware of your hosted environment’s neighborhood. Use the techniques described above to find out who else is being hosted by your provider. If the neighborhood looks bad, consider a dedicated IP address to help isolate you from the poor administrative practices of other hosted sites.

Contact your vendor and find out what steps they have in place to protect you from “backplane” attacks, and what contractual protections you have if such an attack occurs.

Questions? info@microsolved.com

Spend Your Infosec Dollars on the Things that Work Best First

If your organization is like most of the ones we deal with every day, you have a lot of information security controls that you are being pressed to implement, but you only have a limited budget to implement them with. How are you supposed to decide where those very scarce dollars go? I recommend implementing those controls that have proven their worth through time and trial first.

Just about nine years ago, early in the Obama administration, there was a big push to improve cybersecurity across the board. Infosec experts from all disciplines shared ideas and information, debated strategies and mechanisms, and developed what was then called the Consensus Audit Guidelines. Around this same time, Brent Huston and the MSI team developed our 80/20 Rule for Information Security. The goal of both of these endeavors was the same: rank infosec controls hierarchically according to necessity and effectiveness. This is, of course, an ongoing process subject to disagreement and periodic changes in thinking. But here are some of the primary controls that we champion.

Inventories of hardware and software assets. You can’t protect your network if you don’t know what is on it. Ensuring that your organization has mechanisms and processes in place to constantly monitor network inventories is well worth the cost. We also recommend that organizations leverage inventory processes to map data flows and trust relationships among network entities. This information can help you spot weak points in your security posture and is very useful in business continuity planning.

Configuration control and security maintenance. I can’t tell you how many network compromises I have seen that were the result of systems that were misconfigured or missing security updates. All network entities should be fully “hardened” and included in the security maintenance program. Configuration and security maintenance processes should be fully documented, maintained and overseen. Forgetting to change one default administrator password or to apply one security patch can mean the difference between security and compromise. Although these processes are labor-intensive, there are devices and applications available that can help your personnel keep on top of them.

Vulnerability and security assessment processes. Humans are fallible. Even if you have good configuration and maintenance processes in place, you still need to check and make sure that nothing has fallen through the cracks. You also need to see if there are any access control problems, miscoding in applications or other vulnerabilities on your networks. This means regular vulnerability assessments of networks and applications. If your budget allows it, assessments such as penetration testing and social engineering exercises can also be very illuminating.

Privileged access control and monitoring. Attaining administrative-level access is the Holy Grail of cyber criminals. If you can achieve domain admin access privileges, you pretty much have the keys to the kingdom. So, ensure that privileged access is fully controlled and monitored on your network. Admins should use separate passwords for admin duties and simple network access, and adding/changing admin accounts or out-of-bounds admin activities should create alerts on the system. This is inexpensive to implement, and more than worth the effort.
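As a rough sketch of the idea – the account names, event fields and alerting hook are hypothetical stand-ins for your directory and SIEM – an out-of-bounds admin activity check can be this simple:

ADMIN_ACCOUNTS = {"da-jsmith", "da-mjones"}   # approved admin IDs
BUSINESS_HOURS = range(7, 19)                 # 07:00-18:59

def check_event(account, action, hour):
    # Alert on account/privilege changes, and on admin activity after hours
    if action in ("account_created", "privilege_change") \
            or (account in ADMIN_ACCOUNTS and hour not in BUSINESS_HOURS):
        print(f"ALERT: {account} performed {action} at {hour:02d}:00")

check_event("da-jsmith", "login", 3)          # admin logon at 03:00 -> alert
check_event("jdoe", "privilege_change", 10)   # privilege change -> alert
check_event("da-mjones", "login", 10)         # normal activity -> no alert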

Security monitoring and egress filtering. One of the processes that everyone seems to have trouble doing well is security monitoring. This is probably because it is at once a daunting and boring task. However, security monitoring is essential. It also demands a good deal of human participation. Although we strongly advocate using tools to help aggregate, parse and supply basic analysis of log data, only humans are fit to do the final analysis. One very effective part of this task is egress filtering. Egress filtering is the practice of monitoring and restricting the flow of information outbound from the network. This control is relatively easy to implement and can save the day by stopping large-scale exfiltration of data from your network in the event other security controls have been circumvented.

Security training and awareness mechanisms. It should always be remembered that information security is a human problem, not a technological problem. Because of this, your own personnel can either be your greatest security threat or your greatest security asset. Security training (accompanied by employee buy-in to the security program) can help ensure that your employees are security assets. Security training should be provided to new hires and to all employees on a recurrent basis. Awareness reminders should reflect real-world threats and should be provided on an as-needed basis. In addition, we recommend that high-risk job titles such as system admins and code developers be provided with security gap training to help ensure that they have all the skills needed to prevent and detect security incidents in your environment.

The controls mentioned above are certainly not all that are needed for a well-balanced information security program, but they do carry a lot of bang for the buck. So, make sure you have these primary controls in place before you waste your security dollar on flashier, but less effective mechanisms.

Because you know it’s all about them apps, ’bout them apps…

Know thyself – Socrates

I ran across this link last week, from SANS, and it’s one of the better basic checklists I’ve seen for application security. With all due respect to OWASP, their information is more technical and useful for practitioners – their testing guide is here. For the CIO-level crowd, I’d highly recommend a look at their Top 10 for 2017. And a serious nod to Bill Sempf – if you haven’t heard his talk about the care and feeding of developers in the security space, go find it!

Since this missive was designed to have pretty pictures and convince you to send your developers to the SANS courses listed, it’s a nice start for security practitioners who may need to work with developers but aren’t 100% versed in application security. Some of this info is more basic than OWASP’s, as well – which does not diminish its importance. Let’s talk about what they list here, and why it’s important.

Error handling and logging:

Don’t display the specific error messages generated by your programs/architecture, and don’t allow unhandled exceptions – both of these items can display information about the underlying architecture of your application. Attackers can leverage this information and any associated vulnerabilities to compromise the application. If the user creates a condition that generates an error, offer them enough information to fix the problem – nothing more, nothing less.

Don’t allow specific framework errors…”the X program says you broke Y variable” – suppress them. Allowing these errors discloses potentially useful information about the framework and architecture to attackers.

Log all the things! Log authentication attempts – successful or not. Log privilege changes – successful or not. Log all administrative activity, or administrative attempts. Log any and all access and access attempts to sensitive information.

Log all the things….except when you don’t. Don’t log sensitive information. Log the admin attempts, but not admin passwords. Don’t log any information that falls under HIPAA, PCI, or other regulatory spheres.

Store logs securely. Plain text on an internet-facing share? Not the world’s best idea. Encrypt, secure, and protect against data loss and tampering. If you have a data retention policy, make sure that logs are included and the policy is followed.

Data Protection:

Turn ON HTTPS, turn OFF HTTP. The same URL should not be accessible via HTTP. Get your HTTPS certificates from a respectable CA – no self-signed certificates. Accepting them is bad practice; you run the risk of giving the impression that you haven’t done your due diligence, AND of conditioning your users to bypass this simple security measure.

Disable weak ciphers. Don’t wait for the 4,732nd vulnerability, and don’t argue that these vulnerabilities are difficult to exploit. The NEXT one might not be. Get your SSL sanitization house in order.

Don’t allow auto-complete. Yes, some browsers will ignore things – their bad practices shouldn’t be used to justify your bad practices.

Avoid storing user info. Tokenize when possible. If you have to store passwords, encrypt, salt, spindle, mutilate and fold. There’s no such thing as TOO safe here.
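To make that concrete, here is a minimal sketch using Python’s standard-library scrypt – store a salted, memory-hard hash, never the password itself:

import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)             # unique salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify(password, salt, digest):
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))   # True
print(verify("hunter2", salt, digest))                        # False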

Operations:

Have a consistent, repeatable process for…application development, testing, change control. Include security requirements at the beginning of the design – don’t try to shoehorn them in after the fact.

Review, review, review. Code reviews. Design reviews. Security testing – as you go, not at the end. Harden the environment per best practices.

Train your developers on security! Work as partners, not as the guys who make stuff and those security guys that always say no.

Have an incident response plan. TEST your plan, evaluate your plan, use your plan. Do not wait until something DOES happen to discover the holes in your plan. Keep your plan updated as staff contacts and responsibilities change. Do disaster recovery exercises.

Authentication:

Hard coded credentials. Don’t. Just don’t. But I need to because….no. You do not. There are safer ways to do this.

Have a strong password policy. Have a strong password reset – do not accidentally disclose things like the validity of an account via the password reset mechanism. Do have a password lockout policy – unlimited attempts is an invitation to a brute force attack.

Again, make sure your error messages aren’t handing valuable information to attackers.

Run applications and middleware with the least privilege required. Database passwords are gold – do not put them in code. Guard them. But I need to because…again, you do not. Do it right, don’t do it over.

Session management:

Put a logout button on every page. Every. Page. Then, invalidate the session once they’ve logged out – no back button resumption of the session.

Randomize your session tokens so that they are not vulnerable to predictive attacks, and regenerate them as user permissions change. Unless the application requires multiple simultaneous connections – and you have a legitimate need to DO this – destroy tokens that turn up in multiple sessions. Don’t leave yourself open to session cloning.
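A minimal sketch of both points, using Python’s secrets module for illustration: tokens come from a CSPRNG, and a permission change kills the old token and issues a fresh one:

import secrets

active_sessions = {}   # token -> user; a stand-in for real session storage

def on_login(user):
    token = secrets.token_urlsafe(32)   # 256 bits; not guessable
    active_sessions[token] = user
    return token

def on_privilege_change(old_token):
    user = active_sessions.pop(old_token)   # old token dies immediately
    return on_login(user)                   # regenerate for the new context

t1 = on_login("jdoe")
t2 = on_privilege_change(t1)
print(t1 != t2, t1 not in active_sessions)  # True True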

Cookies. And not the chocolate chip kind. Set the domain and path correctly. Use secure cookie attributes, and expire cookies as appropriate.

Log users out automatically on reasonable idle periods. Implement an absolute logout – there are few, if any, legitimate reasons to be logged in forever.

Input & Output handling:

Whitelist over blacklist. Only accept data that meets the criteria for your application.
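For instance, a minimal allowlist validation sketch – the username criteria here are just an example – looks like this:

import re

# Accept only what the application defines as valid; reject everything else
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_username(value):
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")   # generic error, no details
    return value

print(validate_username("jdoe_42"))
# validate_username("jdoe; DROP TABLE users;--")  raises ValueError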

Validate, validate, validate. Validate uploaded files – consider all uploads as suspect, and sandbox accordingly. Validate input sources.

Follow the OWASP recommendations, many detailed in the link above, for input, output, and safe transport.

Access control:

Apply access controls consistently. Use “gate keeper” technology, so that all requests are validated and verified, whether the user is logged in or not.

Don’t allow unvalidated forwards or redirects. This gives an attacker potential capability to access content without authentication.

Least privilege rules. Make access control mandatory, don’t elevate rights when you don’t absolutely need to. Don’t use direct object references to validate access.

There’s a lot more than I’ve included here….don’t understand these? Need more info? Talk to your developers. Buy ’em a burger. Buy ’em a beer. Become the guy who listens and attempts to understand….not the jerk who always says no. If you make an honest effort to understand them, and to help them understand you, you’ll both be better for the attempt.

Got a development war story? Got a good development story? Please reach out – @TheTokenFemale on Twitter. Let’s keep the conversation going.

Vendor Management: Are You Doing all of the Right Things?

In the not-so-distant past, organizations let service providers connect to their internal networks without a great deal of concern. At that time, attackers could generally find a more direct route into business networks, and although the security vulnerabilities inherent in 3rd party connections to networks were known, they received much less attention by users and regulators alike than they do today.

Now, networks are much better protected, especially those segments that directly face the Internet. This improved outer armor has forced attackers to come at networks in more indirect ways, such as through trusted service provider connections. Attackers reasoned that if a target’s outer security is just too good, maybe the security of its service providers – say, the company that hosts its operating software suite – is not so robust. Their reasoning proved to be correct. In fact, this attack vector worked so well that governing bodies have had to tighten security requirements accordingly.

In the present environment, organizations such as financial institutions and medical concerns must be able to demonstrate due-diligence in their establishment and maintenance of vendor/3rd party relationships. They should always remember that they, as the parent organization, are ultimately responsible for the security of their client information; it doesn’t matter if the security breach originated with the service provider or not. Without mechanisms such as documented due-diligence processes, contractual security agreements and cyber-insurance policies, organizations can be left to shoulder the burden alone.

This trend toward vendor management security, and indeed toward more stringent information security regulation across the board, shows no signs of slowing. Quite the opposite. In 2017, 240 cybersecurity-related bills or resolutions were introduced in 42 states. In 2016, 28 states introduced cybersecurity-related legislation; 15 of these states actually enacted the legislation. In 2015, the numbers were 26 states and eight pieces of legislation enacted. Quite an increase in just a few years.

All of this regulation is having a direct effect not only on hosting organizations, but also on the businesses that provide services to them. Vendors are increasingly being asked to demonstrate the security of their cyber-systems and processes by both present and prospective clients. They must be able to show that their information security program is just as effective as that of the parent organization – or no job.

The upshot of all of this is that NOW is the time to ensure that your vendor management program meets all of the recommendations and regulations that are currently emerging. Playing catch-up is never a good idea.

First of all, the program should be based on risk. An assessment should be performed to identify risks to the organization associated with the use of 3rd party providers. Once that information is in place, a framework of policies and procedures designed to address these risks should be developed and implemented. Responsibilities for undertaking these tasks should be assigned to individuals, and of course, the whole program should be fully documented and maintained. Senior management should monitor the program to ensure that it is being implemented as designed, and that it is effective in its operation.

Companies should ensure that contracts with service providers are clear, comprehensive and that information security requirements and responsibilities are fully defined for all parties concerned. Results of IT audits and security assessments should be accessible and reviewed at least annually. Any significant weaknesses or security problems uncovered by these assessments should be addressed, and the effectiveness of their remediation should be monitored.

So, don’t wait. Review your own vendor management program today and see if it meets all of the current and likely future requirements. Having a compliant program in place is not only good information security, but may even be the differentiator that gets your company a few extra clients.

Scope….or, why can’t you just send me a form?

Scoping….the process of gathering data to put together a statement of work for a client.

To be 100% honest, I love scoping. And MSI doesn’t scope via form letter, although I’ve seen a variety of companies take this approach.

Is it because I want to talk to you? Well, partially – I do enjoy the vast majority of our clients. But here’s where I think the “fill in the form” plan fails.

First, when you’re not engaged in conversation, you’re viewing the client’s requirements with an eye toward putting a peg in the hole of one of your offerings – even if that ends up being a square peg in a round hole.

Second, the conversation often takes many twists and turns. As we talk about MSI and our capabilities, it often happens that what a client asks for isn’t precisely what they need. We can offer a different service and help them get to their end goal in a different way. And this isn’t always more services…it’s equally likely to be less, or a custom variation on a service we already have. The majority of clients don’t fall into “canned” services….and they find it refreshing to talk with us while the other vendors they’re engaging simply drop them into a slot.

So the first question of any scoping conversation is – what is the purpose? What problem are you trying to solve? Is it regulatory – you have to have X assessment? Is something broken? Or are you trying to become aware of some security gaps – whatever they may be.

That’s the springboard of the conversation, helps us get to know you, and helps you to get the right mix of services. It’s personalized, customized, and based on individual attention from our sales and technical staff.

The next piece of serious hands-on attention comes when we’ve gathered the details for the engagement. Does the information provided make sense? If you’re a financial services firm, and you’ve chosen to be measured against HIPAA, is that really the right choice for you? The push-button approach may miss that.

Another fairly common item is typos or inaccurate information in the network space provided, so we do passive recon on the information given to us. Does the IP space really belong to your company? Are you using hosting via AWS, which requires an additional penetration testing form? Are you using a host like Rackspace that has additional contract stipulations on penetration testing?

Throughout the engagement, there are more personal touches. Via our project management portal, the engineers working on your engagement touch base every day or two as work progresses. If a highly critical issue is discovered, all work stops and the engineers get on the phone with you. We don’t believe in a situation where a critical vulnerability is only shared in a report, weeks after the discovery.

Now the reports are in your hands. We keep those reports for ~90 days – after that, all reports are purged from the system. During those 90 days, we can supply replacement copies; we can also supply the password used for encryption if you’ve misplaced it. Sanitized copies of reports can be produced as well, for dissemination to vendors, clients, regulatory bodies, or any other interested parties you need to share this information with – a small fee may apply.

At the end of the day, the question is: who did you help today? It’s rare for MSI to end a day when we can’t answer that question in multiple ways. It’s one of my favorite things, and we’d love to help you!

Ensure You Give The Client The Right Services At The Right Time

Many of our clients come to us looking for direction on the right cadence for implementing security initiatives: what’s first, what’s next, and how should the services be spaced out to best fit our budget and needs? These are questions that many organizations struggle with.

As with any initiative, it is imperative to allocate resources towards activities that not only get the job done, but that also provide the “most bang for the buck.” We have found that, in many ways, information security initiatives will stack in a logical order of what we like to call the “rhythm of risk.”

We suggest that a good place to start the conversation is with the “must haves,” which means understanding compliance with regulation and all that implies. Many organizations must be able to positively check certain questionnaire boxes in order to maintain their ability to operate in regulated industries or to partner with certain companies. In these cases, achieving and maintaining specific security benchmarks is like giving companies a “hunting license” for certain types of partnerships.

This process really starts with understanding the gap between where companies are and where they need to be. This is the basis for formulating effective compliance strategies. The idea is to come to a complete understanding of the organization’s security posture and needs up front; an approach that helps ensure that security dollars are spent wisely and achieve the desired effects. Many times we see organizations rush through this step, spinning their wheels on the path to compliance by either misappropriating resources or simply overspending. Each client and situation is a bit different; a successful approach leverages understanding what needs to be done in the cadence that stacks logically for that particular client.

Another pivotal factor guiding our recommendations is the maturity level of the organization’s security program. For a security program that is less mature, it is important to focus not only on those security mechanisms that most effectively address the risk, but also on those that are the least expensive and easiest to implement.

We gauge maturity based on the NIST Cybersecurity Framework as it applies to the particular organization’s security posture. The core of the NIST framework sets out five functions: Identify, Protect, Detect, Respond and Recover. Within each function are related categories with activities to be rated and applied. For example, under Identify are Asset Management, Business Environment, Governance, Risk Assessment and Risk Management Strategy. These blocks stack on top of each other and lay out a path based on a hierarchy of risk: what is most likely to occur, weighted against the impact to the organization.
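As a simplified illustration – the ratings and the 0-4 scale are hypothetical – scoring the categories under each function quickly shows where to start:

# Hypothetical maturity ratings (0 = nonexistent, 4 = optimized)
CSF = {
    "Identify": {"Asset Management": 1, "Business Environment": 2,
                 "Governance": 2, "Risk Assessment": 1,
                 "Risk Management Strategy": 0},
    "Protect": {"Access Control": 2, "Awareness and Training": 1},
}

for function, categories in CSF.items():
    avg = sum(categories.values()) / len(categories)
    weakest = min(categories, key=categories.get)
    print(f"{function}: average {avg:.1f}, start with '{weakest}'")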

In addition to an organization’s maturity, it is important to consider the cyber-economic value of the content it manages. This usually correlates with the proprietary or sensitive personal information held by the company. The higher the value of the data held by an organization, the greater the lengths hackers will go to in order to attain it. As usual, the market will drive the velocity and ferocity of breach attempts, along with the level of criminal attracted to them. Therefore, the value of your organization’s information should have a direct correlation with the amount of time, energy and investment needed to protect it.