Automating SSL Certificate Management with Certbot and Let’s Encrypt

As we posted previously, following best practices for SSL certificate management is critical to properly securing your site. In that post, we discussed automating certificate management as a best practice. This post is an example of how to do just that.
 
To do so, we will use Let’s Encrypt, the highly trusted free certificate provider, along with Certbot, the free certificate automation tool.
 

Installing Certbot

Installing Certbot is pretty easy overall, but you do need to be comfortable with the command line and generally know how to configure your chosen web server. That said, if you check out the Certbot site, you will find a dropdown menu that lets you pick your web server and operating system. Once you make your selections, simply follow the on-screen, step-by-step instructions. In our testing, we found them to be complete and intuitive.
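
For example, on an Ubuntu server running Nginx, the process looks roughly like the sketch below. Treat this as an illustration, not gospel – the exact commands vary by OS and web server, and the domain names are placeholders – and follow the instructions the Certbot site generates for your own setup.

# Install Certbot and its Nginx plugin (Ubuntu/Debian)
sudo apt install certbot python3-certbot-nginx

# Request a certificate and let Certbot update the Nginx config
sudo certbot --nginx -d example.com -d www.example.com

# Confirm that automatic renewal will work
sudo certbot renew --dry-run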
 

That’s It!

Following the on-screen instructions will:

  • Install Certbot
  • Configure your web server for the certificate
  • Generate, retrieve, and install the certificate
  • Set up automatic renewal of the certificate to prevent expiration

You can literally go from a basic website to fully implemented, automated SSL in a matter of moments. Plenty of support for Certbot is available from the EFF, or via Let’s Encrypt. In our testing, we ran into no issues, and the implementation completed successfully each time.

Give it a shot! This might be one of the easiest and most effective security controls to automate. Together, Certbot and Let’s Encrypt can give you a no-cost cryptography solution for your websites in a very short amount of time.

SSL Certificate High-Level Best Practices

SSL certificates are an essential part of online security. They protect websites against attackers who try to steal information such as credit card numbers and passwords. In addition, they help establish customer trust in the site and its content.

Almost 50% of the top one million websites use HTTPS by default – they redirect requests for HTTP pages to URLs with HTTPS (comodosslstore.com). As such, even pages that don’t deal with confidential data are being deployed using SSL. The underlying certificates that power the encryption are available from a variety of commercial providers, as well as from the pro-bono resource https://letsencrypt.org. No matter where you get your certificate, here are a few high-level best practices.

Trust Your Certificate Provider

Since certificates provide the basis for your site’s cryptography, their source is important. You can find a list of trustworthy certificate providers here: https://www.techradar.com/news/best-ssl-certificate-provider. Beware of commercial providers not found on such lists, as some of them may be sketchy at best, or dangerous at worst. Remember, the Let’s Encrypt project above is also highly trusted, even though it is not a commercial firm.

Manage Versions and Algorithms

Make sure you disable SSL (all versions) and TLS 1.0 on the server; both have known vulnerabilities. If possible, and there is no impact on your users, consider disabling TLS 1.1 and 1.2 as well. TLS 1.3 fixes many of the protocol’s known issues and supports only known-secure algorithms.

In cryptography, cipher suites play an important part in securing connections by enabling encryption at different levels. You shouldn’t be using an old version of a cryptographic protocol when a newer one is available; doing so puts your site’s security at risk. Using secure cipher suites that support 128-bit (or stronger) encryption is crucial for protecting sensitive client communications.

Diffie-Hellman key exchange has been shown to be vulnerable when weak keys are used; however, there is no known attack against stronger keys, such as 2048-bit keys. Make sure you use the strongest settings possible for your server.
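
As an illustration, a hardened TLS setup for Nginx might look something like the sketch below. The directive values are examples to adapt, not a universal recommendation, and the dhparam file is one you would generate yourself (e.g., with openssl dhparam).

# Allow only TLS 1.2 and 1.3 -- SSL, TLS 1.0, and TLS 1.1 are disabled
ssl_protocols TLSv1.2 TLSv1.3;

# Prefer the server's cipher order, limited to strong AEAD suites
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;

# Use your own 2048-bit (or larger) Diffie-Hellman parameters
ssl_dhparam /etc/nginx/dhparam.pem;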

Manage and Maintain Certificate Expiration

As of Sept. 1, 2020, Apple’s Safari browser no longer trusts certificates with validity periods longer than 398 days, and other browsers are likely to follow suit. Reducing validity periods shrinks the window in which compromised or bogus certificates can be exploited. It also means that any certificates using retired encryption algorithms or protocols will need to be replaced sooner (searchsecurity.techtarget.com).

Maintain a spreadsheet or database of certificate expiration dates for each relevant site, and check it frequently for expiring certificates to avoid user issues and browser error messages. Better still, use an application or certificate management platform that alerts you well ahead of upcoming expirations, so you can plan accordingly. Best of all, where possible, embrace tools and frameworks that automate certificate management and rotation – that makes expiration issues far less likely. Most popular web frameworks now have tools and plugins available to do this for you.
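
Even a small script can act as a stopgap alerting tool while you evaluate platforms. Here’s a minimal sketch in Python – the host list and warning threshold are placeholders to adapt to your own inventory:

import datetime
import socket
import ssl

HOSTS = ["example.com", "www.example.com"]  # your certificate inventory
WARN_DAYS = 30                              # alert threshold in days

def cert_days_remaining(host, port=443):
    """Fetch the site's certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.datetime.utcfromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]))
    return (expires - datetime.datetime.utcnow()).days

for host in HOSTS:
    days = cert_days_remaining(host)
    if days <= WARN_DAYS:
        print(f"WARNING: {host} certificate expires in {days} days")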

Protect Your Certificates and Private Keys

Remember that your certificate is not only a basis for cryptography, but also a source of identification and reputation. As such, make sure that all certificates are stored properly, securely, and in trusted locations. Make sure that web users can’t access the private certificate files, and that you have adequate backup and restore processes in place.

Make sure that you also protect the private keys used in certificate generation. Generate them offline if possible, protect them with strong passwords, and store them in a secure location. Generate a new private key for each certificate and each renewal cycle.
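
For example, with OpenSSL you can generate a fresh, passphrase-protected key and a certificate signing request along these lines (file names are placeholders):

# Generate a new 4096-bit RSA key, encrypted with AES-256 and a strong passphrase
openssl genrsa -aes256 -out example.com.key 4096

# Create a certificate signing request from that key
openssl req -new -key example.com.key -out example.com.csr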

Revoke your certificate or keys as quickly as possible if you believe they have been compromised.

Following these best practices will go a long way toward making your SSL certificate processes safer and more effective. Doing so protects your users, your reputation, and your websites. Make sure you check back with your certificate provider often, and follow any additional practices they suggest.

3 Quick Thoughts for Small Utilities and Co-Ops

Recently I was asked to help some very small utilities and co-ops come up with some low-cost/free ideas around detection. The group was very open about explaining their issues, and here is a quick summary of some of the ideas we discussed.

1) Dump external router, firewall, AD, and any remote access logs weekly to text, and use simple parsers in Python/Perl or shell script to identify any high-risk issues. Sure, this isn’t the same as having robust log monitoring tools (which none of these folks had), but even if you detect something really awful a week after it happens, you will still be ahead of the average curve of attackers having access for a month or more. You can build your scripts using some basic analytics, and they will get better over time; the sketch below is one way to get started. You don’t need a lot of money to handle dumped logs quickly. Do the basics and improve.
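
For instance, a few lines of Python can count failed SSH logins per source IP in a dumped log file. The file name and threshold below are placeholders to tune for your environment:

import re
from collections import Counter

# Matches the standard OpenSSH "Failed password" syslog message
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

hits = Counter()
with open("weekly_auth_dump.txt") as logfile:  # your weekly text dump
    for line in logfile:
        match = FAILED.search(line)
        if match:
            hits[match.group(1)] += 1

for ip, count in hits.most_common():
    if count >= 20:  # arbitrary starting threshold; tune it over time
        print(f"{ip}: {count} failed logins -- worth a look")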

2) Take advantage of cheap hardware like the Raspberry Pi as an easy-to-learn/use Linux box for scripting, log parsing, or setting up cron jobs to automate tasks (see the cron sketch below). For less than 50 bucks, you can have a powerful machine to do a lot of work for you and serve as a monitoring platform for a variety of tools. The group was all tied up in getting budget to buy server and workstation hardware – but had never taken the Pi seriously as a work platform. It’s mature enough to do a lot of non-mission-critical (and some very important) work, and it’s fantastic if you’re looking for a quick and dirty way to gain some Linux capabilities in a confined Windows world.
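
Scheduling the parser from item 1 on a Pi is a one-line cron entry, something like this sketch (the paths are hypothetical):

# Run the weekly log parser every Monday at 6 a.m. and append any alerts to a file
0 6 * * 1 /usr/bin/python3 /home/pi/parse_logs.py >> /home/pi/alerts.txt 2>&1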

3) One of the best bang-for-the-buck services we have at MSI is the device configuration review. For significantly less money than a penetration test, we can review your external routers, firewalls, and VPN for configuration issues, improper rules/ACLs, and insecure settings. If you combine this with an exercise like attack surface mapping and threat modeling, you can get a significant amount of insight without resorting to (and paying for) vulnerability assessments and penetration testing. Sure, the data might not be as granular, and we still have to do some level of port scanning and service identification, but we have a variety of safe ways to do that work – and you get some great information. You can then make risk-based decisions about the data and decide what you want to act on and pay attention to. If your budget is tight – get in touch and discuss this approach with us.

I love to talk with utilities, and especially with smaller organizations that want to do the right thing but might face budget constraints. If they’re willing to have an open, honest conversation, I am more than willing to get creative and engage to help them solve problems within their means. We’d rather get creative and solve an issue to protect the infrastructure than see them get compromised by threat actors looking to do harm.

If you want to discuss this or any security or risk management issue, get in touch here.  

Network Segmentation with MachineTruth

About MachineTruth™

We’ve just released a white paper on leveraging MachineTruth™, our proprietary network and device analytics platform, to segment or separate network environments.

Why Network Segmentation?

The paper covers the reasons to consider network segmentation, including the various drivers we’ve seen across clients and industries to date. It also includes a sample workflow to guide you through the process of performing segmentation with an analytics- and modeling-focused solution, as opposed to the traditional “plug and pray” method many organizations use today.

Lastly, the paper covers how MachineTruth™ differs from traditional approaches and what you can expect from such a work plan.

To find out more:

If you’re considering network segmentation, analysis, inventory, or mapping, then MachineTruth™ is likely a good fit for your organization. Download the white paper today and learn how to make segmentation easier, safer, faster, and more affordable than ever before!

Interested? Download the paper here:

https://signup.microsolved.com/machinetruth-segmentation-wp/

As always, thanks for reading and we look forward to working with you. If you have any questions, please drop us a line (info@microsolved.com) or give us a call (614-351-1237) to learn more.

BEC #6 – Recovery

A few weeks ago, we published the Business Email Compromise (BEC) Checklist. The question arose – what if you’re new to security, or your security program isn’t very mature?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

Part 1 and Part 2 covered the first checkpoint in the list – Identify. Part 3 covered the next checkpoint – Protect. Part 4 continued the series – Detect. Part 5 addressed how to Respond.

How do you Identify? Business Email Compromise #1

Recently, we posted the Business Email Compromise (BEC) checklist. We’ve gotten a lot of great feedback on the checklist…as well as a few questions. What if you’re new to security? What if your organization’s security program is newer, and still maturing? How can you leverage this list?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

It’s Dev, not Diva – Don’t set the “stage” for failure

Development: “the act, process, or result of developing; the development of new ideas.” That’s one of the Merriam-Webster definitions of development.

It doesn’t really matter what you call it…dev, development, stage, test. Software applications tend to be in flux, and the developers, programmers, testers, and ancillary staff need a place to work on them.

Should that place be out on the internet? Let’s think about that for a minute. By their very nature, dev environments aren’t complete. Do you want a work in progress, with unknown holes, to be externally facing? This doesn’t strike me as the best idea.

But, security peeps, we HAVE to have it facing the internet – because REASONS! (Development types…tell me what your valid reasons are?)

And it will be fine – no one will find it, we won’t give it a domain name!

Security through obscurity will not be your friend here. With the advent of Shodan, Censys.io, and other venues, they WILL find it. Ideally, you should only allow access via VPN or another secure connection.
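
If the dev host must listen on the internet at all, restricting it to your VPN’s address space is a reasonable start. A minimal sketch for Nginx – the subnet and host name are placeholders for your own VPN range and site:

server {
    listen 443 ssl;
    server_name dev.example.com;

    allow 10.8.0.0/24;   # VPN client subnet only
    deny  all;           # everyone else is refused

    # ...the rest of the dev site config...
}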

What could possibly go wrong? Well, here’s a short list of SOME of the things that MSI has found on, or used to compromise, an internet-facing development server:

  • A test.txt file with sensitive information about the application, configuration, and credentials.
  • Log files with similar sensitive information.
  • .git directories that exposed keys, passwords, and other key development information.
  • A development application with weak credentials was compromised – the compromise allowed inspection of the application and revealed an access control issue. The same issue was present in the production application, and allowed the team to compromise the production environment.
  • An unprotected directory containing a number of files, including a network config file. The plain text credentials in the file allowed the team to compromise the internet-facing network devices.

And the list keeps going.

But, security peeps – our developers are better than that. This won’t happen to us!

The HealthCare.gov breach in 2014 (https://www.csoonline.com/article/2602964/data-protection/configuration-errors-lead-to-healthcare-gov-breach.html) was the result of a development server that was improperly connected to the internet. “Exact details on how the breach occurred were not shared with the public, but sources close to the investigation said that the development server was poorly configured and used default credentials.”

Another notable breach occurred in 2016, when the outsourcing company Capgemini (https://motherboard.vice.com/en_us/article/vv7qp8/open-database-exposes-millions-of-job-seekers-personal-information) exposed the personal information of millions of job seekers after a development server was connected to the internet.

The State of Vermont also saw its health care exchange – Vermont Health Connect – compromised in 2014 (https://www.databreachtoday.asia/hackers-are-targeting-health-data-a-7024) when a development server was accessed. The state maintains this was not a breach, because the development server didn’t contain any production data.

So the case lands pretty strongly on one side: internet-facing development servers are a bad idea.

Questions? Comments? What’s your take from the development side? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

If you would like to know more about MicroSolved or its services please send an e-mail to info@microsolved.com or visit microsolved.com.

That phone call you dread…

So, you’re a sysadmin, and you get a call from that friend and co-worker…we all know that our buddies don’t call the helpdesk, right?

This person sheepishly admits that they got an email that, in hindsight, looked a bit suspicious. It had an attachment…and they clicked.

Yikes. Now what?

Well, since you’re an EXCELLENT sysadmin, and you work for the best company ever, you’ve done a few things to make sure you’re ready for this day…

  • The company has had a business impact analysis, so all of the relevant policies and procedures are in place.
  • Your backups are in place, offsite, and you know you can restore them with a modicum of effort – and because you’ve done baselines, you know how long it will take to restore.
  • Your team has been doing incident response tabletops, so all of the IR processes are documented and up-to-date. And you set it up to be a good time, so they were fully engaged in the process.

But now, one of your people has clicked…now what, indeed?

  • Pull. The. Plug. Disconnect that system. If it’s hard wired, yank the cord. If it’s on a wifi network, kick it off – take down the whole wifi network if feasible. The productivity that you’ll lose will be outweighed by the gains if you can stop lateral spread of the infection.
  • Pull any devices – external hard drives, USB sticks, etc.
  • DO NOT power the system off – not yet! If you need to do forensics, the live system memory will be important.

Now you can breathe, but just for a minute. This is the time to act with strategy as well as haste. Establish whether you’ve got a virus or ransomware infection, or if the ill-advised click was an attachment of another nature.

If it’s spam, but not malicious:

  • Check the email information in your email administration portal, and see if it was delivered to other users. Notify them as necessary.
  • Evaluate key features of the email – are there changes you should make to your blocking and filtering? Start that process.
  • Parse and evaluate the email headers for IPs and/or domains that should be blocked (see the sketch after this list). See if there are indicators of other emails with these parameters that were blocked or delivered.
  • Add the scenario of this email to your user education program for future educational use.
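
Header parsing is easy to script. Here’s a minimal sketch in Python using only the standard library – the .eml file name is a placeholder for a message exported from your mail system:

import email
import re
from email import policy

# Load the exported message
with open("suspect_message.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

# Pull every IPv4 address out of the Received chain
ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
for header in msg.get_all("Received", []):
    for ip in ip_pattern.findall(header):
        print("candidate for blocking/reputation check:", ip)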

If it’s a real infection, full forensics is beyond the scope of this blog post. But we’ll give a few pointers to get you started.

If it’s a virus, but not ransomware:

  • If the file that was delivered is still accessible, use VirusTotal and other sites to see if it’s known to be malicious. The hash can be checked (see the sketch after this list), as well as the file itself.
  • Consider a full wipe of the affected system, as opposed to a virus removal – unless you’re 100% successful with removal, repeated infection is likely.
  • All drives or devices – network, USB, etc. – that were connected to the system should be suspect. Discard those you can, clean network drives or restore from backup.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.
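
Hashing the file locally lets you check its reputation without uploading a potentially sensitive attachment. A quick Python sketch (the file name is a placeholder):

import hashlib

def sha256sum(path):
    """Compute the SHA-256 of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Paste the resulting hash into VirusTotal's search
print(sha256sum("suspicious_attachment.docx"))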

If it’s ransomware:

  • Determine what kind of ransomware you are dealing with.
  • Determine the scope of the infection – ancillary devices, network shares, etc.
  • Check to see if a decrypt tool is available – be aware these are not always successful.
  • Paying the ransom, or not, is a business decision – ransom payments are often unsuccessful, and the files remain encrypted. Address this in your IR plan, so the company policy is defined ahead of time.
  • Restore files from backup.
  • Strongly consider a full wipe of the system, even if the files are decrypted.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.

In all cases, go back and map the attack vector. How did the suspect attachment get in, and how can you prevent it going forward?

What are your thoughts? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

HPSS and Splunk

We’ve had a few users ask how to feed alerts from the HPSS Console into a SIEM. In these cases it was Splunk, so I will show how to quickly get a feed going into Splunk and some basic visualizations. I chose Splunk since that’s what I helped the users with, but any SIEM that will take syslog will work.

The first step is to set up the HPSS Console to log events externally. This is enabled by checking the “Enable System Logging” box in the preferences window. What happens with the output depends on your OS: on Windows the events are written to the Event Log, and on Linux/macOS they are handled by the syslog daemon. Alternatively, you can use the Console plugin system if syslog/eventlog is not flexible enough.

HPSS Preferences Window

Before we go further, we’ll need to configure Splunk to read in the data – or even set up Splunk, if you don’t have an existing system. For this blog post, I used the Splunk Docker image to get it up and running in a container in a couple of minutes.

In Splunk, we’ll need to create a “source type”, an “index”, and a “data input” to move the data into the index. To create the source type, I put the following definitions in the local props.conf file located in $SPLUNK_HOME/etc/system/local (you may need to create the props.conf file):

[hpss]
EXTRACT-HPSSAgent = Agent: (?P<Honeypoint_Agent>[^ ]+)
EXTRACT-Attacker_IP = from: (?P<Attacker_IP>[^ ]+)
EXTRACT-Port = on port (?P<Port>[^ ]+)
EXTRACT-Alert_Data = Alert Data: (?P<Alert_Data>.+)
TIME_PREFIX = at\s
MAX_TIMESTAMP_LOOKAHEAD = 200
TIME_FORMAT = %Y-%m-%d %H:%M:%S

This tells Splunk how to extract the data from the event. You can also define this in the Splunk web interface by going to Settings -> Source Types and creating a new source type.

Source Type definition

Next, create the index under Settings -> Indexes. Just giving the index a name and leaving everything else at the defaults will work fine to get started.

To create a data input, go to Settings -> Data Inputs. I’m going to set it up to directly ingest the data through a TCP socket, but if you already have a setup that reads files from a centralized logging system, feel free to use that instead. Set the port and protocol to whatever you would like.

For the source type, manually typing in “hpss” (or whatever you named it) should bring up the already-defined source type. Select that, and everything else can remain as is; then go to review and finish. It’s now ready for you to ship the events to it.

Lastly, we need to get the logs from the Console system to Splunk. Again, this differs depending on your OS. I will show one way to do this on Windows and one for Linux, though there are numerous ways to do it. In both cases, replace the IP and port with those of your Splunk instance.

On Windows, you can use NXLog or another eventlog-to-syslog shipper. After installing NXLog, add the following to its configuration file.

define ROOT C:\Program Files\nxlog
#define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

<Input in>
Module im_msvistalog
Query <QueryList>\
<Query Id="0">\
<Select Path="HPConsole">*</Select>\
</Query>\
</QueryList>
SavePos TRUE

</Input>

<Output out>
Module om_udp
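# om_udp ships over UDP; use om_tcp here instead if your Splunk data input listens on TCP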
Host 192.168.232.6
Port 1514
</Output>

<Route 1>
Path in => out
</Route>

On Linux with rsyslog, create a conf file (e.g., in /etc/rsyslog.d/) containing the following line. Note that “@@” forwards over TCP, while a single “@” forwards over UDP – match whichever protocol you chose for the Splunk data input.

:msg,contains,"HPSS Agent" @@192.168.232.6:1514

Now Splunk should be receiving any HPSS events sent to it, storing them in the defined index, and extracting the fields during search queries.
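
To sanity-check the feed, a quick search along these lines should show events with the extracted fields (this assumes you named the index “hpss” – adjust to match your setup):

index=hpss sourcetype=hpss
| stats count by Honeypoint_Agent, Attacker_IP, Port
| sort -count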

In the future, we can look at creating some graphs and analyzing the events received. If there is any interest, I can look at creating a Splunk app to configure all of this for you.