Basic Logging Advice

Logging and monitoring are two important aspects of any security program. Without logging, we cannot understand how our systems operate, and without monitoring, we cannot detect anomalies and issues before they become problems.

There are many different types of logs available to us today. Some are generated automatically, while others require manual intervention. For instance, network traffic is usually logged automatically, but application logs often are not; we may need to build that logging into our applications ourselves.

Application logs provide valuable information about what happened during the execution of an application. They can show us which parts of the application were executed, what resources were used, and what was returned. Application logs are often stored in databases, allowing us to query them later.
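
As an illustration, here is a minimal sketch of structured application logging to a database, assuming SQLite is an acceptable store; the table layout and field names are ours for illustration only, so adapt them to your application.

```python
# A minimal sketch: write structured application log entries to SQLite so
# they can be queried later. Schema and field names are illustrative.
import json
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("app_logs.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS app_log (ts TEXT, level TEXT, event TEXT, detail TEXT)"
)

def log_event(level: str, event: str, **detail) -> None:
    """Record one application event with a UTC timestamp."""
    conn.execute(
        "INSERT INTO app_log VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), level, event, json.dumps(detail)),
    )
    conn.commit()

log_event("INFO", "login", user="alice", source_ip="203.0.113.7", result="success")

# Because entries live in a database, later review is a simple query:
for row in conn.execute("SELECT * FROM app_log WHERE event = 'login'"):
    print(row)
```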

Network logs are also useful. They allow us to see which packets were sent and received, and what responses were returned.

System logs are another type of log that we should consider. System logs record events such as system startup, shutdown, reboots, etc. They are generally stored in files, but can also be recorded in databases.

While logs are very helpful, they do have their limitations:

  • First, logs are only as good as the people and systems that generate them. If something doesn’t save a log, then we likely don’t know what happened. We might be able to reconstruct the event from some other log, but having multiple layers of logs around an event is often useful.
  • Second, logs are static. Once created, they should remain unchanged. Hashing logs, storing them on read-only file systems and other forms of log integrity controls are highly suggested (see the sketch after this list).
  • Third, logs are not always accurate. Sometimes, logs contain false positives, meaning that something appears to be happening when actually nothing is. False negatives are also possible, meaning we don’t alert on something we should have. Logs are part of a detection solution, not the sole basis of one.
  • Fourth, logs are not always actionable. That means we can’t easily tell from a log whether something bad has occurred or if it is just noise. This is where log familiarity and anomaly detection come in. Sometimes reviewing logs in aggregate and looking for trends is more helpful than individual line-by-line analysis. The answer may be in looking for haystacks instead of needles…
  • Finally, logs are not always timely. They might be created after the fact, and therefore won’t help us identify a problem until much later. While good log analysis can help create proactive security through threat intelligence, logs are most powerful when analyzing events that have already happened or as sources of forensic data.
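
As a concrete example of the integrity controls suggested above, here is a minimal sketch that fingerprints a log file with SHA-256 so that later tampering can be detected; the file path is a placeholder.

```python
# A minimal sketch of tamper-evidence for a log file; "auth.log" is a
# placeholder path.
import hashlib

def hash_log_file(path: str) -> str:
    """Return the SHA-256 digest of a log file for later comparison."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Store this value somewhere the logging host cannot alter (a separate
# system or write-once media); re-hash the file later to detect changes.
print(hash_log_file("auth.log"))
```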

Keep all of these things in mind when considering logging tools, designing monitoring techniques or building logs for your systems and applications.

How often should security logs be reviewed?

Security logs are one of the most important components of any security program. They provide insight into how well your security program is working, and they serve as a valuable source of intelligence for incident response. However, they are not perfect; they can contain false positives and false negatives. As a result, they need to be reviewed regularly to ensure they are providing accurate information.

There are two main reasons why security log reviews are necessary. First, they allow you to identify problems before they become serious incidents. Second, they allow you to determine whether your current security measures are effective.

When reviewing logs, look for three things:

1. Incidents – These are events that indicate something has gone wrong. For example, a firewall blocking access to a website, or virus scanning software alerting you to a malware infection.

2. False Positives – These are alerts that don’t represent anything actually happening. For example, a virus scanner warning you about a file that was downloaded from the Internet without any infection identified.

3. False Negatives – These are alerts that do represent something actually happening, but were missed because of a flaw in the system. For example, a server being accessed remotely, but no alarms raised.

Reviewing logs every day is recommended. If you review logs daily, you will catch issues sooner and prevent them from becoming major incidents. This work should be rotated among members of the security team, or supplemented with automated methods, so that fatigue does not diminish its quality.

Peer reviewing logs weekly is also recommended. It allows you to spot trends and anomalies that might otherwise go unnoticed by a single reviewer. It also gives a second set of eyes on the logs, and helps guard against fatigue or bias-based errors.

Finally, aggregated trend-based monthly reviews are recommended. This gives you a chance to look back and see if there have been any changes to your environment that could affect your security posture or represent anomalies. This is a good place to review items like logged events per day, per system, trends on specific log events and the like. Anomalies should be investigated. Often, this level of log review is great for spotting changes to the environment or generating threat intelligence.
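
To make the aggregate, trend-based idea concrete, here is a minimal sketch that counts events per day and per system and flags unusually busy days; it assumes plain-text logs that start with an ISO date followed by a hostname, so adjust the parsing to your actual format.

```python
# A minimal sketch of trend-based log review: count events per day and per
# system, then flag days with well-above-average volume for investigation.
from collections import Counter

events_per_day = Counter()
events_per_host = Counter()

with open("syslog.txt") as fh:  # placeholder file name
    for line in fh:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        day, host = parts[0][:10], parts[1]
        events_per_day[day] += 1
        events_per_host[host] += 1

# A crude anomaly flag: days at more than twice the average volume.
average = sum(events_per_day.values()) / max(len(events_per_day), 1)
for day, count in sorted(events_per_day.items()):
    marker = "  <-- investigate" if count > 2 * average else ""
    print(f"{day}: {count}{marker}")

print("Busiest systems:", events_per_host.most_common(3))
```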

If you want to learn more about how to conduct log reviews effectively, reach out to us at info@microsolved.com. We’re happy to help!

How long should security logs be kept?

Security logs are a great source of information for incident response, forensics, and compliance purposes. However, log retention policies vary widely among organizations. Some keep logs indefinitely; others only retain them for a certain period of time. Logging practices can impact how much useful information is available after a compromise has occurred.

In general, the longer logs are retained, the better. But there are several factors to consider when determining how long to keep logs. These include:

  • What type of system is being monitored?
  • Is the system mission-critical?
  • Are there any legal requirements regarding retention of logs?
  • Does the company have a policy regarding retention of logs? If so, does it match industry standards?
  • How often do incidents occur?
  • How many employees are affected by each incident?
  • How many incidents are reported?
  • How many hours per day are logs collected?
  • How many days per week are logs collected?

It is important to understand the business needs before deciding on a retention policy. For example, if a company has a policy of retaining logs for 90 days, then it is reasonable to assume that 90 days is sufficient for the majority of situations. However, if a company has no retention policy, then it is possible that the logs could be lost forever.

Logs are one of the most valuable sources of information during an investigation. It is important to ensure that the right people have access to the logs and that they are stored securely. In addition, it is important to know how long logs need to be kept.

MicroSolved provides a number of services related to logging and monitoring. We can help you create logging policies and practices, as well as design log monitoring solutions. Drop us a line at info@microsolved.com if you’d like to discuss logging and logging solutions.

What should be in a security log?

Logging is one of the most important aspects of any security program. It provides a record of events that occur within your environment, which allows you to understand how your systems are being used and what vulnerabilities exist. Logging helps you identify issues before they become problems, and it gives you insight into what happened after the fact.

There are many different types of logs, each with its own purpose. Some logs are designed to provide information about system activity, while others are intended to capture information about network traffic or application behavior. There are also different levels of logging, ranging from basic records of actions taken by applications, to detailed records of every event that occurs during the execution of an application.

In general, the more detail you can include in your logs, the better. For instance, if you’re looking for evidence of a compromise, you’ll need to look for signs of unauthorized access to your systems. A log entry that includes details about the IP addresses involved in the request will allow you to correlate the requests with the users making them. Similarly, if you’re trying to determine whether a particular file was accessed by someone else, you’ll need to examine the contents of the log entries associated with that file.

As you consider what type of logs to create, keep in mind that not all logs are created equal. In addition, not all logs are equally useful. For example, a log of HTTP requests might be helpful in determining whether a web server has been compromised, but it won’t tell you much about the nature of the threat. On the other hand, a log of failed login attempts could indicate that a malicious actor is attempting to gain access to your systems.

The best way to decide what kind of logs to create is to think about the specific threats you face and the kinds of information you want to collect. If you’re concerned about a particular type of threat, such as phishing emails, then you’ll probably want to track email messages sent to your domain. If you’re worried about malware infections, you’ll likely want to monitor the activities of your users’ computers.

In general, at a minimum, make sure the elements of a common logging format are included, and build from there.
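
As a starting point, here is a minimal sketch of the baseline elements we generally want in every security-relevant entry (who, what, when, where and outcome); the field names are illustrative rather than a formal standard.

```python
# A minimal sketch of the baseline fields a security-relevant log entry
# should carry; field names are illustrative, not a formal standard.
import json
from datetime import datetime, timezone

entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),  # when
    "source_ip": "203.0.113.7",                           # where from
    "user": "alice",                                      # who
    "action": "file_download",                            # what
    "target": "/reports/q3.pdf",                          # what was touched
    "outcome": "success",                                 # result
}
print(json.dumps(entry))
```

If you need assistance with log design or help determining and implementing a logging strategy, drop us a line at info@microsolved.com. We’re happy to help!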

New Federal Banking Rule Requires Notifying Regulators of Cyber Incident Within 36 Hours

Here is a new reason to get your cybersecurity incident response program in order: federal banking regulators have issued a new rule requiring banks to notify regulators of “qualifying” cybersecurity incidents within 36 hours of recognition. The rule was issued jointly by the FDIC, the Federal Reserve and the Office of the Comptroller of the Currency, and takes effect on April 1, 2022.

It’s not as bad as it seems, though. According to the rule, a computer security incident is defined as an occurrence that “results in actual harm to the confidentiality, integrity or availability of an information system or the information that that system processes, stores or transmits.” However, a computer security incident that must be reported according to the new timeline is one that has disrupted or degraded a bank’s operations and its ability to deliver services to a material portion of its customer base and to business lines. Since this is somewhat nebulous, they also listed a number of examples of incidents requiring 36-hour notification. These include (but are not limited to):

  • A failed system upgrade resulting in widespread user outage.
  • A large-scale DDoS attack disrupting account access for more than four hours.
  • A ransomware attack that encrypts core banking systems or backup data.
  • A bank service provider experiencing a widespread system outage.
  • A computer hacking incident disabling banking operations for an extended period of time.
  • An unrecoverable system failure resulting in activation of business continuity / disaster recovery plan.
  • Malware on a bank’s network that poses an imminent threat to core business lines or critical operations.

This same rule also requires banking service providers to notify at least one bank-designated point of contact at each affected customer banking organization “as soon as possible” when the service provider has experienced a computer security incident that disrupts services for 4 hours or more.

Although 36 hours seems like an adequate amount of time for banks to notify the FDIC, in reality this time is very short indeed. From having worked with financial institutions that have suffered various compromises in the past, we know that determining if an incident is real, and then determining exactly what happened, when, how and by whom it was perpetrated, are thorny problems that can take days to figure out. There is also the reality to consider that modern cyberattacks often have multiple stages in which one attack is used to obfuscate other insidious attacks that are launched during the confusion. The regulators have been working with the banking industry to try to craft requirements that do not overly burden the affected financial institutions during times of crisis, but who knows how well that will work? Guess we’ll see next spring!

Automating SSL Certificate Management with Certbot and Let’s Encrypt

As we posted previously, following best practices for SSL certificate management is critical to properly secure your site. In that post, we discussed automating certificate management as a best practice. This post is an example of how to do just that.
 
To do so, we will use the highly-trusted free certificate provider Let’s Encrypt. We will also leverage the free certificate automation tool Certbot.
 

Installing Certbot

Installing Certbot is pretty easy, overall, but you do need to be comfortable with the command line and generally know how to configure your chosen web server. That said, if you check out the Certbot site, you will find a dropdown menu that will let you pick your chosen web server and operating system. Once you make your selections, simply follow the on-screen step-by-step instructions. In our testing, we found them to be complete and intuitive.
 

That’s It!

Following the on-screen instructions will:

  • Install Certbot
  • Configure your web server for the certificate
  • Generate, retrieve and install the certificate
  • Implement automatic renewals of the certificate to prevent expiration

You can literally go from a basic website to fully implemented and automated SSL in a matter of moments. Plenty of support is available from EFF for Certbot, or via Let’s Encrypt. In our testing, we ran into no issues and the implementation completed successfully each time.
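
If you want to confirm that the renewal automation is actually working, Certbot provides a dry-run mode (certbot renew --dry-run). Here is a minimal sketch that drives it from Python; the command normally requires root privileges, so confirm both the flags and the permissions against your installed Certbot version.

```python
# A minimal sketch that exercises Certbot's renewal path without touching
# real certificates. "certbot renew --dry-run" is a documented Certbot
# command; it typically must run as root.
import subprocess

result = subprocess.run(
    ["certbot", "renew", "--dry-run"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("Renewal dry run failed; check the Certbot configuration:")
    print(result.stderr)
```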

Give it a shot! This might be one of the easiest and most effective security controls to automate. Together, Certbot and Let’s Encrypt can create a no-cost cryptography solution for your web sites in a very short amount of time.

SSL Certificate High-Level Best Practices

SSL certificates are an essential part of online security. They enable the encryption that protects information such as credit card numbers and passwords from theft in transit. In addition, they help assure customers that they can trust the site and its content.

Almost 50% of the top one million websites use HTTPS by default (they redirect requests for HTTP pages to URLs with HTTPS). (comodosslstore.com) As such, even pages that don’t deal with confidential data are being deployed using SSL. The underlying certificates that power the encryption are available from a variety of commercial providers, and even from the pro-bono resource https://letsencrypt.org. No matter where you get your certificate, here are a few resources for high-level best practices.

Trust Your Certificate Provider

Since certificates provide the basis for the cryptography of your site, their source is important. You can find a list of trustworthy certificate providers here: https://www.techradar.com/news/best-ssl-certificate-provider. Beware of commercial providers not found on this list, as some of them may be sketchy at best, or dangerous at worst. Remember, the Let’s Encrypt project above is also highly trusted, even though it is not a commercial firm.

Manage Versions and Algorithms

Make sure you disable SSL (all versions) and TLS 1.0 on the server; these have known vulnerabilities. If possible, and there is no impact on your users, consider disabling TLS 1.1 and 1.2 as well. TLS 1.3 fixes a lot of the known issues with the protocol and supports only the known secure algorithms.

In cryptography, cipher suites play an important part in securing connections by defining the algorithms used for key exchange, encryption and message integrity. You shouldn’t be using an old version of a cryptographic protocol if there’s a newer one available; otherwise, you may put your site’s security at risk. Using secure cipher suites that support 128-bit (or stronger) encryption is crucial for securing sensitive client communications.

Diffie-Hellman key exchange has been shown to be vulnerable when weaker keys are used; however, there is no known attack against stronger keys such as 2048-bit keys. Make sure you use the strongest settings possible for your server.
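
As an illustration of the version guidance above, here is a minimal sketch using Python’s standard ssl module to refuse anything older than TLS 1.2; most deployments will set the equivalent options in their web server configuration instead.

```python
# A minimal sketch of enforcing modern TLS versions with Python's standard
# ssl module (Python 3.7+).
import ssl

# Create a server-side context and refuse SSL, TLS 1.0, and TLS 1.1.
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# If all of your clients support it, TLS 1.3-only is stricter still:
# ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version)  # sanity check: TLSVersion.TLSv1_2
```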

Manage and Maintain Certificate Expiration

As of Sept. 1, 2020, Apple’s Safari browser will no longer trust certificates with validity periods longer than 398 days, and other browsers are likely to follow suit. Reducing validity periods reduces the time period in which compromised or bogus certificates can be exploited. As such, any certificates using retired encryption algorithms or protocols will need to be replaced sooner. (searchsecurity.techtarget.com)

Maintain a spreadsheet or database of your certificate expiration dates for each relevant site. Make sure to check it frequently for expiring certificates to avoid user issues and browser error messages. Even better is to use an application or certificate management platform that alerts you well in advance of upcoming certificate expirations – that way, you can plan accordingly. Best of all, if possible, embrace tools and frameworks that automate certificate management and rotation – those make it far less likely that you will have expiration issues. Most popular web frameworks now have tools and plugins available to perform this for you.
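
If you want to script such a check rather than rely on a spreadsheet alone, here is a minimal sketch that asks each site for its certificate and reports the days remaining; it assumes direct TLS access on port 443, and the host list is a placeholder. Note that a connection to a site with an already-invalid certificate will raise an error rather than report a count.

```python
# A minimal sketch of a certificate expiration checker using only the
# standard library. Host list and threshold are placeholders.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch a site's certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() reports notAfter like "Jun  1 12:00:00 2025 GMT".
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in ["www.example.com"]:  # placeholder host list
    remaining = days_until_expiry(host)
    if remaining < 30:
        print(f"WARNING: {host} certificate expires in {remaining} days")
    else:
        print(f"{host}: {remaining} days remaining")
```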

Protect Your Certificates and Private Keys

Remember that your certificate is not only a basis for cryptography, but is also a source of identification and reputation. As such, you need to make sure that all certificates are stored properly, securely and in trusted locations. Make sure that web users can’t access the private certificate files, and that you have adequate backup and restore processes in place.

Make sure that you also protect the private keys used in certificate generation. Generate them offline, if possible, protect them with strong passwords and store them in a secure location. Generate a new private key for each certificate and each renewal cycle.
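
As one way to do this, here is a minimal sketch that generates a password-protected 2048-bit RSA key, assuming the third-party Python cryptography package is installed; many teams use the openssl command line for the same task, and the passphrase shown is obviously a placeholder.

```python
# A minimal sketch of generating an encrypted private key with the
# third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    # Placeholder passphrase -- use a strong, unique secret in practice.
    encryption_algorithm=serialization.BestAvailableEncryption(b"use-a-strong-passphrase"),
)

with open("server.key", "wb") as fh:  # keep offline / in a secure location
    fh.write(pem)
```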

Revoke your certificate or keys as quickly as possible if you believe they have been compromised.

Following these best practices will go a long way to making your SSL certificate processes safer and more effective. Doing so protects your users, your reputation and your web sites. Make sure you check back with your certificate provider often, and follow any additional practices they suggest.

Value of an ISSA Membership

One of the most common questions that mentees ask me is about membership in different groups and organizations. One of the most valuable in the Central Ohio area is ISSA (Information Systems Security Association International). Here are a few reasons why we believe in ISSA, their mission and their work.

Specific Value of an ISSA Membership

The ISSA is the community of choice for international professionals who are interested in furthering individual growth, managing technology risk, and protecting critical information and infrastructure.

A few key reasons that a cybersecurity professional would want to join ISSA are listed below.

  • Chapters Around The World – ISSA provides educational opportunities and local networking for information security professionals. ISSA’s members can become your strongest allies when needed, and there are 157 chapters around the world.
  • Build Your Knowledge and Reputation – There are opportunities for active participation at Board and Chapter levels. You can use the ISSA Journal and KSEs to share your insights with the industry if you are an ISSA author or speaker. If you have innovative ways to solve problems, have applied security technology to address risks, or have case studies of how you have done it, then your ideas on security challenges, management, and innovation will go a long way in establishing you as a thought leader.
  • Network Like a Pro – Make new contacts and deepen existing ones on a regular basis. ISSA offers a lot of networking opportunities beyond exchanging business cards. Forging lasting ties with others who have the same professional interests and concerns is one of the things you can do as you attend local chapter meetings, become involved on a committee or take a prominent leadership role. These relationships will become sources of inspiration and ideas. Networking contacts are also a great resource for benchmarking security practices and validating security product features.
  • Grow Your Career – The training you receive through the ISSA will give you a means to find potential career opportunities and can help get you noticed by those looking for someone to join their team. The ISSA sponsors many meetings and conferences that you can attend in order to earn CPEs for various certifications.
  • Learn for a Lifetime – The annual conference and chapter meetings are vital educational and professional resources that provide in-depth and timely information about the information security industry. Meetings and events can help you develop skills and solve problems. In addition to comprehensive workshops, seminars and knowledgeable guest speakers, there are presentations on new technologies. ISSA gives members additional discounts to security conferences.

Summary

In summary, I think that joining ISSA is worth every penny, especially if you want to progress from beginner to practitioner to expert. It’s some of the best money you can spend in terms of ROI for growing your knowledge and your reputation in the community.

 

Is it Possible to Identify all Risks Associated with a Project, Program or System?

How good is risk assessment? Can a risk assessment actually identify all the risks that might plague a particular project, program or system? The short answer is no, not entirely.

Since humans became sentient and gained the ability to reason, we have been using our logical ability to attempt to see into the future and determine what may be coming next. We see what is going on around us, we remember what has happened in the past, we learn what others have experienced and we use that information as our guide to calculating the future. And that paradigm has served us well, generally speaking. We have the logical ability to avoid previously made mistakes and predict future trends pretty well. However, we never get it 100% right. It is a truism that every system ever designed to protect ourselves and our assets has been defeated sooner or later. That is why a risk engineer will never tell you that their security measures will provide you with a zero-risk outcome. All you can do is lessen risk as much as possible.

One reason for this is an imperfect understanding of all the factors that contribute to risk for any given system or situation. These factors include understanding exactly what we are attempting to protect, understanding threats that menace the asset, understanding mechanisms that we have in place to protect the asset and understanding weaknesses that those threats may be able to exploit to defeat our protection mechanisms. If any one of these factors is imperfectly understood and integrated with the other factors involved, risk cannot be wholly eliminated.

Understanding what we are trying to protect is usually the easiest factor to address, especially if the asset is something simple like money or our home. However, even this task can become daunting when you are trying to entirely understand something as complex as a software application or a computer network. These sorts of things are often composed of parts that we ourselves have not constructed, such as standard bits of code or networking devices that we simply employ in our bigger design but do not completely understand.

Understanding threats that menace our assets is more difficult. We are pretty good at protecting ourselves against threats that attackers have employed before. But the problem lies in innovative threats that are entirely new, or that are novel uses and combinations of previously identified threats. These are the big reasons why we are always playing catch-up with attackers.

Understanding the mechanisms we have in place to protect our assets is another area we can accomplish fairly well, but even this factor is often imperfectly understood. For example, how many of you have purchased a security software package to protect your network, but then have trouble getting it to work to its greatest effect because your team doesn’t have a handle on all of its complexities? We have seen this often in our work.

Finally, understanding weaknesses in our protection mechanisms may be the hardest factor of all to deal with. Often, security vulnerabilities go unrecognized until some clever attacker comes up with a zero-day exploit to take advantage of them. Or sometimes simple vulnerabilities seem easy to protect against until someone figures out that you can string a few of them together to effect a big compromise.

So, to get the most out of risk assessment, you need to gain the greatest understanding possible of all the factors that make up risk. In addition, you need to guard against complacency: ensure that you are protecting your assets to the greatest extent your ability and budget will allow, and be prepared for those times when your efforts fail and a security compromise does occur.

Why Penetration Testing Should Accompany Vulnerability Assessment

Twenty years ago, the world of network security was a whole different ballgame. At that time, the big threat was external attackers making their way onto your network and wreaking havoc. As hard as it is to believe now, many businesses and organizations did not even employ firewalls on their networks at that time! The big push among network security professionals then was to ensure that everyone had good firewalls, or “network perimeter” security, in place. This is the time when vulnerability assessment of distributed computer networks became big.

Vulnerability assessment entails examining networks for weaknesses such as exposed services and misconfigurations that could be exploited by attackers to gain access to private information and systems. This type of testing was encouraged by professionals to give businesses and organizations information about the weaknesses that were actually present at the time of testing. At first, vulnerability assessment was usually only conducted against the external network (that part of the network that is visible from outside the business, usually over the Internet).

Most businesses and organizations embraced the need for firewalls and external vulnerability assessments as time progressed. This was not only because doing so made good sense, but because of regulatory requirements penned to meet the requirements of modern laws such as HIPAA, GLBA and SOX. However, many did not see the need for other security studies such as internal vulnerability assessment (VA). Internal VA is like external VA, but looks for weaknesses on the internal network used by employees, partners and service providers that have been granted access and privileges to internal systems and services. The need for internal VA became increasingly important as cybercriminals found ways to worm their way into internal networks or the networks of service providers or partners. As more time passed, and network attacks increased in volume and competency, internal VA became more commonly performed among businesses and organizations.

Unfortunately, despite the increase in vulnerability studies, networks continued to be compromised. One of the reasons for this is the limited nature of vulnerability assessment. When a VA is performed, the assessors usually employ network scanning tools such as Nessus. The outputs of these tools show where vulnerabilities exist on the network, and even provide the consumer with recommendations for closing the security holes that were found. But a VA doesn’t go so far as to see whether these vulnerabilities can actually be exploited by attackers. Also, these tools are limited, and do not show how the network may be vulnerable to combination attacks in which cybercriminals combine various weaknesses (technical, procedural and configuration weaknesses) on the network to foment big compromises. That is where penetration testing comes into play.

Penetration testing is not automated. It requires expert network security personnel to undertake properly. In penetration testing, the assessor employs the results of vulnerability studies and their own expertise to try to actually penetrate network security mechanisms just as a real-world cybercriminal would do. Obviously, the smarter and more knowledgeable the penetration tester is, the more valid the results they obtain. And for the consumer this can be a great boon.

It is true that penetration testing costs more money than performing vulnerability studies alone. What is little appreciated is the money it can save an organization in the long run. Not only can penetration testing uncover those tricky combined attacks mentioned above, it can also reveal which vulnerabilities found during VA are not presently exploitable by attackers to any great effect. This can save organizations from spending inordinate amounts of time and money fixing vulnerabilities that pose little actual risk, and allows them to concentrate their resources on those network flaws that present the most real danger to the organization.