How long should security logs be kept?

Security logs are a great source of information for incident response, forensics, and compliance purposes. However, log retention policies vary widely among organizations. Some keep logs indefinitely; others only retain them for a certain period of time. Logging practices can impact how much useful information is available after a compromise has occurred.

In general, the longer logs are retained, the better. But, there are several factors to consider when determining how long to keep logs. These include:

• What type of system is being monitored?

• Is the system mission-critical?

• Are there any legal requirements regarding retention of logs?

• Does the company have a policy regarding retention of logs? If so, does it match industry standards?

• How often do incidents occur?

• How many employees are affected by each incident?

• How many incidents are reported?

• How many hours per day are logs collected?

• How many days per week are logs collected?

It is important to understand the business needs before deciding on a retention policy. For example, if a company has a policy of retaining logs for 90 days, then it is reasonable to assume that 90 days is sufficient for the majority of situations. However, if a company has no retention policy, then it is possible that the logs could be lost forever.

Logs are one of the most valuable sources of information during an investigation. It is important to ensure that the right people have access to the logs and that they are stored securely. In addition, it is important to know how long logs need to be kept.

MicroSolved provides a number of services related to logging and monitoring. We can help you create logging policies and practices, as well as design log monitoring solutions. Drop us a line at info@microsolved.com if you’d like to discuss logging and logging solutions.

What should be in a security log?

Logging is one of the most important aspects of any security program. It provides a record of events that occur within your environment, which allows you to understand how your systems are being used and what vulnerabilities exist. Logging helps you identify issues before they become problems, and it gives you insight into what happened after the fact.

There are many different types of logs, each with its own purpose. Some logs are designed to provide information about system activity, while others are intended to capture information about network traffic or application behavior. There are also different levels of logging, ranging from basic records of actions taken by applications, to detailed records of every event that occurs during the execution of an application.

In general, the more detail you can include in your logs, the better. For instance, if you’re looking for evidence of a compromise, you’ll need to look for signs of unauthorized access to your systems. A log entry that includes details about the IP addresses involved in the request will allow you to correlate the requests with the users making them. Similarly, if you’re trying to determine whether a particular file was accessed by someone else, you’ll need to examine the contents of the log entries associated with that file.
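
As a simple illustration of why that detail matters, here is a minimal sketch, in Python, that pulls source IPs and usernames out of web server access logs so requests can be tied back to the users making them. It assumes the common "combined" access log format; the log path is a placeholder, not something from a specific environment.

```python
# Minimal sketch: extract (source IP, user) pairs from access log entries so
# requests can be correlated with the users making them. Assumes the common
# "combined" log format; the file path is a placeholder.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+'
)

requests_by_ip = Counter()
with open("/var/log/nginx/access.log") as log:
    for line in log:
        match = LOG_LINE.match(line)
        if not match:
            continue
        fields = match.groupdict()
        requests_by_ip[(fields["ip"], fields["user"])] += 1

# The highest-volume (IP, user) pairs are a starting point for correlation.
for (ip, user), count in requests_by_ip.most_common(10):
    print(f"{ip:15}  user={user:10}  requests={count}")
```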

As you consider what type of logs to create, keep in mind that not all logs are created equal. In addition, not all logs are equally useful. For example, a log of HTTP requests might be helpful in determining whether a web server has been compromised, but it won’t tell you much about the nature of the threat. On the other hand, a log of failed login attempts could indicate that a malicious actor is attempting to gain access to your systems.

The best way to decide what kind of logs to create is to think about the specific threats you face and the kinds of information you want to collect. If you’re concerned about a particular type of threat, such as phishing emails, then you’ll probably want to track email messages sent to your domain. If you’re worried about malware infections, you’ll likely want to monitor the activities of your users’ computers.

In general, as a minimum, make sure the elements of the common logging format are included and build from there. If you need assistance with log design or help determining and implementing a logging strategy, drop us a line at info@microsolved.com. We’re happy to help! 
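
As an illustration of that baseline, here is a minimal sketch, in Python, of a structured security log entry carrying the kinds of fields discussed above (timestamp, source IP, user, action and outcome). The field names and JSON layout are our own illustration, not a formal standard.

```python
# Minimal sketch of a structured security log entry with baseline fields:
# timestamp, source IP, user, action, outcome. Field names are illustrative.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_security_event(source_ip: str, user: str, action: str, outcome: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_ip": source_ip,
        "user": user,
        "action": action,
        "outcome": outcome,
    }
    logger.info(json.dumps(entry))

# Example: a failed login attempt, one of the events called out above.
log_security_event("203.0.113.7", "jsmith", "login", "failure")
```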

New Federal Banking Rule Requires Notifying Regulators of Cyber Incident Within 36 Hours

Here is a new reason to get your cybersecurity incident response program in order: federal banking regulators have issued a new rule requiring banks to notify regulators of “qualifying” cybersecurity incidents within 36 hours of recognition. The rule was issued jointly by the FDIC, the Federal Reserve and the Office of the Comptroller of the Currency, and takes effect on April 1, 2022.

It’s not as bad as it seems, though. According to the rule, a computer security incident is defined as an occurrence that “results in actual harm to the confidentiality, integrity or availability of an information system or the information that the system processes, stores or transmits.” However, a computer security incident that must be reported on the new timeline is one that has disrupted or degraded a bank’s operations and its ability to deliver services to a material portion of its customer base and business lines. Since this is somewhat nebulous, the regulators also listed a number of examples of incidents requiring 36-hour notification. These include (but are not limited to):

  • A failed system upgrade resulting in widespread user outage.
  • A large-scale DDoS attack disrupting account access for more than four hours.
  • A ransomware attack that encrypts core banking systems or backup data.
  • A bank service provider experiencing a widespread system outage.
  • A computer hacking incident disabling banking operations for an extended period of time.
  • An unrecoverable system failure resulting in activation of business continuity / disaster recovery plan.
  • Malware on a bank’s network that poses an imminent threat to core business lines or critical operations.

This same rule also requires banking service providers to notify at least one bank-designated point of contact at each affected customer banking organization “as soon as possible” when the service provider has experienced a computer security incident that disrupts services for 4 hours or more.

Although 36 hours seems like an adequate amount of time for banks to notify the FDIC, in reality this time is very short indeed. From our work with financial institutions that have suffered various compromises in the past, we know that determining whether an incident is real, exactly what happened, when and how it happened, and who perpetrated it are thorny problems that can take days to figure out. There is also the reality that modern cyberattacks often have multiple stages, in which one attack is used to obfuscate other insidious attacks launched during the confusion. The regulators have been working with the banking industry to try to craft requirements that do not overly burden the affected financial institutions during times of crisis, but who knows how well that will work? Guess we’ll see next spring!

Automating SSL Certificate Management with Certbot and Let’s Encrypt

As we posted previously, following best practices for SSL certificate management is critical to properly secure your site. In that post, we discussed automating certificate management as a best practice. This post is an example of how to do just that.
 
To do so, we will use the highly-trusted free certificate provider Let’s Encrypt. We will also leverage the free certificate automation tool Certbot.
 

Installing Certbot

Installing Certbot is pretty easy, overall, but you do need to be comfortable with the command line and generally know how to configure your chosen web server. That said, if you check out the Certbot site, you will find a dropdown menu that will let you pick your chosen web server and operating system. Once you make your selections, simply follow the on-screen step-by-step instructions. In our testing, we found them to be complete and intuitive.
 

That’s It!

Following the on-screen instructions will:

  • Install Certbot
  • Configure your web server for the certificate
  • Generate, retrieve and install the certificate
  • Set up automatic renewals of the certificate to prevent expiration

You can literally go from a basic website to fully implemented and automated SSL in a matter of moments. Plenty of support is available from EFF for Certbot, or via Let’s Encrypt. In our testing, we ran into no issues and the implementation completed successfully each time.

Give it a shot! This might be one of the easiest and most effective security controls to automate. Together, Certbot and Let’s Encrypt can create a no-cost cryptography solution for your web sites in a very short amount of time.

SSL Certificate High-Level Best Practices

SSL certificates are an essential part of online security. They protect websites against hackers who try to steal information such as credit card numbers and passwords. In addition, they ensure that customers trust the site and its content.

Almost 50% of the top one million websites use HTTPS by default, redirecting requests for HTTP pages to HTTPS URLs (comodosslstore.com). As such, even pages that don’t deal with confidential data are being deployed using SSL. The underlying certificates that power the encryption are available from a variety of commercial providers, and even from the pro-bono resource https://letsencrypt.org. No matter where you get your certificate, here are a few high-level best practices.

Trust Your Certificate Provider

Since certificates provide the basis for the cryptography for your site, their source is important. You can find a list of trustworthy certificate providers here: https://www.techradar.com/news/best-ssl-certificate-provider. Beware of commercial providers not found on such lists, as some of them may be sketchy at best, or dangerous at worst. Remember, the Let’s Encrypt project mentioned above is also highly trusted, even though it is not a commercial firm.

Manage Versions and Algorithms

Make sure you disable all versions of SSL and TLS 1.0 on the server; they have known vulnerabilities. If possible, and if there is no impact on your users, consider disabling TLS 1.1 and 1.2 as well. TLS 1.3 fixes many of the known issues with the protocol and supports only algorithms currently considered secure.

In cryptography, cipher suites play an important part in securing connections by determining which key exchange, authentication and encryption algorithms a connection uses. You shouldn’t be using an old version of a cryptographic protocol if there’s a newer one available; otherwise, you may put your site’s security at risk. Using secure cipher suites that support 128-bit (or stronger) encryption is crucial for securing sensitive client communications.

Diffie-Hellman key exchange has been shown to be vulnerable when weak keys are used; however, there is no known practical attack against stronger keys such as 2048 bits. Make sure you use the strongest settings possible for your server.
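
To illustrate the guidance above, here is a minimal sketch, using Python’s standard ssl module, of a server-side TLS context that drops the older protocol versions and restricts cipher selection. The certificate and key paths are placeholders, and whether you can require TLS 1.3 only depends on your client base.

```python
# Minimal sketch: a TLS server context limited to TLS 1.2+ and modern,
# 128-bit-or-stronger AEAD cipher suites. "server.crt" and "server.key"
# are placeholder paths.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2    # rejects SSL, TLS 1.0 and 1.1
# context.minimum_version = ssl.TLSVersion.TLSv1_3  # stricter, if your clients allow it
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # forward-secret, AEAD-only suites
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
```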

Manage and Maintain Certificate Expiration

As of Sept. 1, 2020, Apple’s Safari browser no longer trusts certificates with validity periods longer than 398 days, and other browsers are likely to follow suit. Reducing validity periods reduces the window in which compromised or bogus certificates can be exploited. It also means that any certificates using retired encryption algorithms or protocols will need to be replaced sooner. (searchsecurity.techtarget.com)

Maintain a spreadsheet or database of your certificate expiration dates for each relevant site, and check it frequently for expiring certificates to avoid user issues and browser error messages. Even better is to use an application or certificate management platform that alerts you well in advance of upcoming certificate expirations so you can plan accordingly. Best of all, if possible, embrace tools and frameworks that automate certificate management and rotation – that way you are far less likely to have expiration issues. Most popular web frameworks now have tools and plugins available to perform this for you.
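
As a starting point for that kind of tracking, here is a minimal sketch, in Python, that checks how many days remain on each site’s certificate and flags anything expiring soon. The host list and the 30-day warning threshold are placeholders to adjust for your own environment.

```python
# Minimal sketch: report days remaining on each site's TLS certificate and
# flag anything expiring soon. Hosts and threshold are placeholders.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.com", "example.org"]  # replace with your own inventory
WARN_DAYS = 30

def days_until_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in HOSTS:
    remaining = days_until_expiry(host)
    flag = "RENEW SOON" if remaining <= WARN_DAYS else "ok"
    print(f"{host:20} {remaining:4} days remaining  [{flag}]")
```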

Protect Your Certificates and Private Keys

Remember that your certificate is not only a basis for cryptography, but also a source of identification and reputation. As such, you need to make sure that all certificates are stored properly, securely and in trusted locations. Make sure that web users can’t access the private certificate files, and that you have adequate backup and restore processes in place.

Make sure that you also protect the private keys used in certificate generation. Generate them offline, if possible, protect them with strong passwords and store them in a secure location. Generate a new private key for each certificate and each renewal cycle.
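
For example, here is a minimal sketch of generating a new RSA private key and protecting it with a passphrase, using the third-party Python cryptography package (pip install cryptography). The key size, passphrase and output path are placeholders; ideally this runs on an offline machine, as noted above.

```python
# Minimal sketch: generate a private key and encrypt it on disk with a
# passphrase. Uses the third-party 'cryptography' package; key size,
# passphrase and output path are placeholders.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    # Encrypts the key at rest with a strong passphrase.
    encryption_algorithm=serialization.BestAvailableEncryption(b"use-a-strong-passphrase"),
)

with open("server.key", "wb") as keyfile:
    keyfile.write(pem)
```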

Revoke your certificate or keys as quickly as possible if you believe they have been compromised.

Following these best practices will go a long way to making your SSL certificate processes safer and more effective. Doing so protects your users, your reputation and your web sites. Make sure you check back with your certificate provider often, and follow any additional practices they suggest.

Value of an ISSA Membership

One of the most common questions that mentees ask me is about membership in different groups and organizations. One of the most valuable in the Central Ohio area is ISSA (Information Systems Security Association International). Here are a few reasons why we believe in ISSA, their mission and their work.

Specific Value of an ISSA Membership

The ISSA is the community of choice for international professionals who are interested in furthering individual growth, managing technology risk, and protecting critical information and infrastructure.

A few key reasons that a cybersecurity professional would want to join ISSA are listed below.

  • Chapters Around The World – ISSA provides educational opportunities and local networking for information security professionals. ISSA’s members can become your strongest allies when needed, and there are 157 chapters around the world.
  • Build Your Knowledge and Reputation – There are opportunities for active participation at Board and Chapter levels. You can use the ISSA Journal and KSEs to share your insights with the industry if you are an ISSA author or speaker. If you have innovative ways to solve problems, have applied security technology to address risks, or have case studies of how you have done it, then your ideas on security challenges, management, and innovation will go a long way in establishing you as a thought leader.
  • Network Like a Pro – Make new contacts and deepen old ones on a regular basis. ISSA offers a lot of networking opportunities beyond exchanging business cards. Forging lasting ties with others who have the same professional interests and concerns is one of the things you can do as you attend local chapter meetings, become involved on a committee or take a prominent leadership role. These relationships will become sources of inspiration and ideas. Networking contacts are also a great resource for benchmarking security practices and validating security product features.
  • Grow Your Career – The training you receive through the ISSA will give you a means to find potential career opportunities and can help get you noticed by those looking for someone to join their team. The ISSA sponsors many meetings and conferences that you can attend in order to earn CPEs for various certifications.
  • Learn for a Lifetime – The annual conference and chapter meetings are vital educational and professional resources that provide in-depth and timely information about the information security industry. Meetings and events can help you develop skills and solve problems. In addition to comprehensive workshops, seminars and knowledgeable guest speakers, there are presentations on new technologies. ISSA also gives members additional discounts to security conferences.

Summary

In summary, I think that joining ISSA is worth every penny, especially if you want to progress from beginner to practitioner to expert. It’s some of the best money you can spend in terms of ROI for growing your knowledge and your reputation in the community.

 

Is it Possible to Identify all Risks Associated with a Project, Program or System?

How good is risk assessment? Can a risk assessment actually identify all the risks that might plague a particular project, program or system? The short answer is no, not entirely.

Since humans became sentient and gained the ability to reason, we have been using our logical ability to attempt to see into the future and determine what may be coming next. We see what is going on around us, we remember what has happened in the past, we learn what others have experienced and we use that information as our guide to calculating the future. And that paradigm has served us well, generally speaking. We have the logical ability to avoid previously made mistakes and to predict future trends pretty well. However, we never get it 100% right. It is a truism that every system ever designed to protect ourselves and our assets has been defeated sooner or later. That is why a risk engineer will never tell you that their security measures will provide you with a zero-risk outcome. All you can do is lessen risk as much as possible.

One reason for this is an imperfect understanding of all the factors that contribute to risk for any given system or situation. These factors include understanding exactly what we are attempting to protect, understanding threats that menace the asset, understanding mechanisms that we have in place to protect the asset and understanding weaknesses that those threats may be able to exploit to defeat our protection mechanisms. If any one of these factors is imperfectly understood and integrated with the other factors involved, risk cannot be wholly eliminated.

Understanding what we are trying to protect is usually the easiest factor to deal with, especially if it is something simple like money or our home. However, even this task can become daunting when you are trying to fully understand something as complex as a software application or a computer network. These sorts of things are often composed of parts that we ourselves have not constructed, such as standard bits of code or networking devices that we simply employ in our bigger design but do not completely understand.

Understanding threats that menace our assets is more difficult. We are pretty good at protecting ourselves against threats that have been employed by attackers before. But the problem lies in innovative threats that are entirely new or that are novel uses and combinations of previously identified threats. These are the big reasons why we are always playing catchup with attackers.

Understanding the mechanisms we have in place to protect our assets is another area we can accomplish fairly well, but even this factor is often imperfectly understood. For example, how many of you have purchased a security software package to protect your network, but then have trouble getting it to work to its greatest effect because your team doesn’t have a handle on all of its complexities? We have seen this often in our work.

Finally, understanding weaknesses in our protection mechanisms may be the hardest factor of all to deal with. Often, security vulnerabilities go unrecognized until some clever attacker comes up with a zero-day exploit to take advantage of them. Or sometimes simple vulnerabilities seem easy to protect against until someone figures out that you can string a few of them together to effect a big compromise.

So, to get the most out of risk assessment, you need to gain the greatest understanding possible of all the factors that make up risk. In addition, you need to guard against complacency and ensure that you are not only protecting your assets to the greatest extent your ability and budget will allow, but you need to be prepared for those times that your efforts fail and security compromise does occur.

Why Penetration Testing Should Accompany Vulnerability Assessment

Twenty years ago, the world of network security was a whole different ballgame. At that time, the big threat was external attackers making their way onto your network and wreaking havoc. As hard as it is to believe now, many businesses and organizations did not even employ firewalls on their networks at that time! The big push among network security professionals then was to ensure that everyone had good firewalls, or “network perimeter” security, in place. This is the time when vulnerability assessment of distributed computer networks became big.

Vulnerability assessment entails examining networks for weaknesses such as exposed services and misconfigurations that could be exploited by attackers to gain access to private information and systems. This type of testing was encouraged by professionals to give businesses and organizations information about the weaknesses that were actually present at the time of testing. At first, vulnerability assessment was usually only conducted against the external network (that part of the network that is visible from outside the business, usually over the Internet).

Most businesses and organizations embraced the need for firewalls and external vulnerability assessments as time progressed. This was not only because doing so made good sense, but because of regulatory requirements penned to implement modern laws such as HIPAA, GLBA and SOX. However, many did not see the need for other security studies such as internal vulnerability assessment (VA). Internal VA is like external VA, but looks for weaknesses on the internal network used by employees, partners and service providers that have been granted access and privileges to internal systems and services. The need for internal VA became increasingly important as cybercriminals found ways to worm their way into internal networks or the networks of service providers and partners. As more time passed, and network attacks increased in volume and sophistication, internal VA became more commonly performed among businesses and organizations.

Unfortunately, despite the increase in vulnerability studies, networks continued to be compromised. One of the reasons for this is the limited nature of vulnerability assessment. When a VA is performed, the assessors usually employ network scanning tools such as Nessus. The outputs of these tools show where vulnerabilities exist on the network, and even provide the consumer with recommendations for closing the security holes that were found. But they don’t go so far as to show whether those vulnerabilities can actually be exploited by attackers. These tools are also limited in that they cannot show how the network may be vulnerable to combination attacks, in which cybercriminals chain together various weaknesses (technical, procedural and configuration-related) to foment big compromises. That is where penetration testing comes into play.

Penetration testing is not automated. It requires expert network security personnel to undertake properly. In penetration testing, the assessor employs the results of vulnerability studies and their own expertise to try to actually penetrate network security mechanisms just as a real-world cybercriminal would do. Obviously, the smarter and more knowledgeable the penetration tester is, the more valid the results they obtain. And for the consumer this can be a great boon.

It is true that penetration testing costs more money than performing vulnerability studies alone. What is little appreciated is the money it can save an organization in the long run. Not only can penetration testing uncover those tricky combined attacks mentioned above, it can also reveal which vulnerabilities found during VA are not presently exploitable by attackers to any great effect. This can save organizations from spending inordinate amounts of time and money fixing useless vulnerabilities and allows them to concentrate their resources on those network flaws that present the most actual danger to the organization.

What is the Difference Between a Risk Assessment and an Audit?

Many different types of organizations and businesses are required to undertake risk assessments and audits, either to satisfy some regulatory body or to satisfy internal policy requirements. But there often are questions about why both must be undertaken each year and what the differences between them are. These processes are very different, are done for different reasons and produce very different results.

A risk assessment in reality is a way to estimate, or make “an informed guess” about the kinds and levels of risk facing just about anything. From a business perspective, you can perform a risk assessment on an individual business process, an information system, a third-party supplier, a software application or the enterprise as a whole. Risk assessments may be performed internally by company personnel, or by specialist, third-party security organizations. They can also be small-scale assessments conducted among a group of interested parties, or they can be large-scale, formal assessments that are comprehensive and fully documented. But whatever type and scale of risk assessment you are undertaking, they all share certain common characteristics.

To perform risk assessment, you first must characterize the system you wish to assess. For example, you may wish to assess the risk to the organization of implementing a new software application. “Characterizing,” in this case, means learning everything you can about the system and what is going to be entailed with installing it, maintaining it, training personnel to use it, how it connects to other systems, etc.

Once you have this information in hand, the next step is to find out what threats and vulnerabilities to the application exist or may appear in the near future. To do this, most organizations look to government and private organizations that track threats and vulnerabilities and rate them for severity, such as DHS, CERT, Cisco or SAP. In addition, organizations look to similar organizations and user groups to learn what threats they have experienced and what vulnerabilities they have found when implementing the software application in question.

The next steps in risk calculation are ascertaining the probability that the threats and vulnerabilities found in the previous steps may actually occur, and the impacts on the organization if they do. The final step is then to take into account the security controls that the organization has in place and the effect these countermeasures might have in preventing attackers from actually compromising the system. Thus, the formula for calculating risk is (threats x vulnerabilities x probability of occurrence x impact)/countermeasures in place = risk.
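
To make the arithmetic concrete, here is a minimal sketch of that formula in Python. The 1-to-10 scoring scale and the sample values are illustrative assumptions only; real methodologies define their own scales and weightings.

```python
# Minimal sketch of the risk formula described above:
# risk = (threats x vulnerabilities x probability x impact) / countermeasures
# The scoring scale and sample values below are illustrative assumptions.

def calculate_risk(threats: float, vulnerabilities: float,
                   probability: float, impact: float,
                   countermeasures: float) -> float:
    if countermeasures <= 0:
        raise ValueError("countermeasures must be a positive score")
    return (threats * vulnerabilities * probability * impact) / countermeasures

# Example: moderate threat level, several known vulnerabilities, a likely
# occurrence, high impact, and partial countermeasure coverage.
print(calculate_risk(threats=5, vulnerabilities=6, probability=0.7,
                     impact=8, countermeasures=4))
```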

Looking at the above, it is obvious that there is much room for error in a risk calculation. You might not be able to find all the threats against the application, nor may you be able to determine all the vulnerabilities that exist. Probability of occurrence is also just an estimate, and even impact on the organization may not be fully understood. That is why I said that risk assessment is really just an estimate or educated guess. Audit, on the other hand, is something entirely different.

The goal of an audit is to ascertain whether an organization is effectively implementing and adhering to a documented quality system. In other words, an audit examines written policies and processes, and records of how they are actually being implemented, to see if the organization is following the rules and whether the processes it follows are effective. Auditors should be disinterested third-party professionals, and in the case of IT audits they are usually CPAs.

Most often, such as in the case of an audit by a regulatory body, a group of auditors will come on-site to the organization and start the process of records examination and interviews with personnel. This is an exhaustive process and contains little or no guesswork. Audits can be limited, such as an audit of an accounting system, or can look at all the business practices of an organization. You can even have an audit done to test the quality and effectiveness of your risk assessment and risk management processes. This is probably where some of the confusion between the two arises. Although both may be mandated for a single organization, they remain very different processes.

Three Old School Attacks That Still Cause Trouble

Throughout the last several months, the MSI team has been performing some old-school types of attacks in our penetration testing work. Astoundingly, these “ancient” forms of hacking attacks are still yielding high levels of return. Using these tactics from the early days of the hacking community, we’ve managed to steal amazing amounts of data.

Dumpster Diving

Lots of confidential data still ends up in the trash. If you’re lucky enough to find a dumpster with sensitive information inside it, then you can get access to that data without having to break into any systems or networks. This is one of the most common ways for hackers to gain access to valuable data and intellectual property.

And, we’ve seen plenty of it. PII, PHI, employee data, mergers and acquisitions information and a whole lot of intellectual property are still turning up in our team’s testing. Even with corporate shred containers scattered about (which you should have), many sensitive documents still end up in the trash.

The best we’ve seen? A document with a plethora of sensitive data in it, generated by a corporate attorney, with a post-it still attached to it that says “Please shred!”. All we can say is, awareness is the key to mitigating this one.

Compromising Voicemail Boxes

It’s 2021, and yet, 1987 called and wants their hack back. Our team is still compromising voicemail boxes with ease. Most are protected by simple 4-digit codes, and even then, the majority of those codes fall into a short “easy pickings” list. PIN lockouts after repeated bad attempts remain almost unheard of, and it’s simply astounding what you can learn from owning some corporate voicemails.

If you haven’t had your voicemail system audited recently, now might be a good time to talk about it. Not only can it lead to exposure of a variety of confidential information, credentials and customer data, but in many cases, it can also lead to toll fraud and significantly increased telecomm charges.

Our best story here? Compromising a voicemail box for a customer service rep, where thanks to COVID, they were working from home. We changed the message to ask for callers to leave their account information as a part of their support request. Lo and behold, an easy way to harvest that data. How long would it take you to notice this kind of attack?

Wardialing & Dial-up Compromises

Remember dial-up? Our team still loves to play with the “beauty of the baud”, so to speak. You’d be amazed how many companies still have modems attached to critical systems and exposed to the world via the phone network. Routers, industrial automation, PBX remote management and critical ICS systems all abound in the dial-up world. Many have simple logins and passwords, but some don’t even have that anymore.

In addition, VoIP and cloud technologies were expanded years ago to include modern war dialing tools. Hunting for dial-ups remains easy, cheap and effective.

What’s worse? If the attacker “gets lucky”, they can find a loose dial-up system that is network connected on the other side, making it easy to bridge a dial-up compromise into network access. The next thing the penetration testing team knows, it’s “raining shells”, so to speak.

When was the last time you audited your dial-up space, or went looking for modems? Many remote vendor support agreements still contain these types of connections. Pay special attention to remote support for MPLS and telecomm circuits. We’ve found a lot of this equipment with dial-ups in place for inbound tech support when a circuit fails.

Need a war dial or some dial-up testing? Give us a call. We love it.

Give some thought to old-school attacks. Penetration testers with experience in these areas may have some grey hair, but you’d likely be surprised how much bite these long-in-the-tooth exploits still have!