How to Craft Effective Prompts for Threat Detection and Log Analysis

 

Introduction

As cybersecurity professionals, log analysis is one of our most powerful tools in the fight against threats. By sifting through the vast troves of data generated by our systems, we can uncover the telltale signs of malicious activity. But with so much information to process, where do we even begin?

The key is to arm ourselves with well-crafted prompts that guide our investigations and help us zero in on the threats that matter most. In this post, we’ll explore three sample prompts you can use to supercharge your threat detection and log analysis efforts. So grab your magnifying glass, and let’s dive in!

Prompt 1: Detecting Unusual Login Activity

One common indicator of potential compromise is unusual login activity. Attackers frequently attempt to brute force their way into accounts or use stolen credentials. To spot this, try a prompt like:

Show me all failed login attempts from IP addresses that have not previously authenticated successfully to this system within the past 30 days. Include the source IP, account name, and timestamp.

This will bubble up login attempts coming from new and unfamiliar locations, which could represent an attacker trying to gain a foothold. You can further refine this by looking for excessive failed attempts to a single account or many failed attempts across numerous accounts from the same IP.
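If you prefer to prototype the same logic outside of your SIEM, a few lines of scripting will do. The sketch below is a minimal example, assuming authentication events have been exported to a hypothetical auth_events.csv file with timestamp, source_ip, account, and outcome columns; adjust the field names to match whatever your platform actually exports.

```python
import csv
from collections import Counter
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)
now = datetime.utcnow()

# Load the exported authentication events (hypothetical file and column names)
events = []
with open("auth_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        row["timestamp"] = datetime.fromisoformat(row["timestamp"])
        events.append(row)

# Source IPs that have authenticated successfully within the past 30 days
known_good_ips = {
    e["source_ip"]
    for e in events
    if e["outcome"] == "success" and now - e["timestamp"] <= WINDOW
}

# Failed logins from IPs with no recent successful authentication
suspects = [e for e in events
            if e["outcome"] == "failure" and e["source_ip"] not in known_good_ips]

for e in suspects:
    print(e["timestamp"], e["source_ip"], e["account"])

# Refinement: excessive failures per source IP, which may indicate brute forcing
print(Counter(e["source_ip"] for e in suspects).most_common(10))
```

The final Counter line is the "many failed attempts from the same IP" refinement; swapping in the account field gives you the per-account view.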

Prompt 2: Identifying Suspicious Process Execution

Attackers will often attempt to run malicious tools or scripts after compromising a system. You can find evidence of this by analyzing process execution logs with a prompt such as:

Show me all processes launched from temporary directories or user profile AppData directories. Include the process name, associated username, full command line, and timestamp.

Legitimate programs rarely run from these locations, so this can quickly spotlight suspicious activity. Pay special attention to scripting engines like PowerShell or command line utilities like PsExec being launched from unusual paths. Examine the full command line to understand what the process was attempting to do.
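The same check is easy to script against exported process-creation events. The sketch below is a minimal example, assuming events (for instance from Sysmon or Windows 4688 logging) have been exported to a hypothetical process_events.jsonl file with image, user, command_line, and timestamp fields; the path fragments are illustrative and worth tuning to your environment.

```python
import json

# Path fragments that legitimate software rarely launches from (illustrative; tune as needed)
SUSPICIOUS_FRAGMENTS = ("\\temp\\", "\\tmp\\", "\\appdata\\local\\", "\\appdata\\roaming\\")

with open("process_events.jsonl") as f:  # hypothetical export of process-creation events
    for line in f:
        event = json.loads(line)
        image = event.get("image", "").lower()
        if any(fragment in image for fragment in SUSPICIOUS_FRAGMENTS):
            print(event.get("timestamp"), event.get("user"), event.get("image"))
            print("    cmdline:", event.get("command_line", ""))
```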

Prompt 3: Spotting Anomalous Network Traffic

Compromised systems frequently communicate with external command and control (C2) servers to receive instructions or exfiltrate data. To detect this, try running the following prompt against network connection logs:

Show me all outbound network connections to IP addresses outside of our organization’s controlled address space. Exclude known good IPs like software update servers. Include source and destination IPs, destination port, connection duration, and total bytes transferred.

Look for long-duration connections or large data transfers to previously unseen IP addresses, especially on non-standard ports. Correlating this with the associated process can help determine if the traffic is malicious or benign.
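To prototype this outside of your network monitoring platform, Python's standard ipaddress module handles the internal/external test cleanly. The sketch below is a minimal example; the internal ranges, allowlist entry, thresholds, and hypothetical connections.csv column names (src_ip, dst_ip, dst_port, duration, bytes_out) are all placeholders to replace with your own address space and export format.

```python
import csv
import ipaddress

# Replace with your organization's actual address space and known-good destinations
INTERNAL_NETS = [ipaddress.ip_network(n)
                 for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
ALLOWLIST = {"203.0.113.50"}  # e.g., software update servers (placeholder)

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

with open("connections.csv", newline="") as f:  # hypothetical connection-log export
    for row in csv.DictReader(f):
        dst = row["dst_ip"]
        if is_internal(dst) or dst in ALLOWLIST:
            continue
        # Surface long-lived or high-volume outbound flows for analyst review
        if float(row["duration"]) > 3600 or int(row["bytes_out"]) > 50_000_000:
            print(row["src_ip"], "->", dst, row["dst_port"],
                  row["duration"], row["bytes_out"])
```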

Conclusion

Effective prompts like these are the key to unlocking the full potential of your log data for threat detection. You can quickly identify the needle in the haystack by thoughtfully constructing queries that target common attack behaviors.

But this is just the beginning. As you dig into your findings, let each answer guide you to the next question. Pivot from one data point to the next to paint a complete picture and scope the full extent of any potential compromise.

Mastering the art of prompt crafting takes practice, but the effort pays dividends. Over time, you’ll develop a robust library of questions that can be reused and adapted to fit evolving needs. So stay curious, keep honing your skills, and happy hunting!

Need More Help?

Ready to take your threat detection and log analysis skills to the next level? The experts at MicroSolved are here to help. With decades of experience on the front lines of cybersecurity, we can work with you to develop custom prompts tailored to your unique environment and risk profile. We’ll also show you how to integrate these prompts into a comprehensive threat-hunting program that proactively identifies and mitigates risks before they impact your business. Don’t wait until an attack succeeds to start asking the right questions. Contact us today at info@microsolved.com to schedule a consultation and build your defenses for tomorrow’s threats.

 

* AI tools were used as a research assistant for this content.

 

Optimizing DNS and URL Request Logging

 

Organizations aiming to enhance their cybersecurity posture should consider optimizing their processes around DNS and URL request logging and review. This task is crucial for identifying, mitigating, and preventing cyber threats in an increasingly interconnected digital landscape. Here’s a practical guide to help organizations streamline these processes effectively.

 1. Establish Clear Logging Policies
Define what data should be collected from DNS and URL requests. Policies should address the scope of logging, retention periods, and privacy considerations, ensuring compliance with relevant laws and regulations like GDPR.

 2. Leverage Automated Tools for Data Collection
Utilize advanced logging tools that automate the collection of DNS and URL request data. These tools should not only capture the requests but also the responses, timestamps, and the initiating device’s identity. Integration with existing cybersecurity tools can enhance visibility and threat detection capabilities.

 3. Implement Real-time Monitoring and Alerts
Set up real-time monitoring systems to analyze DNS and URL request logs for unusual patterns or malicious activities. Automated alerts can expedite the response to potential threats, minimizing the risk of significant damage.
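As a simple illustration of this kind of monitoring, the sketch below tails a resolver query log and flags watchlisted or newly observed domains. It assumes one query per line with the queried name as the last whitespace-separated field, which will vary by resolver, and the watchlist entries and log path are placeholders; adjust the parsing and alerting to fit your environment.

```python
import time

WATCHLIST = {"bad-domain.example", "c2.example.net"}  # placeholder indicators
seen_domains = set()

def alert(message: str) -> None:
    # Replace with email, webhook, or SIEM integration
    print("ALERT:", message)

with open("dns_queries.log") as f:      # hypothetical resolver query log
    f.seek(0, 2)                        # jump to the end of the file, like `tail -f`
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.5)             # wait for new queries to be written
            continue
        fields = line.split()
        if not fields:
            continue
        domain = fields[-1].rstrip(".").lower()   # adjust to your log's field layout
        if domain in WATCHLIST:
            alert(f"query observed for watchlisted domain {domain}")
        elif domain not in seen_domains:
            seen_domains.add(domain)    # newly observed domain; candidate for review
```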

 4. Conduct Regular Audits and Reviews
Schedule periodic audits of your DNS and URL logging processes to ensure they comply with your established policies and adapt to evolving cyber threats. Audits can help identify gaps in your logging strategy and areas for improvement.

 5. Prioritize Data Analysis and Threat Intelligence
Invest in analytics platforms that can process large volumes of log data to identify trends, anomalies, and potential threats. Incorporating threat intelligence feeds into your analysis can provide context to the data, enhancing the detection of sophisticated cyber threats.

 6. Enhance Team Skills and Awareness
Ensure that your cybersecurity team has the necessary skills to manage and analyze DNS and URL logs effectively. Regular training sessions can keep the team updated on the latest threat landscapes and analysis techniques.

 7. Foster Collaboration with External Partners
Collaborate with ISPs, cybersecurity organizations, and industry groups to share insights and intelligence on emerging threats. This cooperation can lead to a better understanding of the threat environment and more effective mitigation strategies.

 8. Streamline Incident Response with Integrated Logs
Integrate DNS and URL log analysis into your incident response plan. Quick access to relevant log data during a security incident can speed up the investigation and containment efforts, reducing the impact on your organization.

 9. Review and Adapt to Technological Advances
Continuously evaluate new logging technologies and methodologies to ensure your organization’s approach remains effective. The digital landscape and associated threats are constantly evolving, requiring adaptive logging strategies.

 10. Document and Share Best Practices
Create comprehensive documentation of your DNS and URL logging and review processes. Sharing best practices and lessons learned with peers can contribute to a stronger cybersecurity community.

By optimizing DNS and URL request logging and review processes, organizations can significantly enhance their ability to detect, investigate, and respond to cyber threats. A proactive and strategic approach to logging can be a cornerstone of a robust cybersecurity defense strategy.

 

 

* AI tools were used in the research and creation of this content.

What to Look For in a DHCP Log Security Audit

Examining the DHCP logs

In today’s ever-evolving technology landscape, information security professionals face numerous challenges in ensuring the integrity and security of network infrastructures. As servers and devices communicate within networks, one crucial element to consider is DHCP (Dynamic Host Configuration Protocol) logs. These logs provide valuable insights into network activity, aiding in identifying security issues and potential threats. Examining DHCP logs through a thorough security audit is a critical step that can help organizations pinpoint vulnerabilities and effectively mitigate risks.

Why are DHCP Logs Important?

DHCP servers play a central role in assigning IP addresses and managing network resources. By constantly logging activities, DHCP servers enable administrators to track device connections, detect unauthorized access attempts, and identify abnormal network behavior. Consequently, DHCP logs shed light on network utilization, application performance, and potential security incidents, making them a vital resource for information security professionals.

What Security Issues Can Be Identified in DHCP Logs?

When analyzing DHCP logs, security professionals should look for several key indicators of potential security concerns. These may include IP address conflicts, unauthorized IP address allocations, rogue DHCP servers, and abnormal DHCP server configurations. In some circumstances, DHCP logs can also help uncover DoS (Denial of Service) attacks, attempts to bypass network access controls, and instances of network reconnaissance.

In conclusion, conducting a comprehensive security audit of DHCP logs is an essential practice for information security professionals. By leveraging the data contained within these logs, organizations can identify and respond to potential threats, ensuring the overall security and stability of their network infrastructure. Stay tuned for our upcoming blog posts, where we will delve deeper into the crucial aspects of DHCP log analysis and its role in fortifying network defenses.

Parsing the List of Events Logged

When conducting a DHCP log security audit, information security professionals must effectively parse the list of events logged to extract valuable insights and identify potential security issues.

To parse the logs and turn them into easily examined data, obtain the log files from the DHCP server. These log files are typically stored in a default logging path specified in the server parameters. Once acquired, the logs can be examined using various tools, including the server management console or event log viewer.

Begin by analyzing the log entries for critical events such as IP address conflicts, unauthorized IP address allocations, and abnormal DHCP server configurations. Look for any indications of rogue DHCP servers, as they can pose a significant security risk.

Furthermore, pay close attention to entries related to network reconnaissance, attempts to bypass network access controls, and DoS attacks. These events can potentially reveal targeted attacks or malicious activities within the network.

By effectively parsing the list of events logged, information security professionals can uncover potential security issues, identify malicious activities, and take necessary measures to mitigate risks and protect the network infrastructure. It is crucial to remain vigilant and regularly conduct DHCP log audits to ensure the ongoing security of the network.
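To make the parsing step concrete: on Windows DHCP servers, the audit log is typically a comma-separated text file (named DhcpSrvLog-<Day>.log by default), which makes it straightforward to process with standard tooling. The sketch below is a minimal example assuming that format; the event-ID descriptions shown are commonly documented values, but verify them against your own server's documentation, and adapt the parsing if you run a different DHCP implementation.

```python
import csv

# Event IDs of interest in the Microsoft DHCP audit log format
# (verify these against your server's documentation before relying on them)
EVENTS_OF_INTEREST = {
    "10": "New lease",
    "11": "Lease renewed",
    "13": "IP address conflict detected",
    "15": "Lease request denied",
}

with open("DhcpSrvLog-Mon.log", newline="") as f:   # default audit log naming pattern
    for row in csv.reader(f):
        if not row or not row[0].strip().isdigit():
            continue                                # skip the explanatory header block
        event_id, date, time_, desc, ip, host, mac = (row + [""] * 7)[:7]
        if event_id in EVENTS_OF_INTEREST:
            print(f"{date} {time_}  {EVENTS_OF_INTEREST[event_id]:28} {ip:15} {host} {mac}")
```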

Heuristics that Represent Malicious Behaviors

When conducting a DHCP log security audit, information security professionals should look for specific heuristics representing potentially malicious behaviors. These heuristics can help identify security issues and prevent potential threats. It’s essential to understand what these heuristics mean and how to investigate them further.

Some examples of potentially malicious DHCP log events include:

1. Multiple DHCP Server Responses: This occurs when multiple devices on the network respond to DHCP requests, indicating the presence of rogue DHCP servers. Investigate the IP addresses associated with these responses to identify the unauthorized server and mitigate the security risk.

2. IP Address Pool Exhaustion: This event indicates that all available IP addresses in a subnet have been allocated or exhausted. It could suggest an unauthorized device or an unexpected influx of devices on the network. Investigate the cause and take appropriate actions to address the issue.

3. Unusual DHCP Lease Durations: DHCP lease durations outside the normal range can be suspicious. Short lease durations may indicate an attacker attempting to maintain control over an IP address. Long lease durations could suggest an attempt to evade IP address tracking. Investigate these events to identify any potential malicious activities.
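The third heuristic translates directly into a small check. The sketch below is a minimal example, assuming lease records have already been collected into Python dictionaries with hypothetical ip, mac, start, and end fields; the thresholds are placeholders to tune against your scopes' configured lease times.

```python
from datetime import datetime

# Hypothetical lease records collected from DHCP logs or the server's lease table
leases = [
    {"ip": "10.0.5.20", "mac": "aa:bb:cc:dd:ee:01",
     "start": "2024-05-01T08:00:00", "end": "2024-05-01T08:05:00"},
    {"ip": "10.0.5.21", "mac": "aa:bb:cc:dd:ee:02",
     "start": "2024-05-01T08:00:00", "end": "2024-05-09T08:00:00"},
]

MIN_HOURS, MAX_HOURS = 1, 72   # tune to your environment's normal lease duration

for lease in leases:
    hours = (datetime.fromisoformat(lease["end"]) -
             datetime.fromisoformat(lease["start"])).total_seconds() / 3600
    if hours < MIN_HOURS or hours > MAX_HOURS:
        print(f"Unusual lease duration ({hours:.1f}h) for {lease['ip']} ({lease['mac']})")
```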

Summary

A DHCP log security audit is crucial for information security professionals to detect and mitigate potential threats within their network. By analyzing DHCP log events, security teams can uncover malicious activities and take appropriate actions to protect their systems.

In this audit, several DHCP log events should be closely examined. One such event is multiple DHCP server responses, indicating the presence of rogue DHCP servers. Investigating the IP addresses associated with these responses can help identify unauthorized servers and address the security risk.

Another event that requires attention is IP address pool exhaustion. This event suggests the allocation of all available IP addresses in a subnet or an unexpected increase in devices on the network. Identifying the cause of this occurrence is vital to mitigate any potential security threats.

Unusual DHCP lease durations are also worth investigating. Short lease durations may suggest an attacker’s attempt to maintain control over an IP address, while long lease durations could indicate an effort to evade IP address tracking.

By conducting a thorough DHCP log security audit, security teams can proactively protect their networks from unauthorized devices, rogue servers, and potential malicious activities. Monitoring and analyzing DHCP log events should be an essential part of any organization’s overall security strategy.

* Just to let you know, we used some AI tools to gather the information for this article, and we polished it up with Grammarly to make sure it reads just right!

FAQ on Audit Log Best Practices

Q: What are audit logs?

A: Audit logs are records of all events and security-related information that occur within a system. This information is crucial for incident response, threat detection, and compliance monitoring.

Q: Why is audit log management important?

A: Audit log management is essential for every organization that wants to ensure its data security. Without audit logs, organizations would have no way of knowing who accessed what information, when or how an incident happened, or whether unauthorized users or suspicious activity were involved. Moreover, audit log management supports compliance with industry regulations and guidelines.

Q: What are the best practices for audit log management?

A: To ensure that your audit log management practices meet the CIS CSC version 8 guidelines and safeguard requirements, consider implementing the following best practices:

1. Define the audit log requirements based on industry regulations, guidelines, and best practices.

2. Establish audit policies and procedures that align with your organization’s requirements and implement them consistently across all systems and devices.
3. Secure audit logs by collecting, storing, and protecting them securely to prevent unauthorized access or tampering.
4. Monitor and review audit logs regularly for anomalies, suspicious activity, and security violations, such as unauthorized access attempts, changes to access rights, and software installations.
5. Configure audit logging settings to generate records of critical security controls, including attempts to gain unauthorized access or make unauthorized changes to the network.
6. Generate alerts in real-time for critical events, including security violations, unauthorized access attempts, changes to access rights, and software installations.
7. Regularly test audit log management controls to ensure they are effective and meet your organization’s audit log requirements.
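To make practices 5 and 6 concrete, here is a minimal sketch that watches an aggregated audit log stream and raises alerts on critical event types. It assumes events are available as a JSON-lines file with hypothetical event_type, user, and host fields; replace the print-based alert with your actual notification channel.

```python
import json

CRITICAL_EVENTS = {
    "unauthorized_access_attempt",
    "access_rights_changed",
    "software_installed",
    "security_violation",
}

def alert(event: dict) -> None:
    # Replace with your ticketing, paging, or SIEM alerting integration
    print(f"ALERT [{event['event_type']}] user={event.get('user')} host={event.get('host')}")

with open("audit_events.jsonl") as stream:   # hypothetical aggregated audit log export
    for line in stream:
        event = json.loads(line)
        if event.get("event_type") in CRITICAL_EVENTS:
            alert(event)
```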

Q: What are the benefits of following audit log management best practices?

A: Following audit log management best practices can establish a strong framework for incident response, threat detection, and compliance monitoring. This, in turn, can help safeguard against unauthorized access, malicious activity, and other security breaches, prevent legal and financial penalties, and maintain trust levels with clients and partners.

Q: How long should audit logs be kept?

A: As a general rule, storage of audit logs should include 90 days hot (meaning actively available for immediate review or alerting), 6 months warm (meaning they can be restored within hours), and two years cold (meaning they can be restored within days). However, organizations should define retention periods based on their audit log requirements and compliance regulations. [1] [2]
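For teams that script their retention housekeeping, the tiers above map naturally to a simple age check. The following is a minimal sketch, assuming archived log files sit in a hypothetical logs/ directory; actual retention and deletion decisions must follow your own policy and compliance requirements.

```python
from datetime import datetime, timedelta
from pathlib import Path

NOW = datetime.now()
HOT = timedelta(days=90)      # immediately searchable
WARM = timedelta(days=180)    # restorable within hours
COLD = timedelta(days=730)    # restorable within days

for path in Path("logs").glob("*.log"):            # hypothetical archive directory
    age = NOW - datetime.fromtimestamp(path.stat().st_mtime)
    if age <= HOT:
        tier = "hot"
    elif age <= WARM:
        tier = "warm"
    elif age <= COLD:
        tier = "cold"
    else:
        tier = "past retention window (check compliance requirements before deleting)"
    print(f"{path.name}: {tier}")
```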

*This article was written with the help of AI tools and Grammarly.

Let’s Talk About Audit Logs

CIS Control 8: Audit Log Management

Data is at the core of every business in today’s digital age. Protecting that data is of paramount importance. For this reason, the Center for Internet Security (CIS) developed the CIS Controls to provide a comprehensive framework for cybersecurity best practices.

One of these controls, CIS Control 8, focuses specifically on audit log management. This control aims to ensure that all events and security-related information are recorded and retained in an audit log for a defined period.

This article will explore the importance of audit log management as a fundamental component of any organization’s security posture. We will examine the CIS Control 8 safeguard requirements and industry-standard best practices for audit log management.

By following the procedures outlined in this article, organizations can improve their security posture, meet all CIS CSC version 8 safeguards, and ensure compliance with industry standards.

Why audit log management is essential

Audit log management is essential for every organization that wants to ensure its data security. The reason is simple: audit logs provide a comprehensive record of all events and security-related information that occurs within a system. This information is critical for incident response, threat detection, and compliance monitoring. Without audit logs, organizations would have no way of knowing who accessed what information, when or how the incident happened, or whether unauthorized users or suspicious activity occurred.

In addition to aiding in incident response and threat detection, audit log management also supports compliance with industry regulations and guidelines. Many compliance requirements mandate that organizations maintain a record of all activity that occurs on their systems. Failing to comply with these requirements can result in significant legal and financial penalties. Therefore, organizations prioritizing data security must take audit log management seriously and implement practices that meet their data security needs and safeguard requirements.

Best practices for audit log management

Audit log management is critical to an organization’s data security efforts. To ensure that your audit log management practices meet the CIS CSC version 8 guidelines and safeguard requirements, consider implementing the following best practices:

1. Define the audit log requirements: Assess the audit log requirements for your organization based on industry regulations, guidelines, and best practices. Define the data to be logged, audit events, and retention periods.

2. Establish audit policies and procedures: Develop audit policies and procedures that align with your organization’s requirements. Ensure these policies and procedures are implemented consistently across all systems and devices.

3. Secure audit logs: Audit logs should be collected, stored, and protected securely to prevent unauthorized access or tampering. Only authorized personnel should have access to audit logs.

4. Monitor and review audit logs: Regularly monitor and review audit logs for anomalies, suspicious activity, and security violations. This includes monitoring for unauthorized access attempts, changes to access rights, and software installations.

5. Configure audit logging settings: Ensure audit logs capture essential system information and user activity information. Configure audit logging settings to generate records of critical security controls, including attempts to gain unauthorized access or make unauthorized changes to the network.

6. Generate alerts: Configure the system to generate real-time alerts for critical events. This includes alerts for security violations, unauthorized access attempts, changes to access rights, and software installations.

7. Regularly test audit log management controls: Ensure audit log management controls are consistently implemented and reviewed. Conduct regular testing to ensure they are effective and meet your organization’s audit log requirements.
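As one way to exercise practice 7, the sketch below checks that every expected system is still producing log entries, assuming events are available as a JSON-lines export with hypothetical host and timestamp fields. Hosts that have gone silent are a common sign of broken, misconfigured, or tampered logging.

```python
import json
from datetime import datetime, timedelta

EXPECTED_HOSTS = {"dc01", "web01", "db01"}      # replace with your asset inventory
MAX_SILENCE = timedelta(hours=24)
now = datetime.utcnow()

last_seen = {}
with open("audit_events.jsonl") as f:           # hypothetical aggregated export
    for line in f:
        event = json.loads(line)
        ts = datetime.fromisoformat(event["timestamp"])
        host = event["host"]
        if host not in last_seen or ts > last_seen[host]:
            last_seen[host] = ts

for host in sorted(EXPECTED_HOSTS):
    ts = last_seen.get(host)
    if ts is None:
        print(f"{host}: no events received; logging may be broken or disabled")
    elif now - ts > MAX_SILENCE:
        print(f"{host}: silent since {ts}; investigate")
```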

Organizations can establish a strong framework for incident response, threat detection, and compliance monitoring by implementing these best practices for audit log management. This will help safeguard against unauthorized access, malicious activity, and other security breaches, prevent legal and financial penalties, and maintain trust levels with clients and partners.

Audit log management policies

To establish audit log management policies that meet CIS CSC version 8 guidelines and safeguard requirements, organizations should follow the following sample policy:

1. Purpose: The purpose of this policy is to establish the principles for collecting, monitoring, and auditing all system and user activity logs to ensure compliance with industry regulations, guidelines, and best practices.

2. Scope: This policy applies to all employees, contractors, equipment, and facilities within the organization, including all workstations, servers, and network devices used in processing or storing sensitive or confidential information.

3. Policy:

– All computer systems and devices must generate audit logs that capture specified audit events, including user logins and accesses, system configuration changes, application accesses and modifications, and other system events necessary for detecting security violations, troubleshooting, and compliance monitoring.

– Audit logs must be generated in real-time and stored in a secure, centralized location that is inaccessible to unauthorized users.

– The retention period for audit logs must be at least 90 days, or longer if law or regulation requires.

– Only authorized personnel with appropriate access rights and clearances can view audit logs. Access to audit logs must be audited and reviewed regularly by the Information Security team.

– Audit logs must be reviewed regularly to identify patterns of suspicious activity, security violations, or potential security breaches. Any unauthorized access or security violation detected in the audit logs must be reported immediately to the Information Security team.

– Audit log management controls and procedures must be tested periodically to ensure effectiveness and compliance with CIS CSC version 8 guidelines and safeguard requirements.

4. Enforcement: Failure to comply with this policy may result in disciplinary action, up to and including termination of employment or contract. All violations must be reported to the Information Security team immediately.

By implementing the above policy, organizations can ensure they meet the audit log management standards set forth by CIS CSC version 8 guidelines and safeguard requirements. This will help organizations prevent unauthorized access, malicious activity, and data breaches, maintain compliance with industry regulations, and protect the integrity and confidentiality of sensitive or confidential information.

Audit log management procedures

Here are the audit log management procedures that establish best practices for performing the work of this control:

I. Initial Setup

– Determine which audit events will be captured in the logs based on industry regulations, guidelines, and best practices.

– Configure all computer systems and devices to capture the specified audit events in the logs.

– Establish a secure, centralized location for storing the logs that is inaccessible to unauthorized users.

II. Ongoing Operations

– Set the logs to generate in real time.

– Monitor the logs regularly to detect security violations, troubleshoot, and monitor compliance.

– Ensure only authorized personnel with appropriate access rights can view the logs.

– Review the logs regularly to identify patterns of suspicious activity, security violations, or potential security breaches.

– Immediately report any unauthorized access or security violation detected in the logs to the Information Security team.

– Retain log data for at least 90 days, or longer if required by law or regulation.

III. Testing and Evaluation

– Test the audit log management controls and procedures periodically.

– Ensure that all testing and evaluation are conducted in compliance with CIS CSC version 8 guidelines and safeguard requirements.

By following these audit log management procedures, organizations can establish best practices for performing the work of this control and ensure that all system and user activities are properly monitored and audited. This will help organizations maintain compliance with industry regulations, prevent unauthorized access, and protect sensitive or confidential information from data breaches.

 

*This article was written with the help of AI tools and Grammarly.

Best Practices for DHCP Logging

As an IT and security auditor, I have seen the importance of DHCP logging in ensuring network security and troubleshooting network issues. Here are the best practices for DHCP logging that every organization should follow:

 

1. Enable DHCP Logging: DHCP logging should be turned on to record every event that occurs on the DHCP server. The logs should include information such as the time of the event, the IP address assigned, and the client’s MAC address.

2. Store DHCP Logs Securely: DHCP logs are sensitive information that should be stored in a secure location. Access to the logs should be restricted to authorized personnel only.

3. Use a Centralized Logging Solution: To manage DHCP logs, organizations should use a centralized logging solution that can handle logs from multiple DHCP servers. This makes monitoring logs, analyzing data, and detecting potential security threats easier.

4. Regularly Review DHCP Logs: Regularly reviewing DHCP logs can help detect and prevent unauthorized activities on the network. IT and security auditors should review logs to identify suspicious behavior, such as unauthorized IP and MAC addresses.

5. Analyze DHCP Logs for Network Performance Issues: DHCP logs can also help identify network performance issues. By reviewing logs, IT teams can identify IP address conflicts, subnet mask issues, and other network performance problems.

6. Monitor DHCP Lease Expiration: Monitoring DHCP lease expiration is vital to ensure IP addresses are not allocated to unauthorized devices. DHCP logs can help monitor lease expirations and deactivate the leases of unauthorized devices (a minimal check is sketched after this list).

7. Implement Alerting: IT and security audit teams should implement alerting options to ensure network security. By setting up alert mechanisms, they can be notified of suspicious activities such as unauthorized devices connecting to the network or DHCP problems.

8. Maintain a DHCP Log Retention Policy: An effective DHCP log retention policy should be defined to ensure logs are retained for an appropriate period. This policy will help provide historical audit trails and comply with data protection laws.
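As referenced above, comparing leases against an authorized-device inventory is one of the simplest ways to act on items 4 and 6. The sketch below is a minimal example; the inventory set and the pre-parsed lease tuples are hypothetical stand-ins for data you would pull from your asset database and DHCP audit log.

```python
# Hypothetical authorized-device inventory (e.g., exported from your asset database)
AUTHORIZED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

# Hypothetical lease events already parsed from the DHCP audit log
lease_events = [
    ("10.0.5.20", "aa:bb:cc:dd:ee:01", "laptop-01"),
    ("10.0.5.99", "de:ad:be:ef:00:01", "unknown-host"),
]

for ip, mac, hostname in lease_events:
    if mac.lower() not in AUTHORIZED_MACS:
        # Replace print with your alerting mechanism
        print(f"Unauthorized device leased {ip}: MAC {mac} ({hostname})")
```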

 

Following these DHCP logging best practices will help ensure the network’s security and stability while simplifying the troubleshooting of any network issues.

ClawBack from MicroSolved: A Solution for Detecting Data Exposures on IT Help Forums and Support Sites

Introduction

In today’s interconnected world, the sharing of information has become a necessary aspect of both personal and professional life. However, this also increases the risk of exposing sensitive data to malicious actors. IT help forums and support sites are particularly vulnerable to such data exposures, as users inadvertently share information that can compromise their networks and systems. ClawBack from MicroSolved is a powerful tool designed to identify and mitigate these data exposures, helping organizations safeguard their sensitive information.

ClawBack: A Solution for Detecting Data Exposures

ClawBack is a data leakage detection tool developed by MicroSolved, an industry leader in information security services. It is specifically designed to scan the internet for sensitive data exposure, including IT help forums and support sites, where individuals and organizations may unwittingly disclose critical information. By utilizing cutting-edge search techniques, ClawBack can efficiently and effectively identify exposed data, enabling organizations to take appropriate action.

Key Features of ClawBack

  1. Advanced Search Algorithms: ClawBack employs sophisticated search algorithms to identify specific data types, such as personally identifiable information (PII), intellectual property, and system configuration details. This ensures that organizations can focus on addressing the most critical exposures.

  2. Comprehensive Coverage: ClawBack’s search capabilities extend beyond IT help forums and support sites. It also covers social media platforms, code repositories, and other online sources where sensitive data may be exposed.

  3. Customizable Searches: Organizations can tailor ClawBack’s search parameters to their unique needs, targeting specific keywords, internal project names, and even key/certificate shards. This customization ensures organizations can focus on the most relevant and potentially damaging exposures.

  4. Real-time Alerts: ClawBack provides real-time notifications to organizations when sensitive data is detected, allowing for prompt response and mitigation.

The Importance of Addressing Data Exposures

Organizations must recognize the importance of addressing data exposures proactively. The sensitive information disclosed on IT help forums and support sites can provide cybercriminals with the tools to infiltrate an organization’s network, steal valuable assets, and cause significant reputational damage.

ClawBack offers a proactive solution to this growing problem. Identifying and alerting organizations to potential data exposures allows them to take swift action to secure their sensitive information. This can include contacting the source of the exposure, requesting the removal of the exposed data, or initiating internal remediation processes to mitigate any potential risks.

Conclusion

In conclusion, ClawBack from MicroSolved is an invaluable tool for organizations seeking to protect their sensitive data from exposure on IT help forums and support sites. Its advanced search algorithms, comprehensive coverage, and real-time alerts enable organizations to proactively address data exposures and strengthen their security posture.

As cyber threats continue to evolve, it is essential for organizations to remain vigilant and invest in solutions like ClawBack to safeguard their valuable information. By doing so, organizations can build a robust security foundation that will help them thrive in the digital age.

Workstation Logging Best Practices

Why Workstation Logging Matters

Workstations are important components of any IT infrastructure, yet they’re also among the most overlooked. Because workstations are often seen as expendable, many organizations fail to see the value of workstation logs and how they can add to the visibility and detection capabilities of the security team. Workstations are quite likely to be early indicators of attack and malware infections, and their logs are often invaluable for identifying manual attacker behaviors and performing adequate forensics.

Organizations that don’t maintain and organize workstation logs are usually missing out on essential data and falling short of enterprise-wide visibility. This is especially true if you have a decentralized work environment. Simply enabling, configuring, and properly aggregating workstation logs can give you a huge forensic advantage. Adding real-time or near real-time log parsing and event alerting makes that advantage a superpower.

What to Log

The security events an organization captures on their workstations depend largely on industry-specific needs and relevant legal requirements. However, best practices call for several events that must be recorded and logged to ensure user accountability and to help organizations detect, understand, and recover from malicious events. These events include:

  • Authentication successes and failures for all users and services
  • Access control successes and failures for all users and services
  • Session activity, including files and applications used, especially system utilities and PowerShell, if applicable
  • Changes in user access rights or privileges
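As a starting point for turning that list into a filter, the sketch below pulls the relevant categories out of exported workstation security events. It assumes a hypothetical workstation_events.jsonl export with EventID, TimeCreated, Computer, and TargetUserName fields; the Windows Security event IDs shown are the standard ones for these categories, but confirm that your audit policy actually generates them.

```python
import json

# Windows Security event IDs covering the categories above; extend as needed
EVENTS_OF_INTEREST = {
    4624: "Logon success",
    4625: "Logon failure",
    4672: "Privileged logon",
    4688: "Process created",
    4732: "User added to local group",
}

with open("workstation_events.jsonl") as f:     # hypothetical export from your aggregator
    for line in f:
        event = json.loads(line)
        event_id = int(event.get("EventID", 0))
        if event_id in EVENTS_OF_INTEREST:
            print(event.get("TimeCreated"), event.get("Computer"),
                  EVENTS_OF_INTEREST[event_id], event.get("TargetUserName", ""))
```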

The Bottom Line

Get busy logging on workstations. Make sure the logs are properly configured, aggregated, and processed as a part of your detection capabilities. Don’t view workstation logs as throw-aways. Instead, see them as a powerful lens for early detection, forensics, and attack recovery.

Update:

Thanks to @TheTokenFemale for pointing out that the logs should be sent somewhere off the system. That is what I meant by aggregation, but to clarify: the logs should be sent, processed, and archived using a log aggregation system or toolset that includes proper chain-of-evidence handling, alerting, and heuristics. It should also store and archive the relevant logs according to best practices and legal and regulatory guidance.

Basic Logging Advice

Logging and monitoring are two important aspects of any security program. Without logging, we cannot understand how our systems operate, and without monitoring, we cannot detect anomalies and issues before they become problems.

There are many different types of logs available to us today. Some are generated automatically, while others require manual intervention. For instance, network traffic is usually logged automatically, but application logs often are not; we may need to create those ourselves.

Application logs provide valuable information about what happened during the execution of an application. They can show us which parts of the application were executed, what resources were used, and what was returned. Application logs are often stored in databases, allowing us to query them later.

Network logs are also useful. They allow us to see what packets were sent and received, and what responses were made. 

System logs are another type of log that we should consider. System logs record events such as system startup, shutdown, reboots, etc. They are generally stored in files, but can also be recorded in databases.

While logs are very helpful, they do have their limitations:

  • First, logs are only as good as the sources that generate them. If something fails to write a log entry, we likely won’t know what happened. We might be able to recover that from some other log, but having multiple layers of logging around an event is often useful.
  • Second, logs are static. Once created, they should remain unchanged. Hashing logs, storing them on read-only file systems, and other log integrity controls are highly recommended (a minimal hashing sketch follows this list).
  • Third, logs are not always accurate. Sometimes, logs contain false positives, meaning that something appears to be happening when actually nothing is. False negatives are also possible, meaning we don’t alert on something we should have. Logs are part of a detection solution, not the sole basis of one.
  • Fourth, logs are not always actionable. That means we can’t always tell from a log whether something bad has occurred or if it is just noise. This is where log familiarity and anomaly detection come in. Sometimes reviewing logs in aggregate and looking for trends is more helpful than individual line-by-line analysis. The answer may be in looking for haystacks instead of needles…
  • Finally, logs are not always timely. They might be created after the fact, and therefore won’t help us identify a problem until much later. While good log analysis can support proactive security through threat intelligence, logs are most powerful when analyzing events that have already happened or when serving as sources of forensic data.
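As promised above, here is a minimal sketch of one such integrity control: computing a SHA-256 digest of each closed log file and writing the digests to a manifest that should then be stored somewhere the logging host cannot modify. The archive/ directory and manifest filename are hypothetical placeholders.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record digests for closed log files; keep the manifest on a separate system,
# WORM storage, or another location the logging host cannot write to.
manifest = {str(p): sha256_of(p) for p in Path("archive").glob("*.log")}
Path("log_manifest.json").write_text(json.dumps(manifest, indent=2))
```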

Keep all of these things in mind when considering logging tools, designing monitoring techniques, or building logging for your systems and applications.

How often should security logs be reviewed?

Security logs are one of the most important components of any security program. They provide insight into how well your security program is working, and they serve as a valuable source of intelligence for incident response. However, they are not perfect; they can contain false positives and false negatives. As a result, they need to be reviewed regularly to ensure they are providing accurate information.

There are two main reasons why security log reviews are necessary. First, they allow you to identify problems before they become serious incidents. Second, they allow you to determine whether your current security measures are effective.

When reviewing logs, look for three things:

1. Incidents – These are events that indicate something has gone wrong. For example, a firewall blocking access to a website, or antivirus software alerting you to a malware infection.

2. False Positives – These are alerts that don’t represent anything actually happening. For example, a virus scanner warning you about a file downloaded from the Internet when no infection is actually present.

3. False Negatives – These are events that did represent something actually happening but were missed because of a flaw in the system. For example, a server being accessed remotely without any alarm being raised.

Reviewing logs every day is recommended. If you review logs daily, you will catch issues sooner and prevent them from becoming major incidents. This should be done on a rotating basis by the security team, or via automated methods, to keep fatigue from diminishing the quality of the work.

Peer reviewing logs weekly is also recommended. It allows you to spot trends and anomalies that might otherwise go unnoticed by a single reviewer. It also gives a second set of eyes on the logs, and helps guard against fatigue or bias-based errors.

Finally, aggregated trend-based monthly reviews are recommended. These give you a chance to look back and see if there have been any changes to your environment that could affect your security posture or represent anomalies. This is a good place to review items like logged events per day and per system, trends in specific log events, and the like. Anomalies should be investigated. Often, this level of log review is great for spotting changes to the environment or surfacing threat intelligence.
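One way to support that monthly, trend-based review is to script the per-system, per-day counts and flag days that deviate sharply from each system's norm. The sketch below is a minimal example, assuming a hypothetical events.jsonl export with host and ISO-formatted timestamp fields; the three-sigma threshold is a crude starting point, not a tuned detector.

```python
import json
from collections import defaultdict
from statistics import mean, pstdev

counts = defaultdict(lambda: defaultdict(int))      # host -> day -> event count

with open("events.jsonl") as f:                     # hypothetical monthly export
    for line in f:
        event = json.loads(line)
        day = event["timestamp"][:10]               # YYYY-MM-DD prefix
        counts[event["host"]][day] += 1

for host, per_day in counts.items():
    values = list(per_day.values())
    avg, spread = mean(values), pstdev(values)
    for day, n in sorted(per_day.items()):
        if spread and abs(n - avg) > 3 * spread:    # crude anomaly threshold; tune as needed
            print(f"{host} {day}: {n} events (average {avg:.0f}); investigate")
```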

If you want to learn more about how to conduct log reviews effectively, reach out to us at info@microsolved.com. We’re happy to help!