Optimizing DNS and URL Request Logging

Organizations aiming to enhance their cybersecurity posture should consider optimizing their processes around DNS and URL request logging and review. This task is crucial for identifying, mitigating, and preventing cyber threats in an increasingly interconnected digital landscape. Here’s a practical guide to help organizations streamline these processes effectively.

 1. Establish Clear Logging Policies
Define what data should be collected from DNS and URL requests. Policies should address the scope of logging, retention periods, and privacy considerations, ensuring compliance with relevant laws and regulations like GDPR.
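
To make such a policy enforceable by tooling, it helps to capture it in machine-readable form. Below is a minimal sketch in Python; the field names and values are illustrative assumptions, not a standard schema.

```python
# Illustrative only: a machine-readable DNS/URL logging policy.
# Field names and values are assumptions for this sketch, not a standard.
DNS_URL_LOGGING_POLICY = {
    "scope": ["dns_queries", "dns_responses", "http_urls"],
    "fields": ["timestamp", "client_ip", "query_or_url", "response_code"],
    "retention_days": 90,          # align with legal/GDPR review
    "mask_client_identity": True,  # pseudonymize where privacy rules require it
    "restricted_access_roles": ["secops", "auditor"],
}

def is_retention_compliant(age_days: int) -> bool:
    """Return True if a log record is still within the retention window."""
    return age_days <= DNS_URL_LOGGING_POLICY["retention_days"]
```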

 2. Leverage Automated Tools for Data Collection
Utilize advanced logging tools that automate the collection of DNS and URL request data. These tools should not only capture the requests but also the responses, timestamps, and the initiating device’s identity. Integration with existing cybersecurity tools can enhance visibility and threat detection capabilities.
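
As a concrete illustration of automated collection, here is a minimal Python sketch that parses a dnsmasq-style resolver log line into a structured record. Log formats vary widely by resolver, so treat the regular expression as an assumption to adapt, not a universal parser.

```python
import re
from typing import Optional

# A dnsmasq-style query line, e.g.:
#   "Jan  3 10:15:02 dnsmasq[1234]: query[A] example.com from 192.168.1.50"
# Formats vary by resolver; treat this pattern as an assumption for the sketch.
QUERY_RE = re.compile(
    r"query\[(?P<qtype>[A-Z]+)\]\s+(?P<domain>\S+)\s+from\s+(?P<client>\S+)"
)

def parse_dns_query(line: str) -> Optional[dict]:
    """Extract query type, domain, and client IP from one resolver log line."""
    m = QUERY_RE.search(line)
    return m.groupdict() if m else None

print(parse_dns_query(
    "Jan  3 10:15:02 dnsmasq[1234]: query[A] example.com from 192.168.1.50"
))
```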

 3. Implement Real-time Monitoring and Alerts
Set up real-time monitoring systems to analyze DNS and URL request logs for unusual patterns or malicious activities. Automated alerts can expedite the response to potential threats, minimizing the risk of significant damage.
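
One simple pattern for real-time alerting is a sliding-window rate check per client. The sketch below is a minimal illustration; the window size and threshold are arbitrary assumptions that should be tuned to your environment's observed baseline.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Sketch: alert when a single client issues an unusual burst of DNS queries
# in a short window. The 100-queries-per-60-seconds threshold is an
# arbitrary assumption; tune it to your environment.
WINDOW_SECONDS = 60
THRESHOLD = 100
recent = defaultdict(deque)  # client_ip -> timestamps of recent queries

def record_query(client_ip: str, now: Optional[float] = None) -> bool:
    """Record one query; return True if this client just exceeded the threshold."""
    now = time.time() if now is None else now
    window = recent[client_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop queries that have aged out of the window
    return len(window) > THRESHOLD
```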

 4. Conduct Regular Audits and Reviews
Schedule periodic audits of your DNS and URL logging processes to ensure they comply with your established policies and adapt to evolving cyber threats. Audits can help identify gaps in your logging strategy and areas for improvement.

 5. Prioritize Data Analysis and Threat Intelligence
Invest in analytics platforms that can process large volumes of log data to identify trends, anomalies, and potential threats. Incorporating threat intelligence feeds into your analysis can provide context to the data, enhancing the detection of sophisticated cyber threats.
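
A basic form of threat intelligence enrichment is matching logged domains against a feed of known-bad domains. The sketch below assumes a hypothetical one-domain-per-line feed file; real feeds (STIX/TAXII, vendor APIs) require their own parsers.

```python
# Sketch: flag logged domains that appear in a threat intelligence feed.
# The feed file and its one-domain-per-line format are assumptions.
def load_feed(path: str) -> set:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def matches(domain: str, bad_domains: set) -> bool:
    """True if the domain or any parent domain is on the feed."""
    parts = domain.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in bad_domains for i in range(len(parts)))
```

Matching parent domains as well as exact names helps catch subdomains of a flagged zone (e.g., cdn.evil.example when the feed lists evil.example).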

 6. Enhance Team Skills and Awareness
Ensure that your cybersecurity team has the necessary skills to manage and analyze DNS and URL logs effectively. Regular training sessions can keep the team updated on the latest threat landscapes and analysis techniques.

 7. Foster Collaboration with External Partners
Collaborate with ISPs, cybersecurity organizations, and industry groups to share insights and intelligence on emerging threats. This cooperation can lead to a better understanding of the threat environment and more effective mitigation strategies.

 8. Streamline Incident Response with Integrated Logs
Integrate DNS and URL log analysis into your incident response plan. Quick access to relevant log data during a security incident can speed up the investigation and containment efforts, reducing the impact on your organization.
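
For example, a small utility that pulls every log line mentioning an indicator of compromise within the incident window can shave minutes off triage. The sketch below assumes each line begins with an ISO-8601 timestamp; adjust the parsing to your log format.

```python
from datetime import datetime

# Sketch: yield every log line mentioning an indicator within an incident
# window. Assumes lines like:
#   "2024-01-03T10:15:02 query[A] evil.example from 192.168.1.50"
def lines_for_incident(path, indicator, start, end):
    with open(path) as f:
        for line in f:
            try:
                ts = datetime.fromisoformat(line.split()[0])
            except (ValueError, IndexError):
                continue  # skip lines that don't start with a timestamp
            if start <= ts <= end and indicator in line:
                yield line.rstrip()
```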

 9. Review and Adapt to Technological Advances
Continuously evaluate new logging technologies and methodologies to ensure your organization’s approach remains effective. The digital landscape and associated threats are constantly evolving, requiring adaptive logging strategies.

 10. Document and Share Best Practices
Create comprehensive documentation of your DNS and URL logging and review processes. Sharing best practices and lessons learned with peers can contribute to a stronger cybersecurity community.

By optimizing DNS and URL request logging and review processes, organizations can significantly enhance their ability to detect, investigate, and respond to cyber threats. A proactive and strategic approach to logging can be a cornerstone of a robust cybersecurity defense strategy.

* AI tools were used in the research and creation of this content.

Best Practices for DHCP Logging

As an IT and security auditor, I have seen the importance of DHCP logging in ensuring network security and troubleshooting network issues. Here are the best practices for DHCP logging that every organization should follow:

1. Enable DHCP Logging: DHCP logging should be turned on to record every event that occurs on the DHCP server. The logs should include information such as the time of the event, the IP address assigned, and the client’s MAC address.
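
As an illustration, the sketch below parses the comma-delimited audit log written by a Microsoft DHCP server (DhcpSrvLog-*.log). The column layout shown matches the common header on recent Windows Server versions, but verify it against your own server before relying on it.

```python
import csv

# Sketch: parse a Microsoft DHCP server audit log, which is comma-delimited
# after a text preamble. Column order assumed here:
#   ID, Date, Time, Description, IP Address, Host Name, MAC Address
FIELDS = ["id", "date", "time", "description", "ip", "hostname", "mac"]

def parse_dhcp_log(path):
    with open(path, newline="") as f:
        for row in csv.reader(f):
            # Real event rows begin with a numeric event ID; skip the preamble.
            if len(row) >= len(FIELDS) and row[0].strip().isdigit():
                yield dict(zip(FIELDS, (c.strip() for c in row)))
```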

2. Store DHCP Logs Securely: DHCP logs are sensitive information that should be stored in a secure location. Access to the logs should be restricted to authorized personnel only.

3. Use a Centralized Logging Solution: To manage DHCP logs, organizations should use a centralized logging solution that can handle logs from multiple DHCP servers. This makes monitoring logs, analyzing data, and detecting potential security threats easier.

4. Regularly Review DHCP Logs: Regularly reviewing DHCP logs can help detect and prevent unauthorized activities on the network. IT and security auditors should review logs to identify suspicious behavior, such as unauthorized IP and MAC addresses.
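
Building on the parser sketched above, a review script can flag leases granted to MAC addresses outside an approved inventory. The known_macs.txt file and its one-MAC-per-line format are hypothetical stand-ins for your asset inventory.

```python
# Sketch: flag DHCP leases granted to MAC addresses outside an approved
# inventory. "known_macs.txt" (one MAC per line) is a hypothetical file.
def load_known_macs(path="known_macs.txt"):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def unauthorized_leases(events, known_macs):
    """Yield parsed DHCP events whose MAC is not in the inventory."""
    for event in events:          # e.g., output of parse_dhcp_log() above
        if event["mac"].lower() not in known_macs:
            yield event
```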

5. Analyze DHCP Logs for Network Performance Issues: DHCP logs can also help identify network performance issues. By reviewing logs, IT teams can identify IP address conflicts, subnet mask issues, and other network performance problems.

6. Monitor DHCP Lease Expiration: Monitoring DHCP lease expiration is vital to ensure IP addresses are not allotted to unauthorized devices. DHCP logs can help monitor lease expiration and deactivate the leases of unauthorized devices.

7. Implement Alerting: IT and security audit teams should implement alerting options to ensure network security. By setting up alert mechanisms, they can be notified of suspicious activities such as unauthorized devices connecting to the network or DHCP problems.

8. Maintain a DHCP Log Retention Policy: An effective DHCP log retention policy should be defined to ensure logs are saved for an appropriate period. This policy helps provide historical audit trails and supports compliance with data protection laws.

Following these DHCP logging best practices will help ensure the network’s security and stability while simplifying the troubleshooting of any network issues.

Basic Logging Advice

Logging and monitoring are two important aspects of any security program. Without logging, we cannot understand how our systems operate, and without monitoring, we cannot detect anomalies and issues before they become problems.

There are many different types of logs available to us today. Some are generated automatically, while others require manual intervention. For instance, network traffic is usually logged automatically. Application logs, however, often are not; we may need to instrument our applications to create them.

Application logs provide valuable information about what happened during the execution of an application. They can show us which parts of the application were executed, what resources were used, and what was returned. Application logs are often stored in databases, allowing us to query them later.
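
As a minimal illustration of database-backed application logging, the Python sketch below writes log records into SQLite so they can be queried later. A production deployment would use a central log store; SQLite simply keeps the example self-contained.

```python
import logging, sqlite3, time

# Sketch: persist application log records to SQLite for later querying.
class SQLiteHandler(logging.Handler):
    def __init__(self, path="app_logs.db"):
        super().__init__()
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS logs (ts REAL, level TEXT, msg TEXT)"
        )

    def emit(self, record):
        self.conn.execute(
            "INSERT INTO logs VALUES (?, ?, ?)",
            (time.time(), record.levelname, record.getMessage()),
        )
        self.conn.commit()

log = logging.getLogger("app")
log.addHandler(SQLiteHandler())
log.warning("low disk space on /var")
# Later: SELECT * FROM logs WHERE level = 'WARNING' ORDER BY ts;
```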

Network logs are also useful. They allow us to see what packets were sent and received, and what responses were made. 

System logs are another type of log that we should consider. System logs record events such as system startup, shutdown, reboots, etc. They are generally stored in files, but can also be recorded in databases.

While logs are very helpful, they do have their limitations:

  • First, logs are only as good as the people and systems that generate them. If something doesn’t write a log, then we likely won’t know what happened. We might be able to reconstruct it from some other log, but having multiple layers of logs around an event is often useful.
  • Second, logs are static. Once created, they should remain unchanged. Hashing logs, storing them on read-only file systems, and other forms of log integrity controls are highly suggested (a minimal hashing sketch follows this list).
  • Third, logs are not always accurate. Sometimes, logs contain false positives, meaning that something appears to be happening when actually nothing is. False negatives are also possible, meaning we don’t alert on something we should have. Logs are part of a detection solution, not the sole basis of one.
  • Fourth, logs are not always actionable. That means we can’t easily tell from a log whether something bad has occurred or whether it is just noise. This is where log familiarity and anomaly detection come in. Sometimes reviewing logs in aggregate and looking for trends is more helpful than individual line-by-line analysis. The answer may be in looking for haystacks instead of needles…
  • Finally, logs are not always timely. They might be created after the fact, and therefore won’t help us identify a problem until much later. While good log analysis can help create proactive security through threat intelligence, logs are more powerful when analyzing events that have already happened or as sources of forensic data.
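
On the second point above, one lightweight integrity control is a hash chain over the log file: each line’s hash folds in the previous digest, so tampering with any earlier line changes the final value. A minimal sketch, assuming the final digest is stored somewhere write-protected:

```python
import hashlib

# Sketch: chain-hash a log file so later tampering is detectable. Editing
# any earlier line changes every subsequent digest, including the final one.
def chain_hash(path: str) -> str:
    digest = b"\x00" * 32  # fixed genesis value
    with open(path, "rb") as f:
        for line in f:
            digest = hashlib.sha256(digest + line).digest()
    return digest.hex()  # record this somewhere append-only/write-protected
```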

Keep all of these things in mind when considering logging tools, designing monitoring techniques or building logs for your systems and applications.

How often should security logs be reviewed?

Security logs are one of the most important components of any security program. They provide insight into how well your security program is working, and they serve as a valuable source of intelligence for incident response. However, they are not perfect; they can contain false positives and false negatives. As a result, they need to be reviewed regularly to ensure they are providing accurate information.

There are two main reasons why security log reviews are necessary. First, they allow you to identify problems before they become serious incidents. Second, they allow you to determine whether your current security measures are effective.

When reviewing logs, look for three things:

1. Incidents – These are events that indicate something has gone wrong. For example, a firewall blocking access to a website, or virus scanning software alerting you to a malware infection.

2. False Positives – These are alerts that don’t represent anything actually happening. For example, a virus scanner warning you about a file downloaded from the Internet when no infection is actually present.

3. False Negatives – These are events that should have generated an alert but were missed because of a flaw in the system. For example, a server being accessed remotely with no alarm raised.

Reviewing logs every day is recommended. If you review logs daily, you will catch issues sooner and prevent them from becoming major incidents. This should be done on a rotating basis by the security team, or supplemented with automated methods, to keep fatigue from diminishing the quality of the work.

Peer reviewing logs weekly is also recommended. It allows you to spot trends and anomalies that might otherwise go unnoticed by a single reviewer. It also gives a second set of eyes on the logs, and helps guard against fatigue or bias-based errors.

Finally, aggregated trend-based monthly reviews are recommended. These give you a chance to look back and see whether there have been any changes to your environment that could affect your security posture or represent anomalies. This is a good place to review items like logged events per day and per system, trends in specific log events, and the like. Anomalies should be investigated. Often, this level of log review is great for spotting changes to the environment and for informing threat intelligence.
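
A minimal sketch of this kind of trend review follows. It assumes records shaped like {"date": "2024-01-03", "host": "dns01", ...}, and the three-times-the-mean spike rule is an arbitrary assumption to replace with whatever baseline fits your environment.

```python
from collections import Counter

# Sketch: aggregate events per day per system for monthly trend review.
def daily_counts(records):
    """Count events keyed by (date, host)."""
    return Counter((r["date"], r["host"]) for r in records)

def flag_spikes(counts, factor=3.0):
    """Flag any (date, host) whose count exceeds `factor` times the mean."""
    if not counts:
        return []
    mean = sum(counts.values()) / len(counts)
    return [key for key, value in counts.items() if value > factor * mean]
```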

If you want to learn more about how to conduct log reviews effectively, reach out to us at info@microsolved.com. We’re happy to help!

How long should security logs be kept?

Security logs are a great source of information for incident response, forensics, and compliance purposes. However, log retention policies vary widely among organizations. Some keep logs indefinitely; others only retain them for a certain period of time. Logging practices can impact how much useful information is available after a compromise has occurred.

In general, the longer logs are retained, the better. But, there are several factors to consider when determining how long to keep logs. These include:

• What type of system is being monitored?

• Is the system mission-critical?

• Are there any legal requirements regarding retention of logs?

• Does the company have a policy regarding retention of logs? If so, does it match industry standards?

• How often do incidents occur?

• How many employees are affected by each incident?

• How many incidents are reported?

• How many hours per day are logs collected?

• How many days per week are logs collected?

It is important to understand the business needs before deciding on a retention policy. For example, if a company has a policy of retaining logs for 90 days, then it is reasonable to assume that 90 days is sufficient for the majority of situations. However, if a company has no retention policy, then it is possible that the logs could be lost forever.
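
Where a retention policy does exist, enforcement can be as simple as expiring files older than the window. The sketch below assumes a hypothetical archive directory and a 90-day window; archive to cold storage instead of deleting where legal hold or forensic needs may apply.

```python
import os, time

# Sketch: enforce a retention window on a log directory. The directory
# path and 90-day window are assumptions for this example.
RETENTION_DAYS = 90

def expire_old_logs(log_dir="/var/log/archive", retention_days=RETENTION_DAYS):
    cutoff = time.time() - retention_days * 86400
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)  # or move to cold storage before removal
```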

Logs are one of the most valuable sources of information during an investigation. It is important to ensure that the right people have access to the logs and that they are stored securely. In addition, it is important to know how long logs need to be kept.

MicroSolved provides a number of services related to logging and monitoring. We can help you create logging policies and practices, as well as design log monitoring solutions. Drop us a line at info@microsolved.com if you’d like to discuss logging and logging solutions.

Monitoring: an Absolute Necessity (but a Dirty Word Nonetheless)

There is no easier way to shut down the interest of a network security or IT administrator than to say the word “monitoring”. You can just mention the word and their faces fall as if a rancid odor had suddenly entered the room! And I can’t say that I blame them. Most organizations do not recognize the true necessity of monitoring, and so do not provide proper budgeting and staffing for the function. As a result, already fully tasked (and oftentimes inadequately prepared) IT or security personnel are saddled with the job. This not only leads to resentment, but also virtually guarantees that the job will not be performed effectively.

And when I say human monitoring is necessary if you want to achieve any type of real information security, I mean it is NECESSARY! You can have network security appliances, third party firewall monitoring, anti-virus packages, email security software, and a host of other network security mechanisms in place and it will all be for naught if real (and properly trained) human beings are not monitoring the output. Why waste all the time, money and effort you have put into your information security program by not going that last step? It’s like building a high and impenetrable wall around a fortress but leaving the last ten percent of it unbuilt because it was just too much trouble! Here are a few tips for effective security monitoring:

  • Properly illustrate the necessity for human monitoring to management, business and IT personnel; make them understand the urgency of the need. Make a logical case for the function. Tell them real-world stories about other organizations that have failed to monitor and the consequences that they suffered as a result. If you can’t accomplish this step, the rest will never fall in line.
  • Ensure that personnel assigned to monitoring tasks of all kinds are properly trained in the function; make sure they know what to look for and how to deal with what they find.
  • Automate the logging and monitoring function as much as possible. The process is difficult enough without having to perform tedious tasks that a machine or application can easily do.
  • Ensure that you have log aggregation in place, and also ensure that other network security tool output is centralized and combined with logging data. Real world cyber-attacks are often very hard to spot. Correlating events from different tools and processes can make these attacks much more apparent. 
  • Ensure that all personnel associated with information security communicate with each other. It’s difficult to effectively detect and stop attacks if the right hand doesn’t know what the left hand is doing.
  • Ensure that logging is turned on for everything on the network that is capable of it. Attacks often start on client side machines.
  • Don’t just monitor technical outputs from machines and programs, monitor access rights and the overall security program as well:
  • Monitor access accounts of all kinds on a regular basis (at least every 90 days is recommended). Ensure that user accounts are current and that users are only allocated access rights on the system that they need to perform their jobs. Ensure that you monitor third party access to the system to this same level.
  • Pay special attention to administrative level accounts. Restrict administrative access to as few personnel as possible. Configure the system to notify proper security and IT personnel when a new administrative account is added to the network; this could be a sign that a hack is in progress (a minimal detection sketch follows this list).
  • Regularly monitor policies and procedures to ensure that they are effective and meet the security goals of the organization. This should be a regular part of business continuity testing and review.
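
As one example of monitoring administrative accounts, the sketch below diffs the current admin list against the last saved snapshot and returns anything new. How you enumerate admins is platform-specific; get_current_admins() is a placeholder you would need to supply.

```python
import json, os

# Sketch: detect newly added administrative accounts by diffing the current
# admin list against the last saved snapshot.
SNAPSHOT = "admin_snapshot.json"

def get_current_admins():
    # Placeholder: e.g., parse `net localgroup Administrators` on Windows
    # or the wheel/sudo group membership on Linux.
    return {"alice", "bob"}

def new_admins():
    current = get_current_admins()
    previous = set()
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as f:
            previous = set(json.load(f))
    with open(SNAPSHOT, "w") as f:
        json.dump(sorted(current), f)
    return current - previous  # alert on anything returned here
```
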
Thanks to John Davis for writing this post.

Touchdown Task #2: Detection: How Much Malware Do You Have? #security

Our last Touchdown task was “Identify and Remove All Network, System and Application Access that does not Require Secure Authentication Credentials or Mechanisms”. This time, it is “Detection”.

When we say “detection” we are talking about detecting attackers and malware on your network.

The best and least expensive method for detecting attackers on your network is system monitoring. This is also the most labor-intensive method of detection. If you are a home user or just have a small network to manage, then this is not much of a problem. However, if your network has even a dozen servers and is at all complex, monitoring can become a daunting task. There are tools and techniques available to help with this task, though. There are log aggregators and parsers, for example. These tools take logging information from all of the entities on your system and combine it and/or perform primary analysis of system logs. But they do cost money, so on a large network some expense does creep in.

And then there are signature-based intrusion detection, intrusion prevention, and anti-virus systems. Signature-based means that these systems work by recognizing the code patterns or “signatures” of malware types that have been seen before and are included in their databases. But there are problems with these systems. First, they have to be constantly updated with new malware patterns that emerge literally every day. Second, a truly new or “zero day” bit of malware code goes unrecognized by these systems. Finally, with intrusion detection and prevention systems, there are always lots of “false positives”. These systems typically produce so many “hits” that people get tired of monitoring them. And if you don’t go through their results and winnow the grain from the chaff, they are pretty much useless.

Finally, there are anomaly detection systems. Some of these are SIEM (security information and event management) systems. These systems can work very well, but they must be tuned to your network and can be difficult to implement. Another type of anomaly detection system uses “honey pots”. A honey pot is a fake system that sits on your network and appears to be real. An attacker “foot printing” your system or running an exploit cannot tell it from the real thing. Honey pots can emulate file servers, web servers, desktops, or any other system on your network. These are particularly effective because there are virtually no false positives associated with them. If someone is messing with a honey pot, you know you have an attacker! That is exactly what our HoneyPoint Security Server does: identify real threats!
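
To make the honey pot idea concrete, here is a toy illustration (not HoneyPoint itself): a listener on a port nothing legitimate should touch, where any connection at all is logged as suspicious.

```python
import socket
from datetime import datetime

# Toy honey pot sketch: nothing legitimate should ever connect to this
# port, so every connection is, by definition, worth an alert.
def fake_service(port=2323):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:  # runs until interrupted; daemonize for real use
        conn, addr = srv.accept()
        print(f"{datetime.now().isoformat()} ALERT: touch from {addr[0]}:{addr[1]}")
        conn.close()
```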

Undertaking this Touchdown Task is relatively easy and will prove to be truly valuable in protecting your network from attack. Give us a call if you’d like us to partner with you for intrusion detection!