Segmenting Administrative Activities: 4 Options to Meet CIS Control 12.8

As organizations work to strengthen their cybersecurity posture, the CIS Critical Security Controls provide an excellent framework to build upon. In the latest Version 8 of the Controls, Control 12 focuses on establishing, implementing, and actively managing network devices to prevent attackers from exploiting vulnerable access points.

Within Control 12, Safeguard 12.8 specifically calls for enterprises to “segment administrative activities to dedicated machines, accounts, and networks.” This is critical for reducing the risk of credential compromise and lateral movement if an admin account is breached. But how exactly can organizations go about meeting this Control? Let’s look at four potential approaches.

 1. Dedicated Admin Workstations

One straightforward option is to provision separate physical workstations that are used exclusively for administrative tasks. These admin workstations should be hardened with strict security configurations and have limited network access. Ideally, they would have no direct internet connectivity and be logically separated from the primary corporate network.

Activities like managing network devices, administering user accounts, and accessing sensitive databases should only be performed from these dedicated and secured admin workstations. This greatly reduces the attack surface and opportunity for threats to compromise admin credentials.[1][2][3][8]
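To make the “limited network access” requirement concrete, here is a minimal sketch (assuming Python 3 on the admin workstation; the probe targets are illustrative, not prescriptive) that checks whether the machine can reach arbitrary internet hosts and flags any successful outbound connection as a policy violation:

```python
import socket

# Hosts an isolated admin workstation should NOT be able to reach.
EXTERNAL_PROBES = [("8.8.8.8", 53), ("1.1.1.1", 443)]

def check_internet_isolation(timeout: float = 3.0) -> bool:
    """Return True if the workstation appears isolated from the internet."""
    isolated = True
    for host, port in EXTERNAL_PROBES:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"POLICY VIOLATION: outbound connection to {host}:{port} succeeded")
                isolated = False
        except OSError:
            print(f"OK: {host}:{port} unreachable, as expected")
    return isolated

if __name__ == "__main__":
    print("Workstation isolated:", check_internet_isolation())
```

A scheduled check like this is no substitute for egress filtering at the network layer, but it provides a quick, auditable signal if the workstation’s isolation ever drifts.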

 2. Privileged Access Workstations (PAWs)

A similar but more formalized approach is to implement Privileged Access Workstations (PAWs). These are specially-configured systems that admins must log into to perform their privileged duties.

PAWs enforce strong authentication requirements, have limited internet access, and are tightly restricted in what applications and activities are allowed. They are typically used for the most sensitive admin functions like domain administration, server management, and access to confidential data. Microsoft provides extensive guidance on designing and deploying PAWs.[2][8]
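As a rough illustration of the “tightly restricted applications” idea, the sketch below (a hypothetical allow-list, and it assumes the third-party psutil package is installed) reports any running process on the PAW that is not on an approved list:

```python
import psutil  # third-party: pip install psutil

# Hypothetical allow-list of executables permitted on this PAW.
ALLOWED = {"explorer.exe", "mmc.exe", "powershell.exe", "ssh.exe", "svchost.exe"}

def unexpected_processes():
    """Yield (pid, name) for running processes not on the allow-list."""
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name and name not in ALLOWED:
            yield proc.pid, name

if __name__ == "__main__":
    for pid, name in unexpected_processes():
        print(f"Not on allow-list: pid={pid} name={name}")
```

Real deployments would enforce the allow-list with application control tooling rather than a reporting script, but the sketch shows the basic concept.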

 3. Jump Servers / Bastion Hosts

Another architectural option to segment administrative activities is to deploy hardened “jump servers” or “bastion hosts.” These are intermediary servers that admins must first connect to before accessing infrastructure systems and devices.

All administrative connections and activities are proxied through these closely monitored jump servers. Admins authenticate to the jump host first, then connect to target devices from there. This allows strict control and audit of administrative access without directly exposing infrastructure to potential threats.[3]
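As a simple example of forcing administrative sessions through the bastion, the sketch below wraps SSH so that every admin connection is routed via the jump host and logged locally. It assumes an OpenSSH client that supports the -J (ProxyJump) option; the host names are hypothetical:

```python
import getpass
import logging
import subprocess

logging.basicConfig(filename="admin_sessions.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

JUMP_HOST = "jump.example.internal"   # hypothetical bastion host

def admin_ssh(target: str) -> int:
    """Open an SSH session to `target`, forced through the jump host, and log it."""
    logging.info("user=%s target=%s via=%s", getpass.getuser(), target, JUMP_HOST)
    # -J (ProxyJump) routes the connection through the bastion host.
    return subprocess.call(["ssh", "-J", JUMP_HOST, target])

if __name__ == "__main__":
    admin_ssh("core-switch-01.example.internal")
```

The same routing can be achieved with a ProxyJump entry in the SSH client configuration; centralized session logging and recording would normally live on the jump host itself.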

 4. Virtual Admin Environments

Virtualization and cloud technologies provide additional opportunities to segment admin activities. Organizations can provision logically isolated virtual networks, VPCs, virtual desktops, and other environments dedicated to administrative functions.

These virtual admin environments allow strict control over configurations, access, and permissions. They can be dynamically provisioned and decommissioned as needed. Admin activities like server management, network device configuration, and database administration can be performed within these controlled virtual environments, separated from general user access and systems.[8]
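As one hedged example of what dynamic provisioning can look like, the sketch below uses the AWS boto3 SDK (assuming AWS credentials are already configured; the CIDR ranges and tags are illustrative) to stand up a small, dedicated admin VPC with no internet gateway attached:

```python
import boto3  # assumes AWS credentials and region are already configured

ec2 = boto3.client("ec2")

# Create a small, dedicated admin VPC. No internet gateway is attached,
# so it is reachable only through whatever private connectivity is added later.
vpc = ec2.create_vpc(CidrBlock="10.99.0.0/24")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "admin-only-vpc"}])

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.99.0.0/26")
print("Admin VPC:", vpc_id, "subnet:", subnet["Subnet"]["SubnetId"])
```

Any infrastructure-as-code tool or cloud provider can achieve the same isolation; the point is that the admin environment exists as its own logically separate network rather than a slice of general-purpose address space.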

 Choosing the Right Approach

The optimal approach to meeting CIS Control 12.8 will depend on each organization’s unique network architecture, admin use cases, and risk considerations. Larger enterprises may utilize a combination of PAWs, jump servers, and virtual admin networks, while a smaller organization may find that a simple deployment of dedicated admin workstations meets their needs.

The key is to analyze administrative activities, determine appropriate segmentation, and enforce strict controls around privileged access. By doing so, organizations can significantly mitigate the risk and potential impact of compromised admin credentials.

Proper administrative segmentation is just one of many important security considerations covered in the CIS Critical Security Controls. But it’s an area where many organizations have room for improvement. Assessing current admin practices and determining how to further isolate and protect those privileged functions is well worth the effort to strengthen your overall security posture.

Citations:
[1] https://ppl-ai-file-upload.s3.amazonaws.com/web/direct-files/13705336/b11ecb11-ff34-4836-80b0-0b302497c10d/advice.pdf
[2] https://www.swarthmore.edu/writing/how-do-i-write-a-compelling-conclusion
[3] https://paper.bobylive.com/Security/CIS/CIS_Controls_v8_Guide.pdf
[4] https://www.masterclass.com/articles/how-to-write-a-conclusion
[5] https://www.cisecurity.org/controls/v8
[6] https://www.cisecurity.org/controls/cis-controls-navigator
[7] https://www.armis.com/blog/see-whats-new-in-cis-critical-security-control-12-version-8/
[8] https://www.youtube.com/watch?v=MaQTv8bItLk&t=78
[9] https://sprinto.com/blog/cis-controls/
[10] https://writingcenter.unc.edu/tips-and-tools/introductions/
[11] https://writingcenter.unc.edu/tips-and-tools/conclusions/
[12] https://www.mytutor.co.uk/blog/students/craft-excellent-conclusion/
[13] https://www.semrush.com/goodcontent/content-marketing-blog/how-to-write-an-introduction/
[14] https://blog.hubspot.com/marketing/write-stronger-introductions
[15] https://www.linkedin.com/advice/0/what-best-practices-writing-introduction-engages
[16] https://www.wordstream.com/blog/ws/2017/09/08/how-to-write-an-introduction
[17] https://www.reddit.com/r/writing/comments/1rjdyj/tips_on_writing_a_great_essay_conclusion/
[18] https://controls-assessment-specification.readthedocs.io/en/stable/control-12/index.html
[19] https://writingcenter.fas.harvard.edu/conclusions
[20] https://owl.purdue.edu/owl/general_writing/common_writing_assignments/argument_papers/conclusions.html

 

* AI tools were used as a research assistant for this content.

Managing Risks Associated with Model Manipulation and Attacks in Generative AI Tools

In the rapidly evolving landscape of artificial intelligence (AI), one area that has garnered significant attention is the security risks associated with model manipulation and attacks. As organizations increasingly adopt generative AI tools, understanding and mitigating these risks becomes paramount.

1. Adversarial Attacks:

Example: Consider a facial recognition system. An attacker can subtly alter an image, making it unrecognizable to the AI model but still recognizable to the human eye. This can lead to unauthorized access or false rejections.

Mitigation Strategies:

Robust Model Training: Incorporate adversarial examples in the training data to make the model more resilient (see the adversarial-training sketch below).
Real-time Monitoring: Implement continuous monitoring to detect and respond to unusual patterns.
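To illustrate the robust model training point above, here is a minimal adversarial-training sketch using the Fast Gradient Sign Method (FGSM). It assumes PyTorch and a generic image classifier; the epsilon value and the 50/50 clean/adversarial mix are arbitrary illustrative choices, not recommended settings:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Generate a Fast Gradient Sign Method (FGSM) adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach().clamp(0, 1)

def train_step(model, optimizer, x, y, eps=0.03):
    """One training step on a 50/50 mix of clean and adversarial inputs."""
    model.train()
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```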

2. Model Stealing:

Example: A competitor might send crafted queries to a proprietary model hosted online and use the responses to train a substitute model, effectively bypassing intellectual property protections.

Mitigation Strategies:

Rate Limiting: Implement restrictions on the number of queries from a single source (a minimal sketch follows below).
Query Obfuscation: Randomize responses slightly to make it harder to reverse-engineer the model.
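As a minimal sketch of the rate limiting idea above, here is a simple in-memory sliding-window limiter keyed by client identifier. A production service would typically enforce this at an API gateway or with a shared store such as Redis, so treat this as illustrative only:

```python
import time
from collections import defaultdict, deque

class QueryRateLimiter:
    """Very small sliding-window rate limiter, keyed by client identifier."""

    def __init__(self, max_queries: int = 100, window_seconds: int = 3600):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)

    def allow(self, client_id: str) -> bool:
        now = time.time()
        q = self.history[client_id]
        # Drop timestamps that have fallen outside the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False
        q.append(now)
        return True

limiter = QueryRateLimiter(max_queries=100, window_seconds=3600)
if not limiter.allow("client-42"):
    print("429: query budget exceeded for this client")
```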

Policies and Processes to Manage Risks:

1. Security Policy Framework:

Define: Clearly outline the acceptable use of AI models and the responsibilities of various stakeholders.
Implement: Enforce security controls through technical measures and regular audits.

2. Incident Response Plan:

Prepare: Develop a comprehensive plan to respond to potential attacks, including reporting mechanisms and escalation procedures.
Test: Regularly test the plan through simulated exercises to ensure effectiveness.

3. Regular Training and Awareness:

Educate: Conduct regular training sessions for staff to understand the risks and their role in mitigating them.
Update: Keep abreast of the latest threats and countermeasures through continuous learning.

4. Collaboration with Industry and Regulators:

Engage: Collaborate with industry peers, academia, and regulators to share knowledge and best practices.
Comply: Ensure alignment with legal and regulatory requirements related to AI and cybersecurity.

Conclusion:

Model manipulation and attacks in generative AI tools present real and evolving challenges. Organizations must adopt a proactive and layered approach, combining technical measures with robust policies and continuous education. By fostering a culture of security and collaboration, we can navigate the complexities of this dynamic field and harness the power of AI responsibly and securely.

* Just to let you know, we used some AI tools to gather the information for this article, and we polished it up with Grammarly to make sure it reads just right!

High-Level Project Plan for CIS CSC Implementation

Overview:

Implementing the controls and safeguards outlined in the Center for Internet Security (CIS) Critical Security Controls (CSC) Version 8 is crucial for organizations to establish a robust cybersecurity framework. This article provides a concise project plan for implementing these controls, briefly describing the processes and steps involved.

Plan:

1. Establish a Governance Structure:

– Define roles and responsibilities for key stakeholders.

– Develop a governance framework for the implementation project.

– Create a project charter to outline the project’s scope, objectives, and timelines.

2. Conduct a Baseline Assessment:

– Perform a comprehensive assessment of the organization’s existing security posture.

– Identify gaps between the current state and the requirements of CIS CSC Version 8.

– Prioritize the controls that need immediate attention based on the assessment results.

3. Develop an Implementation Roadmap:

– Define a clear timeline for implementing each control, based on priority.

– Identify the necessary resources, including personnel, tools, and technologies.

– Establish milestones for monitoring progress throughout the implementation process.

4. Implement CIS CSC Version 8 Controls:

– Establish secure configurations for all systems and applications (see the configuration-audit sketch after this list).

– Enable continuous vulnerability management and patching processes.

– Deploy strong access controls, including multi-factor authentication and privilege management.
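To illustrate what verifying secure configurations can look like in practice, here is a small, hedged sketch that audits an OpenSSH server configuration against a few expected hardening values. The expected settings are illustrative only; the authoritative values come from the relevant CIS Benchmark for your platform:

```python
import re

# Illustrative hardening expectations; consult the applicable CIS Benchmark for real values.
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def audit_sshd(path="/etc/ssh/sshd_config"):
    """Return a list of findings where the live config differs from expectations."""
    findings = []
    with open(path) as f:
        text = f.read()
    for key, wanted in EXPECTED.items():
        match = re.search(rf"^\s*{key}\s+(\S+)", text, re.MULTILINE | re.IGNORECASE)
        actual = match.group(1).lower() if match else "(not set)"
        if actual != wanted:
            findings.append(f"{key}: expected '{wanted}', found '{actual}'")
    return findings

if __name__ == "__main__":
    for finding in audit_sshd():
        print("FINDING:", finding)
```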

5. Implement Continuous Monitoring and Incident Response:

– Establish a comprehensive incident response plan.

– Deploy intrusion detection and prevention systems.

– Develop a continuous monitoring program to identify and respond to security events.

6. Engage in Security Awareness Training:

– Train employees on security best practices, including email and social engineering awareness.

– Conduct periodic security awareness campaigns to reinforce good cybersecurity hygiene.

– Provide resources for reporting suspicious activities and encouraging a culture of security.

Summary:

Implementing the controls and safeguards outlined in CIS CSC Version 8 requires careful planning and execution. By establishing a governance structure, conducting a baseline assessment, developing an implementation roadmap, implementing the controls, continuous monitoring, and engaging in security awareness training, organizations can strengthen their security posture and mitigate cyber threats effectively. This concise project plan is a starting point for information security practitioners seeking a robust cybersecurity framework.

If you need assistance, get in touch. MSI is always happy to help folks with CIS CSC assessments, control design, or other advisory services. 

 

*This article was written with the help of AI tools and Grammarly.

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface to interface rules or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall”- where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are clearly illustrated above, and explained in detail in the linked wikipedia article, so I won’t repeat that here. 
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the layered model. This outright contradicts what the wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs,[1] is to use two firewalls to create a DMZ.” — wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to embody the idea that “more firewalls would need to be compromised to reach the internal network,” I believe that in the real world it actually reduces the overall security posture and increases risk. Here’s why I feel that way. Two real-world issues often turn designs that look great at first blush, or that “just work” in the lab, into liabilities in production: control complexity and entropy. Before we dig too deeply into those issues though, let’s talk about how the two models are similar. (Note that we are assuming that the firewalls themselves are equally hardened and monitored – i.e., they have adequate and equal security postures, both as independent systems and as a control set in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can occur adequately in either model.
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set has no cascading dependencies on other firewall implementations, and if it is well designed and implemented, analysis at a holistic level is less complex.
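To make the point about holistic analysis concrete, here is a toy sketch (plain Python, not a real firewall configuration) that models a single, unified three-zone policy. Because every rule lives in one structure, questions about exposure can be answered in a single pass:

```python
# Toy model of a unified three-zone policy for a "3 Legged" firewall.
# Each entry is (source zone, destination zone, destination port) -> allowed.
POLICY = {
    ("internet", "dmz", 443): True,    # public web traffic to the DMZ
    ("dmz", "internal", 1433): True,   # DMZ app tier to an internal database
    ("internal", "internet", 443): True,
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: anything not explicitly listed is blocked."""
    return POLICY.get((src_zone, dst_zone, port), False)

# Because the whole rule set is in one place, a question like
# "what can the DMZ reach on the internal network?" is a single pass:
dmz_to_internal = [rule for rule, allowed in POLICY.items()
                   if rule[0] == "dmz" and rule[1] == "internal" and allowed]
print("DMZ -> internal exposures:", dmz_to_internal)
print(is_allowed("internet", "internal", 445))  # False: no direct path exists
```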
 
In the Layered model, the controls are contained across two separate instances, each with different goals, roles and enforcement requirements. However, the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and failures at either firewall to adequately control traffic or adequately design the rule sets could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk. 
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an “error” is.” —DBIR
 
Specifically, misconfigured devices were directly involved in causing 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents and 2,122 confirmed breaches, it still works out to a staggering 76 breaches that resulted from misconfiguration.
 
This is exactly why control complexity matters. Since control complexity correlates directly with misconfiguration and human error, when complexity rises, so does risk; conversely, when controls are simplified, the risk of misconfiguration and human error falls with them.
 
Not to keep beating on the wikipedia article and Stuart Jacobs’ assertions, but his suggestion compounds the complexity further by calling for multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to manage both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, this implementation approach grows rapidly in complexity, required resources and oversight needs as new devices and alternate connections are added.
 
So, which is less complex, a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency of all things to break down? You know what I am about to point out though, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness; this is often referred to as “control drift” or “configuration drift”. In our case, drift in a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, each firewall has a distinct rule set that will degrade – BUT – the two rule sets are interdependent, giving an effective score of 2 for each firewall. You can easily see that as each firewall’s rule set degrades, the private network’s exposure to risk grows more significantly and at a more rapid pace. Simply put, entropy in the more complex, multi-firewall implementation will occur faster and is likely to have a greater impact on risk. Again, add the additional complexity of different firewall types and distinct vendors for each, and the entropy will simply eat you alive…
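One hedged way to keep an eye on configuration drift is to export the current rule set on a schedule and diff it against the approved baseline. The file format here is hypothetical (one rule per line); real firewalls each have their own export formats:

```python
def load_rules(path: str) -> set:
    """Load one rule per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def report_drift(baseline_path: str, current_path: str) -> None:
    """Print rules that were added or removed relative to the approved baseline."""
    baseline = load_rules(baseline_path)
    current = load_rules(current_path)
    for rule in sorted(current - baseline):
        print("ADDED (not in approved baseline):", rule)
    for rule in sorted(baseline - current):
        print("MISSING (approved rule removed):", rule)

if __name__ == "__main__":
    report_drift("fw_baseline.rules", "fw_current_export.rules")
```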
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising alert and log volumes and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host that can talk to the internal network from the DMZ in a manner that you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices that are designed for security and detection. Most attackers ignore the firewalls as a target and simply continue trying to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage, and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the implementation models are equal. Thus, to implement the Layered model over the 3 Legged model is to increase risk, both initially and at a faster pace over time, for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

The Big Three Part 3: Incident Response

It’s been a couple of busy months since we posted parts one and two of this series, so I’ll recap briefly here. Part one talked about the failure of information security programs to protect private data and systems from compromise. It showed that despite tighter controls and better security applications, there are more data security compromises now than ever. This was the basis for suggesting an increased emphasis on incident detection, incident response and user education and awareness: the Big Three.

Part two in the series discussed information security incident detection and how difficult it is to implement effectively. It related the sad statistic that less than one out of five serious data breaches is detected by the organization affected, and that a disturbing number of breaches go undetected for months before finally being uncovered. Part two also touted a combination of well configured security tools coupled with human monitoring and analysis as one of the best solutions to the problem. In this installment, we’ll discuss the importance of accompanying incident detection with an effective, well-practiced incident response plan.

Say an ongoing malware attack on your systems is detected: would your staff know just what to do to stop it in its tracks? If they don’t do everything quickly, correctly and in the right order, what could happen? I can think of a number of possibilities right off the bat. Perhaps all of your private customer information is compromised instead of just a portion of it. Maybe your customer-facing systems become inoperable instead of just running slow for a while. Possibly your company faces legal and regulatory sanctions instead of just having to clean up and reimage the system. Maybe evidence of the event is not collected and preserved correctly and the perpetrator can’t be sued or punished. Horrible consequences like these are the reason effective incident response is increasingly important in today’s dangerous computing environment.

Developing and implementing an incident response plan is very much like the fire drills that schools carry out or the lifeboat drills everyone has to go through as part of a holiday cruise. It is really just a way to prepare in case some adverse event occurs. It is deconstructing all the pieces-parts that make up security incidents and making sure you have a way to deal with each one of them.

When constructing your own incident response plan, it is wise to go about it systematically and to tailor it to your own organization and situation. First, consider the threats your business is menaced by. If you have been conducting risk assessments, those threats should already be listed for you. Then pick the threats that seem the most realistic and think about the types of information security incidents they could cause at your organization. These will be the events that you plan for.

Next, look over incident response plans that similar organizations employ and read the guidance that is readily available out there (just plug “information security incident response guidelines” into a web browser and see what you get; templates and implementation advice just jump off the page at you!). Once you have a good idea of what a proper incident response plan looks like, pick the parts that fit your situation best and start writing. This process produces the incident response policies needed for your plan.

After your policies are set, the next step I like to tackle is putting together the incident response team. These individuals are the ones that will have most of the responsibility for developing, updating and practicing the incident response procedures that are the meat of any incident response plan. Armed with the written policies that were developed, they should be an integral part of deciding who does what, when it gets done, where they will meet, how evidence is stored, etc. Typically, an incident response team is made up of management personnel, security personnel, IT personnel, representative business unit personnel, legal representatives and sometimes expert consultants (such as computer forensics specialists).

Once all the policies, personnel and procedures are in place, the next (and most overlooked) part of the plan is regular practice sessions. Just like the fire drills mentioned above, if you don’t actually practice the plan you have put together and learn from the results, it will never work right when you actually need it. In all my time doing this sort of work, I have never seen an incident response practice exercise that didn’t expose flaws in the plan. We recommend picking real-world scenarios when planning your practice exercises and springing the exercise on the team without warning, just as an actual event would.

In the fourth and final installment of this series, we will discuss user education and awareness, another vital component in recognizing and fighting data breaches and system security compromises. 

Thanks to John Davis for this post.

The Big Three

Information security techniques certainly are improving. The SANS Top Twenty Critical Controls, for example, are constantly improving and are being adopted by more and more organizations. Also, security hardware devices and software applications are getting better at a steady rate. But the question we have to ask ourselves is: are these improvements outpacing or even keeping up with the competition? I think a strong argument can be made that the answer to that question is NO! Last year there were plenty of high profile data loss incidents such as the Target debacle. Over 800 million records were compromised that we know of, and who knows how many other unreported security breaches of various types occurred?

So how are we going to get on top of this situation? I think the starkly realistic answer to that question is that we aren’t going to get on top of it. The problem is the age-old dilemma of defense versus attack; attackers will always have the advantage over entrenched defenders. The attackers know where you are, what you have and how you defend it. All they have to do is figure out one way to get over, under or around your defenses and they are successful. We, on the other hand, don’t know who the attackers are, where they’re at or exactly how they will come at us. We have to figure out a way to stop them each and every time – a daunting task to say the least! Sure, we as defenders can turn the tables on the information thieves and go on the attack; that is one way we can actually win the fight. But I don’t think the current ethical and legal environment will allow that strategy to be broadly implemented.

Despite this gloomy prognosis, I don’t think we should just sit on our hands and keep going along as we have been. I think we should start looking at the situation more realistically and shift the focus of our efforts into strategies that have a real chance of improving the situation. And to me, the security capabilities that are most likely to bear fruit are incident detection, incident response and user education and awareness: the Big Three. Over the next several months I intend to expand upon these ideas in a series of blog posts that will delve into tactics and means, so stay tuned if this piques your interest! 

Thanks to John Davis for writing this entry.

The First Five Quick Wins

The Top 20 Critical Controls for Effective Cyber Defense have been around for half a decade now, and are constantly gaining more praise and acceptance among information security groups and government organizations across the globe. One of the main reasons for this is that all of these controls have been shown to stop or mitigate known, real-world attacks. Another reason for their success is that they are constantly being updated and adjusted to fit the changing threat picture as it emerges. 

One of these recent updates is the delineation of the “First Five” from the other “Quick Wins” category of sub-controls included in the guidance (Quick Wins security controls are those that provide solid risk reduction without major procedural, architectural or technical changes to an environment, or that provide substantial and immediate risk reduction against very common attacks – in other words, these are the controls that give you the most bang for the buck). The First Five Quick Wins controls are those that have been shown to be the most effective means yet to stop the targeted intrusions that are doing the greatest damage to many organizations. They include:

  1. Application white listing: Application white listing technology only allows systems to run software applications that are included in the white list. This control prevents both external and internal attackers from implementing malicious and unwanted applications on the system. One caveat that should be kept in mind is that the organization must strictly control access to and modifications of the white list itself. New software applications should be approved by a change control committee and access/changes to the white list should be strictly monitored.
  2. Secure standard images: Organizations should employ secure standard images for configuring their systems. These standard images should utilize hardened versions of underlying operating systems and applications. It is important to keep in mind that these standard images need to be updated and validated on a regular basis in order to meet the changing threat picture.
  3. Automated patching tools and processes: Automated patching tools, along with appropriate policies and procedures, allow organizations to close vulnerabilities in their systems in a timely manner. The standard for this control is patching of both application and operating system software within 48 hours of release.
  4. Removal or replacement of outdated software applications: Many computer networks we test have outdated or legacy software applications present on the system. Dated software applications may have both known and previously undiscovered vulnerabilities associated with them, and are consequently very useful to cyber attackers. Organizations should have mechanisms in place to identify, then remove or replace, such vulnerable applications in a timely manner, just as is done with the patching process above.
  5. Control of administrative privileges and accounts: One of the most useful mechanisms employed by cyber attackers is elevation of privileges. Attackers can turn simple compromise of one client machine into full domain compromise by this means, simply because administrative access is not well controlled. To thwart this, administrative access should be given to as few users as possible, and administrative privileged functions should be monitored for anomalous behavior. MSI also recommends that administrators use separate credentials for simple network access and administrative access to the system. In addition, multi-factor authentication for administrative access should be considered. Attackers can’t do that much damage if they are limited to isolated client machines! (A simple sketch for auditing administrative group membership follows below.)
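As referenced in item 5 above, here is a small, hedged sketch for auditing who actually holds local administrative rights on a Windows host. It parses the output of the built-in net localgroup command, so the parsing is best-effort, and the approved-accounts list is hypothetical:

```python
import subprocess

# Hypothetical list of accounts that are supposed to hold local admin rights.
APPROVED_ADMINS = {"Administrator", "CORP\\secops-admin"}

def local_admins():
    """Parse `net localgroup Administrators` output on a Windows host (best-effort)."""
    out = subprocess.run(["net", "localgroup", "Administrators"],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    # Members appear between the dashed separator line and the trailing
    # "The command completed successfully." message.
    start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
    return [line.strip() for line in lines[start:]
            if line.strip() and not line.startswith("The command completed")]

if __name__ == "__main__":
    for account in local_admins():
        if account not in APPROVED_ADMINS:
            print("Unexpected administrator:", account)
```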

Certainly, the controls detailed above are not the only security controls that organizations should implement to protect their information assets. However, these are the controls that are currently being implemented first by the most security-aware and skilled organizations out there. Perhaps your organization can also benefit from the lessons they have learned.

Thanks to John Davis for writing this post.