How To Implement a Basic ZTNA Architecture

 


Implementing a Zero Trust Network Access (ZTNA) architecture is increasingly essential for organizations aiming to secure their networks against evolving cyber threats. Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside their perimeter, and instead must verify everything trying to connect to their systems before granting access.

1. Define the Protect Surface

Identify the critical data, applications, assets, and services (DAAS) that need protection. This step is crucial as it allows you to focus your resources and security measures on the most valuable and vulnerable parts of your network.
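
To make the protect surface concrete, it can help to keep even a simple machine-readable inventory. The sketch below is a minimal illustration in Python; the asset names, owners and criticality ratings are hypothetical placeholders, not a prescribed schema.

    # Illustrative sketch: a minimal DAAS (data, applications, assets, services)
    # inventory. Names, owners and criticality ratings are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ProtectSurfaceItem:
        name: str          # e.g., "customer-db"
        category: str      # "data", "application", "asset", or "service"
        owner: str         # accountable business owner
        criticality: int   # 1 (low) to 5 (crown jewel)

    protect_surface = [
        ProtectSurfaceItem("customer-db", "data", "Finance", 5),
        ProtectSurfaceItem("hr-portal", "application", "HR", 4),
        ProtectSurfaceItem("build-server", "asset", "Engineering", 3),
    ]

    # Focus controls on the highest-value items first.
    for item in sorted(protect_surface, key=lambda i: i.criticality, reverse=True):
        print(f"{item.criticality}: {item.name} ({item.category}, owner: {item.owner})")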

2. Map the Transaction Flows

Understand how traffic moves across your network. Mapping the traffic will help you identify legitimate access patterns and needs, which is essential for setting up appropriate security policies.
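
One hedged way to start this mapping is to summarize existing flow or firewall logs into a source-to-destination picture. The sketch below assumes simple CSV records of source, destination and port; a real NetFlow or firewall export will need its own parser.

    # Illustrative sketch: summarize flow records into observed access patterns.
    # Assumes CSV lines of "src_ip,dst_ip,dst_port"; adapt to your actual export.
    import csv
    from collections import Counter

    def map_flows(path):
        flows = Counter()
        with open(path, newline="") as fh:
            for src, dst, port in csv.reader(fh):
                flows[(src, dst, port)] += 1
        return flows

    if __name__ == "__main__":
        for (src, dst, port), count in map_flows("flows.csv").most_common(20):
            print(f"{src} -> {dst}:{port}  ({count} flows)")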

3. Architect a Zero Trust Network

Create a micro-segmented network architecture. Micro-segmentation involves dividing the network into small zones to maintain separate access for different parts of the network. Each segment or zone should have its own security settings, and access should be restricted based on the principle of least privilege.
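
The idea can be sketched in a vendor-neutral way as a deny-by-default matrix between segments, with only the flows mapped in step 2 explicitly allowed. The segment names and permitted flows below are assumptions for illustration, not syntax for any particular firewall.

    # Illustrative sketch: deny-by-default segment policy. Segment names and
    # allowed flows are hypothetical; translate them into your firewall's rules.
    ALLOWED = {
        ("user-lan", "server-zone", "https"),
        ("dev-test", "dev-test", "any"),
        ("dmz", "server-zone", "https"),
    }

    def is_allowed(src, dst, service):
        """Everything not explicitly allowed is denied (least privilege)."""
        return (src, dst, service) in ALLOWED or (src, dst, "any") in ALLOWED

    print(is_allowed("user-lan", "server-zone", "https"))  # True
    print(is_allowed("user-lan", "dev-test", "ssh"))       # False: denied by default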

4. Create a Zero Trust Policy

Develop a policy that specifies how resources in the network are accessed, who can access these resources, and under what conditions. This policy should enforce that only authenticated and authorized users and devices are allowed access to the specified network segments and resources.
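
A policy like this is easiest to reason about when it is written down as an explicit decision function. The attributes checked in the sketch below (authentication state, device posture, permitted segments) are assumptions about what your identity provider and device-management tooling can supply.

    # Illustrative sketch: a zero trust access decision. Attribute names are
    # hypothetical; real deployments pull them from the IdP, MDM and inventory.
    def access_decision(user, device, resource):
        if not user.get("authenticated") or not user.get("mfa_passed"):
            return False, "authentication incomplete"
        if not device.get("managed") or not device.get("patched"):
            return False, "device fails posture check"
        if resource["segment"] not in user.get("allowed_segments", []):
            return False, "segment not permitted for this role"
        return True, "allowed"

    user = {"authenticated": True, "mfa_passed": True, "allowed_segments": ["server-zone"]}
    device = {"managed": True, "patched": True}
    resource = {"segment": "server-zone"}
    print(access_decision(user, device, resource))  # (True, 'allowed')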

5. Monitor and Maintain Network Security

Implement security monitoring tools to inspect and log network traffic constantly. This can help detect and respond to threats in real-time. Regular audits and updates of the zero trust policies and architecture should be performed to adapt to new threats and changes in the organization.
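
Commercial monitoring and SIEM tools do the heavy lifting here, but the underlying idea can be sketched simply: watch the log stream and flag events that fall outside the access patterns you mapped earlier. The log format and threshold below are assumptions for illustration.

    # Illustrative sketch: flag repeated denied-access events from one source.
    # Assumes log lines like "DENY src=10.1.2.3 dst=10.9.8.7:443"; a real SIEM
    # pipeline replaces this with its own parsing and alerting.
    import re
    from collections import Counter

    DENY_RE = re.compile(r"DENY src=(\S+) dst=(\S+)")
    THRESHOLD = 10  # alert after this many denies from one source (tunable)

    def scan(lines):
        denies = Counter()
        for line in lines:
            m = DENY_RE.search(line)
            if m:
                denies[m.group(1)] += 1
        return [src for src, n in denies.items() if n >= THRESHOLD]

    sample = ["DENY src=10.1.2.3 dst=10.9.8.7:443"] * 12
    print(scan(sample))  # ['10.1.2.3'] -- a candidate for investigation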

6. Leverage Multi-factor Authentication (MFA)

Enforce MFA to minimize the chance of unauthorized access. MFA requires users to provide two or more verification factors to gain access to a resource, adding an extra layer of security.
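
As one concrete illustration, time-based one-time passwords (TOTP, RFC 6238) are a common second factor. The sketch below validates a code against a shared secret using only the Python standard library; the secret shown is a placeholder, and in production you would rely on a vetted MFA product or service rather than hand-rolled code.

    # Illustrative sketch: RFC 6238 TOTP verification with the standard library.
    # The base32 secret is a placeholder; use a vetted MFA product in production.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, for_time=None, step=30, digits=6):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((for_time or time.time()) // step)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify(secret_b32, submitted, window=1):
        now = time.time()
        return any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
                   for drift in range(-window, window + 1))

    SECRET = "JBSWY3DPEHPK3PXP"  # demo secret only
    print(verify(SECRET, totp(SECRET)))  # True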

7. Implement Least Privilege Access

Ensure that users only have access to the resources that they need to perform their job functions. This should be strictly enforced through rigorous access controls and ongoing management of user permissions.
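
One way to keep this enforceable is to express permissions as explicit role-to-resource grants and refuse anything not listed. The roles, resources and actions below are hypothetical examples.

    # Illustrative sketch: role-based least privilege check. Roles, resources
    # and actions are hypothetical.
    GRANTS = {
        "helpdesk":  {("ticketing", "read"), ("ticketing", "write")},
        "dba":       {("customer-db", "read"), ("customer-db", "admin")},
        "developer": {("build-server", "read"), ("build-server", "write")},
    }

    def permitted(role, resource, action):
        """Anything not explicitly granted is refused."""
        return (resource, action) in GRANTS.get(role, set())

    print(permitted("helpdesk", "ticketing", "write"))   # True
    print(permitted("helpdesk", "customer-db", "read"))  # False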

8. Utilize Endpoint Security Solutions

Secure all endpoints that access the network by ensuring they meet your security standards before they are allowed to connect. This often includes anti-malware and antivirus software as well as endpoint detection and response (EDR) solutions.
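
EDR and anti-malware products handle the enforcement, but the gating logic can be sketched as a checklist evaluated before a device is allowed to connect. The attribute names below are assumptions about what your endpoint management tooling reports.

    # Illustrative sketch: a pre-connection posture check. Attribute names are
    # hypothetical; real values come from your EDR/MDM agent's reporting.
    REQUIRED_CHECKS = ("disk_encrypted", "edr_agent_running",
                       "os_patched", "av_signatures_current")

    def posture_ok(report):
        failures = [check for check in REQUIRED_CHECKS if not report.get(check)]
        return (len(failures) == 0, failures)

    report = {"disk_encrypted": True, "edr_agent_running": True,
              "os_patched": False, "av_signatures_current": True}
    print(posture_ok(report))  # (False, ['os_patched']) -- quarantine until patched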

9. Educate and Train Employees

Provide regular training to all employees on your cybersecurity policies, the importance of security in the workplace, and best practices for maintaining security hygiene. A well-informed workforce can be your first line of defense against cyber threats.

10. Engage Expert Assistance

For organizations looking to develop or enhance their Zero Trust architectures, it is often beneficial to engage with cybersecurity experts who can provide tailored advice and solutions. MicroSolved, Inc. (MSI) has been at the forefront of information security, risk management, and compliance solutions since 1992. MSI offers expert guidance in strategic planning, configuration, policy development, and procedure optimization to ensure your Zero Trust implementation is robust, effective, and tailored to your specific organizational needs. Contact MSI to see how we can help your security team succeed in today’s threat landscape.

 

* AI tools were used as a research assistant for this content.

 

Network Segmentation: A Best Practice We Should All be Using

It would be nice to be able to say that we are winning the war; that network security efforts are slowly getting the better of the bad guys. But I can't do that. Despite all the money being thrown at security tools and hosted services, the cyber-thugs are improving their game at a faster rate than we are. The ten worst known cybersecurity breaches of this century have all taken place since 2008, and 2013 and 2014 are notorious for their information security incidents.

I think there are a multitude of reasons for this state of affairs. One is confusion, indecisiveness and slow reaction times among regulatory bodies and standards providers. Another is the "check the box" compliance mentality that exists both in government agencies and in the private sector. A third is simply the insane rate of innovation in the information technology realm. There are many more. But despite the reasons, one thing is clear: we have to stop rigidly complying with baseline standards and move into the more flexible and effective world of best practices. And today the best practice I want to touch on is network segmentation.

In our business we see a lot of computer networks that are just flat. There is little or no network segmentation, and anyone on the inside can pretty much see everything. I can't begin to tell you how easy this kind of setup makes it for us during penetration testing: success is virtually assured! And it's amazing how even just basic network segmentation can slow us down or stop us altogether.

A good reason to start with network segmentation is that you can go at it in easy stages. Maybe you can begin by segmenting off a separate development or test network. Those are pretty basic and can give your networking team some valuable experience for more difficult efforts to come. Then you can ensure that user space is separated from server space. Doing just that much can have an amazing effect – it really helps to thwart successful cyber-attacks.

As the team gains confidence in their abilities, they can move on to the next step: real enclaving of the network. This is anything but a trivial effort, and it requires detailed knowledge of the various functions of the different business departments and how information moves into and out of each one of them (a task made very much easier if the company has a good business continuity program and business impact analysis in place). But in the long run these efforts will be well worth the trouble. It is very difficult indeed to gain access to or exfiltrate information from a well-enclaved network, especially from the Internet.
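
One way to verify that enclaving is actually holding is to test, from a host inside one enclave, that hosts in other enclaves are unreachable except on the services the business flows require. A minimal sketch in Python follows; the target addresses, ports and expectations are placeholders.

    # Illustrative sketch: probe whether segmentation blocks traffic as intended.
    # Run from a host inside the source enclave; targets and ports are placeholders.
    import socket

    def reachable(host, port, timeout=2):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Expectation: only the explicitly allowed service should succeed.
    tests = [("10.20.0.5", 443, True), ("10.20.0.5", 3389, False), ("10.30.0.9", 22, False)]
    for host, port, expected in tests:
        result = reachable(host, port)
        status = "OK" if result == expected else "GAP"
        print(f"{status}: {host}:{port} reachable={result}, expected={expected}")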

This blog post by John Davis.


Using TigerTrax to Analyze Device Configurations & Discover Networks

One of the biggest challenges that our M&A clients face is discovering what networks look like, how they are interconnected and what assets are priorities in their newly acquired environments. Sure, you bought the company and the ink is drying on the contracts — but now you have to fold their network into yours, make sure they meet your security standards and double check to make sure you know what’s out there.

That’s where the trouble begins. Because, in many cases, the result is “ask the IT folks”. You know, the already overworked, newly acquired, untrusted and now very nervous IT staff of the company you just bought. Even if they are honest and expeditious, they often forget some parts of the environment or don’t know themselves that parts exist…

Thus, we get brought in, as a part of our Information Security Mergers & Acquisitions practice. Our job is usually to discover assets, map the networks and perform security assessments to identify gaps that don’t meet the acquiring company’s policies. Given that we have had to do this so often, we have designed a great new technique for performing these types of mapping and asset identification engagements. For us, instead of asking the humans, we simply ask the machines. We accumulate the router, switch, firewall and other device configurations and then leverage TigerTrax’s unique analytics capabilities to quickly establish network instances, interconnections, prioritized network hosts & segments, common configuration mistakes, etc. “en masse”. TigerTrax then outputs that data for the MSI analysts, who can quickly perform their assessments, device reviews and inventories — armed with real-world data about the environment!
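
TigerTrax itself is proprietary, but the general idea of asking the machines can be sketched: pull subnet information out of the collected device configurations and build a first-pass inventory of the networks that exist. The toy sketch below assumes Cisco-style "ip address <address> <mask>" interface lines and a directory of saved configs; it illustrates the approach, not the tool.

    # Illustrative sketch of the approach (not TigerTrax): extract configured
    # subnets from Cisco-style device configs to inventory network segments.
    import glob
    import ipaddress
    import re

    ADDR_RE = re.compile(r"^\s*ip address (\d+\.\d+\.\d+\.\d+) (\d+\.\d+\.\d+\.\d+)", re.M)

    def discovered_networks(config_dir):
        networks = {}
        for path in glob.glob(f"{config_dir}/*.cfg"):
            with open(path) as fh:
                for addr, mask in ADDR_RE.findall(fh.read()):
                    net = ipaddress.ip_network(f"{addr}/{mask}", strict=False)
                    networks.setdefault(str(net), set()).add(path)
        return networks

    for net, sources in sorted(discovered_networks("configs").items()):
        print(f"{net}  seen in: {', '.join(sorted(sources))}")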

This approach has been winning us client kudos again and again!

Want to discuss our M&A practice and the unique ways that TigerTrax and MSI can help you before, during and after a merger or acquisition? Give us a call at (614) 351-1237 or drop us a line at info (at) microsolved /dot/ com. We’d be happy to schedule a FREE, no commitment & no pressure call with our Customer Champions & our security engineers.

Yandex.ru Indexing Crawler Issues

The yandex.ru crawler is an indexing application that spiders hosts and puts the results into the yandex.ru search engine. Like Google, Bing and other search engines, the system continually searches out new content on the web and adds that content to the search engine database. Usually, these types of activities cause few issues for those whose sites are being indexed, and in fact, over the years an etiquette system based on rules placed in the robots.txt file of a web site has emerged.

Robots.txt files provide a rule set for search engine behaviors. They indicate what areas of a site a crawler may index and what sections of the site are to be avoided. Usually this is used to protect overly dynamic areas of the site where a crawler could encounter a variety of problems or inputs that can cause bandwidth or application issues for the crawler, the web host or both.
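
For reference, Python's standard library includes a parser that shows how a well-behaved crawler is expected to consume these rules before fetching anything. The site URL and user-agent string below are placeholders.

    # Illustrative sketch: how a well-behaved crawler checks robots.txt before
    # fetching. The URL and user-agent are placeholders.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")
    rp.read()  # fetch and parse the site's rules

    for path in ("/", "/search?q=deeply+dynamic+query"):
        allowed = rp.can_fetch("ExampleBot", f"https://www.example.com{path}")
        print(f"{path}: {'allowed' if allowed else 'disallowed'}")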

Sadly, many web crawlers and index bots do not honor the rules of robots.txt. Nor do attackers who are indexing your site for a variety of attack reasons. Given the impacts that some of these indexing tools can have on bandwidth, CPU use or database connectivity, other options for blocking them are sometimes sought. In particular, there are a lot of complaints about yandex.ru and their aggressive parsing, application interaction and deep site inspection techniques. They clearly have been identified as a search engine that does not seem to respect the honor system of robots.txt. A Google search for “yandex.ru ignores robots.txt” will show you a wide variety of complaints.

In our monitoring of the HITME traffic, we have observed many deep crawls by yandex.ru from a variety of IP ranges. In the majority of them, they either never requested the robots.txt file at all, or they simply ignored the contents of the file altogether. In fact, some of our HITME web applications have experienced the same high traffic cost concerns that other parts of the web community have been complaining about. In a couple of cases, the cost of supporting the scans by yandex.ru represents some 30+% of the total web traffic observed by the HITME endpoint. From our standpoint, that's a pain in the pocketbook, and it's a drain on our attention to continually parse their alert traffic out of our metrics.

Techniques for blocking yandex.ru more forcibly than robots.txt have emerged. You can learn about some of them by searching “blocking yandex.ru”. The easiest approach, and one that has proven effective, is to use .htaccess rules. We’ve also had some more modest success with forcibly returning redirects to requests with known URL parameters associated with yandex.ru, along with some level of success by blocking specific IPs associated with them via an ignore rule in HoneyPoint.
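
The .htaccess approach works at the web server layer; the same idea can also be applied inside an application. The sketch below is a minimal WSGI middleware in Python that refuses requests by User-Agent substring and source IP prefix; the patterns and address range are placeholders, and user-agent strings can of course be spoofed, so treat this as one layer among several.

    # Illustrative sketch: application-level blocking of an unwanted crawler by
    # User-Agent and source IP prefix. Patterns and addresses are placeholders.
    BLOCKED_AGENT_SUBSTRINGS = ("yandex",)
    BLOCKED_IP_PREFIXES = ("203.0.113.",)  # placeholder range

    class CrawlerBlocker:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            agent = environ.get("HTTP_USER_AGENT", "").lower()
            addr = environ.get("REMOTE_ADDR", "")
            if any(s in agent for s in BLOCKED_AGENT_SUBSTRINGS) or \
               addr.startswith(BLOCKED_IP_PREFIXES):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return self.app(environ, start_response)

    # Usage: wrap your existing WSGI application, e.g. app = CrawlerBlocker(app)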

If you are battling yandex.ru crawling and want to get some additional help, drop us a comment or get in touch via Twitter (@lbhuston, @microsolved). You can also give an account representative a call to arrange for a more technical discussion. We hope this post helps some folks who are suffering increased bandwidth use or problems with their sites/apps due to this and other indexing crawler issues. Until next time, stay safe out there!