Network Segmentation with MachineTruth

About MachineTruth™

We've just released a white paper on leveraging MachineTruth™, our proprietary network and device analytics platform, to segment or separate network environments.

Why Network Segmentation?

The paper covers the reasons to consider network segmentation, including the various drivers across the clients and industries we've worked with to date. It also includes a sample workflow to guide you through the process of performing segmentation with an analytics- and modeling-focused solution, as opposed to the traditional "plug and pray" method many organizations are using today.

Lastly, the paper covers how MachineTruth™ is different from traditional approaches and what you can expect from such a work plan.

To find out more:

If you're considering network segmentation, analysis, inventory, or mapping, then MachineTruth™ is likely a good fit for your organization. Download the white paper today and learn more about how to make segmentation easier, safer, faster, and more affordable than ever before!

Interested? Download the paper here:

https://signup.microsolved.com/machinetruth-segmentation-wp/

As always, thanks for reading and we look forward to working with you. If you have any questions, please drop us a line (info@microsolved.com) or give us a call (614-351-1237) to learn more.

BEC #6 – Recovery

A few weeks ago, we published the Business Email Compromise (BEC) Checklist. The question arose – what if you’re new to security, or your security program isn’t very mature?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

Part 1 and Part 2 covered the first checkpoint in the list – Identify. Part 3 covered the next checkpoint – Protect. Part 4 continued the series – Detect. Part 5 addressed how to Respond.

How do you Identify? Business Email Compromise #1

Recently, we posted the Business Email Compromise (BEC) checklist. We’ve gotten a lot of great feedback on the checklist…as well as a few questions. What if you’re new to security? What if your organization’s security program is newer, and still maturing? How can you leverage this list?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

It’s Dev, not Diva – Don’t set the “stage” for failure

Development: "the act, process, or result of developing; the development of new ideas." That's one of the Merriam-Webster definitions of development.

It doesn’t really matter what you call it…dev, development, stage, test. Software applications tend to be in flux, and the developers, programmers, testers, and ancillary staff need a place to work on them.

Should that place be out on the internet? Let’s think about that for a minute. By their very nature, dev environments aren’t complete. Do you want a work in progress, with unknown holes, to be externally facing? This doesn’t strike me as the best idea.

But, security peeps, we HAVE to have it facing the internet – because REASONS! (Development types…tell me what your valid reasons are?)

And it will be fine – no one will find it, we won’t give it a domain name!

Security through obscurity will not be your friend here…with the advent of Shodan, Censys.io, and other venues…they WILL find it. Ideally, you should only allow access via VPN or other secure connection.

What could possibly go wrong? Well, here's a short list of SOME of the things that MSI has found on, or used to compromise, internet-facing development servers:

  • A test.txt file with sensitive information about the application, configuration, and credentials.
  • Log files with similar sensitive information.
  • .git directories that exposed keys, passwords, and other key development information.
  • A development application that had weak credentials was compromised – the compromise allowed inspection of the application, and revealed an access control issue. This issue was also present in the production application, and allowed the team to compromise the production environment.
  • An unprotected directory that contained a number of files including a network config file. The plain text credentials in the file allowed the team to compromise the internet facing network devices.

And the list keeps going.
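If you want a quick, low-tech way to check your own internet-facing hosts for this kind of debris, a handful of curl probes go a long way. Here's a minimal sketch – the hostname and file paths are placeholders only, and you should only point this at systems you own:

#!/bin/sh
# Probe a host for common development leftovers.
# HOST and the paths below are examples - adjust for your environment.
HOST="dev.example.com"
for path in test.txt debug.log app.log .git/config .git/HEAD config.bak; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://$HOST/$path")
  echo "$HOST/$path -> HTTP $code"
done
# Anything that comes back 200 deserves an immediate look.

It's not a substitute for a real assessment, but it will catch the most obvious leftovers before someone else does.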

But, security peeps – our developers are better than that. This won’t happen to us!

The HealthCare.Gov breach in 2014 (https://www.csoonline.com/article/2602964/data-protection/configuration-errors-lead-to-healthcare-gov-breach.html) was the result of a development server that was improperly connected to the internet. "Exact details on how the breach occurred were not shared with the public, but sources close to the investigation said that the development server was poorly configured and used default credentials."

Another notable breach occurred in 2016 – the outsourcing company Capgemini (https://motherboard.vice.com/en_us/article/vv7qp8/open-database-exposes-millions-of-job-seekers-personal-information) exposed the personal information of millions of job seekers when a development server was connected to the internet.

The State of Vermont also saw its health care exchange – Vermont Connected – compromised in 2014 (https://www.databreachtoday.asia/hackers-are-targeting-health-data-a-7024) when a development server was accessed. The state indicates this was not a breach, because the development server didn't contain any production data.

So the case lands pretty strongly on one side: internet-facing development servers are a bad idea.

Questions? Comments? What’s your take from the development side? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

If you would like to know more about MicroSolved or its services please send an e-mail to info@microsolved.com or visit microsolved.com.

That phone call you dread…

So, you’re a sysadmin, and you get a call from that friend and co-worker…we all know that our buddies don’t call the helpdesk, right?

This person sheepishly admits that they got an email that, in hindsight, looked maybe a bit suspicious. It had an attachment…and they clicked.

Yikes. Now what?

Well, since you’re an EXCELLENT sysadmin, and you work for the best company ever, you’ve done a few things to make sure you’re ready for this day…

  • The company has had a business impact analysis, so all of the relevant policies and procedures are in place.
  • Your backups are in place, offsite, and you know you can restore them with a modicum of effort – and because you’ve done baselines, you know how long it will take to restore.
  • Your team has been doing incident response tabletops, so all of the IR processes are documented and up-to-date. And you set it up to be a good time, so they were fully engaged in the process.

But now, one of your people has clicked…now what, indeed…

  • Pull. The. Plug. Disconnect that system. If it’s hard wired, yank the cord. If it’s on a wifi network, kick it off – take down the whole wifi network if feasible. The productivity that you’ll lose will be outweighed by the gains if you can stop lateral spread of the infection.
  • Pull any devices – external hard drives, USB sticks, etc.
  • DO NOT power the system off – not yet! If you need to do forensics, the live system memory will be important.

Now you can breathe, but just for a minute. This is the time to act with strategy as well as haste. Establish whether you’ve got a virus or ransomware infection, or if the ill-advised click was an attachment of another nature.

If it’s spam, but not malicious:

  • Check the email information in your email administration portal, and see if it was delivered to other users. Notify them as necessary.
  • Evaluate key features of the email – are there changes you should make to your blocking and filtering? Start that process.
  • Parse and evaluate the email headers for IPs and/or domains that should be blocked. See if there are indicators of other emails with these parameters that were blocked or delivered. (A quick header-parsing sketch follows this list.)
  • Add the scenario of this email to your user education program for future educational use.
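For the header parsing step above, a couple of quick command lines will get you most of the way. This is a rough sketch that assumes you've saved the full message headers to a file called headers.txt (a hypothetical name); verify every hit before you block anything:

# Pull candidate IP addresses out of the saved headers.
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' headers.txt | sort -u

# Pull the domains referenced in From/Return-Path/Reply-To lines.
grep -Ei '^(From|Return-Path|Reply-To):' headers.txt | grep -Eo '@[A-Za-z0-9.-]+' | tr -d '@' | sort -u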

If it’s a real infection, full forensics is beyond the scope of this blog post. But we’ll give a few pointers to get you started.

If it’s a virus, but not ransomware:

  • If the file that was delivered is still accessible, use VirusTotal and other sites to see if it's known to be malicious. The hash can be checked, as well as the file itself (see the hash lookup sketch after this list).
  • Consider a full wipe of the affected system, as opposed to a virus removal – unless you’re 100% successful with removal, repeated infection is likely.
  • All drives or devices – network, USB, etc. – that were connected to the system should be suspect. Discard those you can, clean network drives or restore from backup.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.
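For the hash check mentioned in the first bullet, here's a minimal sketch. The file name is a placeholder, and the lookup assumes you have a VirusTotal API key and their v3 files endpoint; you can also simply paste the hash into the VirusTotal web UI:

# Compute the SHA-256 of the suspect attachment and look it up on VirusTotal.
HASH=$(sha256sum suspect_attachment.doc | awk '{print $1}')
echo "$HASH"
curl -s -H "x-apikey: $VT_API_KEY" "https://www.virustotal.com/api/v3/files/$HASH"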

If it’s ransomware:

  • Determine what kind of ransomware you are dealing with.
  • Determine the scope of the infection – ancillary devices, network shares, etc.
  • Check to see if a decrypt tool is available – be aware these are not always successful.
  • Paying the ransom, or not, is a business decision – often the ransom payments are not successful, and the files remain encrypted. Address this in your IR plan, so the company policy is defined ahead of time.
  • Restore files from backup.
  • Strongly consider a full wipe of the system, even if the files are decrypted.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems (a quick log-grep sketch follows this list).
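For that log review, the same command-line habits from our log analysis posts apply. A rough sketch, assuming a text export of your firewall logs and the infected host's IP (both placeholders here):

# Every log line that mentions the infected host.
grep '10.0.0.42' firewall_export.log > infected_host.txt

# Summarize who that host talked to, most frequent pairs first
# ($8 and $9 are example column positions - adjust for your firewall's log layout).
grep '10.0.0.42' firewall_export.log | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort | uniq -c | sort -n -r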

In all cases, go back and map the attack vector. How did the suspect attachment get in, and how can you prevent it going forward?

What are your thoughts? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

HPSS and Splunk

We’ve had a few users ask how to feed alerts from the HPSS Console into a SIEM. In these cases it was Splunk, so I will show how to quickly get a feed going into Splunk and some basic visualizations. I chose Splunk since that’s what I helped the users with, but any SIEM that will take syslog will work.

The first step is to get the HPSS Console set up to externally log events. This can be enabled by checking the "Enable System Logging" option in the preferences window. What happens with the output depends on your OS. On Windows the events are written to the Event Log, and on Linux/MacOS they are handled by the syslog daemon. Alternatively, you can use the Console plugin system if syslog/eventlog is not flexible enough.

HPSS Preferences Window

Before we go further, we'll need to configure Splunk to read in the data, or even set up Splunk if you don't have an existing system. For this blog post, I used the Splunk Docker image to get it up and running in a couple of minutes in a container.

In Splunk we’ll need to create a “source type”, an “index” and a “data input” to move the data into the index. To create the source type, I put the following definitions in the local props.conf file located in $SPLUNK_HOME/etc/system/local (you may need to create the props.conf file)

[hpss]
EXTRACT-HPSSAgent = Agent: (?P<Honeypoint_Agent>[^ ]+)
EXTRACT-Attacker_IP = from: (?P<Attacker_IP>[^ ]+)
EXTRACT-Port = on port (?P<Port>[^ ]+)
EXTRACT-Alert_Data = Alert Data: (?P<Alert_Data>.+)
TIME_PREFIX = at\s
MAX_TIMESTAMP_LOOKAHEAD = 200
TIME_FORMAT = %Y-%m-%d %H:%M:%S

This tells Splunk how to extract the data from the event. You can also define this in the Splunk web interface by going to Settings -> Source Types and creating a new source type.

Source Type definition

Next create the Index under Settings -> Indexes. Just giving the index a name and leaving everything default will work fine to get started. 

To create a Data Input, go to Settings -> Data Inputs.  I’m going to set it up to directly ingest the data through a TCP socket, but if you already have a setup to read files from a centralized logging system, then feel free to use that instead.

 Set the port and protocol to whatever you would like.

For the source type, manually typing in “hpss” (or whatever you named it) should bring up the already defined source type. Select that, and everything else can remain as is. Then go to review and finish. It’s now ready for you to ship the events to it.

Lastly, we need to get the logs from the Console system to Splunk. Again, this will differ depending on your OS. I will show one way to do this on Windows and one for Linux; however, there are numerous ways to do it. In both cases, replace the IP and port with those of your Splunk instance.

On Windows you can use NXLog or another type of eventlog to syslog shipper. After installing NXLog, edit the following into the configuration file.

define ROOT C:\Program Files\nxlog
#define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

<Input in>
Module im_msvistalog
Query <QueryList>\
<Query Id="0">\
<Select Path="HPConsole">*</Select>\
</Query>\
</QueryList>
SavePos TRUE

</Input>

<Output out>
Module om_udp
Host 192.168.232.6
Port 1514
</Output>

<Route 1>
Path in => out
</Route>

On Linux with rsyslog, create a conf file with the following (the double @@ forwards over TCP; use a single @ for UDP):

:msg,contains,"HPSS Agent" @@192.168.232.6:1514

Now Splunk should be receiving any HPSS events sent to it and storing them in the defined index, and extracting the fields during search queries.
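If you want to sanity-check the pipeline before real alerts arrive, you can hand-feed a test line into the data input. The message below is a synthetic example loosely modeled on the field extractions above (the real Console output format may differ slightly), and it assumes the TCP input on port 1514 from earlier:

# Send one synthetic HPSS-style event to the Splunk TCP data input.
echo 'HPSS Agent: web-dmz-01 alert from: 203.0.113.50 on port 2323 at 2019-03-01 14:22:31 Alert Data: telnet banner grab' | nc 192.168.232.6 1514

If the extractions are working, searching the index for that event should show Honeypoint_Agent, Attacker_IP, Port, and Alert_Data populated.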

In the future we can look at creating some graphs and analyze the events received. If there is any interest, I can look at creating a Splunk app to configure all of this for you.

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use, one of which covered converting column-based logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways to achieve different analytical views.

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using the sort -n -r like before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by and the input file is the name of the file to sort)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types the input file and sends it to sort for sorting on the third column in numeric order) (-r would, of course, reverse it…)
    • You can write the output of this to a file with the redirect "> filename.txt", i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use ">>" as the redirect in order to create a file if it doesn't exist OR append to a file if it does exist… i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of "When I use the techniques from your post, the outputs of the commands are column separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?" Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, take an input of column data, print out the columns we want (can be any; in this case I want the first 3 columns), and make the output comma delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output CSV
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP are pulled out and reduced to unique pairs, along with the number of times each pair appears in the input log (I use this as a "hit rate", as I described earlier)
      • A common question: why do I ask for two columns in the first awk and then ask for three columns in the second awk?
        • The answer, of course, is that the first awk prints only the two IP columns; it's the sort | uniq -c step that prepends the "hit rate" count column, so the second awk needs all three fields to format the output properly.

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share with the community. Exploration is awesome, so it will encourage users to play more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks also works in Linux, etc.)
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field #’s in the text analysis, pulled from the log header for sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep, show me the lines that don't match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}' which prints specific columns in CSV files
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of Source IP/Dest IP/Dest Port unique combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP addresses, in descending order by how many times they talk to each other in the log (note that their reversed pairings will be separate, if they are present – that is, if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at what applications have been identified by the firewall, in descending order by number of times that application identifier appears in the log
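Since the field list above also includes $32 (bytes), here is one more variation I find handy; this is a sketch along the same lines, so adjust the field numbers to your own log header:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {bytes[$8" "$9] += $32} END {for (pair in bytes) print bytes[pair], pair}' | sort -n -r > bytes_by_pair.txt
    • this one produces the same unique Source & Destination IP pairs, but ranked by total bytes transferred between them instead of by the number of log entries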
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there!