How do you “Identify”?

Recently, we posted the Business Email Compromise (BEC) checklist. We’ve gotten a lot of great feedback on the checklist…as well as a few questions. What if you’re new to security? What if your organization’s security program is newer, and still maturing? How can you leverage this list?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

Let’s begin with the first category – Identify. The first action item under Identify is:

  • Identify and catalog all of the authentication portals where attackers could test stolen credentials or leverage them for access.

What does that mean from a real-world perspective? The first step is understanding the external exposure of the organization. How can you do this? A few items that you might leverage to catalog the exposure would be:

  • Asset tracking – what do you know about?
  • Firewall audits (both ingress and egress) – what are you allowing in and out, and how can they proceed past the firewall protections?
  • Network maps – do you have them? Are they up-to-date? These should be living documents, reviewed regularly with staff who don’t maintain them as a sanity check.
  • Vulnerability assessments – when was the last time you had one performed? These are valuable not only from a catalog perspective, but for detecting old, defunct, and unintentional exposure. (If you need a starting point, see the sketch below.)
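If you’re starting from scratch, a simple external scan can seed the catalog. Below is a minimal sketch using nmap; the address range and port list are placeholders, so substitute your organization’s actual external space – and only scan ranges you are authorized to test.

# Sketch: look for common authentication-facing services on YOUR external
# range (203.0.113.0/24 is a placeholder) and save a greppable record.
nmap -Pn --open -p 22,25,80,110,143,443,993,995,3389,8443 -oG external_exposure.txt 203.0.113.0/24

# Pull the hosts with open ports into a quick working list
grep "open" external_exposure.txt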

Don’t forget to consider access points that you may not directly control. Remote systems, such as cloud-based email servers that rely on network authentication, or vendor systems that leverage SSO (single sign-on), are also attack vectors.

The next step would be to catalog your exposed surfaces and their authentication mechanisms. Do they leverage Active Directory (AD), SSO, or another method? Is multi-factor authentication available, and if so, is it employed? What is the crossover between these exposures – what could one compromised password potentially access in other systems?
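The catalog itself doesn’t need to be fancy – a spreadsheet or CSV is enough to start answering the crossover question. The entries below are hypothetical, just to show the shape:

portal,auth_method,mfa_enabled,shares_credentials_with
mail.example.com (OWA),Active Directory,no,VPN and intranet
vpn.example.com,AD + RADIUS,yes,OWA and intranet
hr.vendor-example.com,SSO (SAML),yes,everything behind the IdP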

Once you feel you have an adequate catalog, it’s time to consider the individual exposures, and review the information you have available from each.

  • Does the device support logging, and is logging enabled?
  • Are the logs being preserved? A minimum standard would be your organization’s records retention policy.
  • Are the logs configured properly, so that success and failure are logged, and the specifics of the attempt are captured?
  • Are the logs capturing information sufficient for investigation? For example, if the logs capture IP addresses, but you use DHCP with short lease times, are they also configured to capture hostname and user ID so that you can properly identify a problematic system? (A quick sanity check is sketched after this list.)
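As a hedged example of that kind of sanity check on a Linux host, you might pull the failed logins from the auth log and confirm each entry actually carries the user and source details you’d need in an investigation (the log path varies by distribution):

# Sketch: top sources of failed SSH logins; verify the lines include
# the account name and source IP, not just a timestamp.
grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -n -r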

This is a pretty big step for step 1 – so we’ll pause here for now. How does your organization compare against these suggestions?

Questions? Comments? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!


Business E-mail Compromise (BEC) Checklist

MSI has recently received requests from a variety of sources for guidance around the configuration and management of business e-mail.

In response, our CEO Brent Huston has created a checklist (link is below) that:

  • Enumerates attack vectors
  • Briefly reviews impacts
  • Lists control suggestions mapped back to the NIST framework model

This is a must-read for security and IT practitioners, as it helps to make sure you have your bases covered! As always, if you have questions or want to know more, please reach out to Info@microsolved.com!

https://s3.amazonaws.com/MSIMedia/BECChecklist082918.pdf

It’s Dev, not Diva – Don’t set the “stage” for failure

Development: the act, process, or result of developing; the development of new ideas. This is one of the Merriam-Webster definitions of development.

It doesn’t really matter what you call it…dev, development, stage, test. Software applications tend to be in flux, and the developers, programmers, testers, and ancillary staff need a place to work on them.

Should that place be out on the internet? Let’s think about that for a minute. By their very nature, dev environments aren’t complete. Do you want a work in progress, with unknown holes, to be externally facing? This doesn’t strike me as the best idea.

But, security peeps, we HAVE to have it facing the internet – because REASONS! (Development types…tell me what your valid reasons are?)

And it will be fine – no one will find it, we won’t give it a domain name!

Security through obscurity will not be your friend here…with the advent of Shodan, Censys.io, and other venues…they WILL find it. Ideally, you should only allow access via VPN or another secure connection.
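As a hedged sketch of what that restriction might look like on a Linux host fronting a dev web app (10.8.0.0/24 is a placeholder for your VPN subnet):

# Allow the dev site only from the VPN subnet; drop everyone else.
# Rule order matters – the ACCEPT must come before the DROP.
iptables -A INPUT -s 10.8.0.0/24 -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP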

What could possibly go wrong? Well, here’s a short list of SOME of the things that MSI has found or used to compromise a system, from an internet facing development server:

  • A test.txt file with sensitive information about the application, configuration, and credentials.
  • Log files with similar sensitive information.
  • .git directories that exposed keys, passwords, and other sensitive development information (a quick check for this one is sketched after the list).
  • A development application with weak credentials was compromised – inspecting it revealed an access control issue that was also present in the production application, which allowed the team to compromise the production environment.
  • An unprotected directory that contained a number of files, including a network config file. The plain-text credentials in that file allowed the team to compromise the internet-facing network devices.

And the list keeps going.
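For the .git exposure in particular, checking your own dev hosts costs almost nothing. A minimal sketch, with dev.example.com as a placeholder:

# If this returns 200 (and the content looks like "ref: refs/heads/master"),
# the repository metadata is exposed and the source can often be reconstructed.
curl -s -o /dev/null -w "%{http_code}\n" https://dev.example.com/.git/HEAD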

But, security peeps – our developers are better than that. This won’t happen to us!

The HealthCare.Gov breach https://www.csoonline.com/article/2602964/data-protection/configuration-errors-lead-to-healthcare-gov-breach.html in 2014 was the result of a development server that was improperly connected to the internet. “Exact details on how the breach occurred were not shared with the public, but sources close to the investigation said that the development server was poorly configured and used default credentials.”

Another notable breach occurred in 2016 – outsourcing company Capgemini https://motherboard.vice.com/en_us/article/vv7qp8/open-database-exposes-millions-of-job-seekers-personal-information exposed the personal information of millions of job seekers when a development server it managed for a client was connected to the internet.

The State of Vermont also saw its health care exchange – Vermont Health Connect – compromised in 2014 https://www.databreachtoday.asia/hackers-are-targeting-health-data-a-7024 when a development server was accessed. The state maintains this was not a breach, because the development server didn’t contain any production data.

So the case lands pretty firmly on one side – internet-facing development servers are a bad idea.

Questions? Comments? What’s your take from the development side? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!


That phone call you dread…

So, you’re a sysadmin, and you get a call from that friend and co-worker…we all know that our buddies don’t call the helpdesk, right?

This person sheepishly admits that they got an email that, in hindsight, looked a bit suspicious. It had an attachment…and they clicked.

Yikes. Now what?

Well, since you’re an EXCELLENT sysadmin, and you work for the best company ever, you’ve done a few things to make sure you’re ready for this day…

  • The company has had a business impact analysis, so all of the relevant policies and procedures are in place.
  • Your backups are in place, offsite, and you know you can restore them with a modicum of effort – and because you’ve done baselines, you know how long it will take to restore.
  • Your team has been doing incident response tabletops, so all of the IR processes are documented and up-to-date. And you set it up to be a good time, so they were fully engaged in the process.

But now, one of your people has clicked…now what, indeed…

  • Pull. The. Plug. Disconnect that system. If it’s hard wired, yank the cord. If it’s on a wifi network, kick it off – take down the whole wifi network if feasible. The productivity that you’ll lose will be outweighed by the gains if you can stop lateral spread of the infection.
  • Pull any devices – external hard drives, USB sticks, etc.
  • DO NOT power the system off – not yet! If you need to do forensics, the live system memory will be important.

Now you can breathe, but just for a minute. This is the time to act with strategy as well as haste. Establish whether you’ve got a virus or ransomware infection, or if the ill-advised click was an attachment of another nature.

If it’s spam, but not malicious:

  • Check the email information in your email administration portal, and see if it was delivered to other users. Notify them as necessary.
  • Evaluate key features of the email – are there changes you should make to your blocking and filtering? Start that process.
  • Parse and evaluate the email headers for IPs and/or domains that should be blocked. See if there are indicators of other emails with these parameters that were blocked or delivered.
  • Add the scenario of this email to your user education program for future educational use.
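For the header-parsing step above, here’s a hedged sketch that pulls candidate IPs from a saved copy of the message (message.eml is a placeholder). Keep in mind that Received headers are often folded across multiple lines and can be forged, so weigh the hops added by your own servers most heavily:

# Extract IPs from Received: headers to evaluate for blocking.
grep -i "^Received:" message.eml | grep -oE "([0-9]{1,3}\.){3}[0-9]{1,3}" | sort -u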

If it’s a real infection, full forensics is beyond the scope of this blog post. But we’ll give a few pointers to get you started.

If it’s a virus, but not ransomware:

  • If the file that was delivered is still accessible, use VirusTotal and other sites to see if it’s known to be malicious. The hash can be checked, as well as the file itself.
  • Consider a full wipe of the affected system, as opposed to a virus removal – unless you’re 100% successful with removal, repeated infection is likely.
  • All drives or devices – network, USB, etc. – that were connected to the system should be suspect. Discard those you can, clean network drives or restore from backup.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.
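For the VirusTotal step above, it’s often better to search on the hash than to upload the file – uploading a targeted attachment can tip off the attacker and may leak sensitive data. A hedged sketch (the filename is a placeholder):

# Compute hashes to search on VirusTotal and similar services
# without uploading the file itself.
shasum -a 256 suspicious_attachment.docm
md5sum suspicious_attachment.docm   # use "md5" on macOS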

If it’s ransomware:

  • Determine what kind of ransomware you are dealing with.
  • Determine the scope of the infection – ancillary devices, network shares, etc.
  • Check to see if a decrypt tool is available – be aware these are not always successful.
  • Paying the ransom, or not, is a business decision – often the ransom payments are not successful, and the files remain encrypted. Address this in your IR plan, so the company policy is defined ahead of time.
  • Restore files from backup.
  • Strongly consider a full wipe of the system, even if the files are decrypted.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.

In all cases, go back and map the attack vector. How did the suspect attachment get in, and how can you prevent it going forward?

What are your thoughts? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

HPSS and Splunk

We’ve had a few users ask how to feed alerts from the HPSS Console into a SIEM. In these cases it was Splunk, so I will show how to quickly get a feed going into Splunk, along with some basic field extractions. I chose Splunk since that’s what I helped the users with, but any SIEM that will take syslog will work.

The first step is to set up the HPSS Console to log events externally. This can be enabled by checking “Enable System Logging” in the preferences window. What happens with the output depends on your OS: on Windows the events are written to the Event Log, and on Linux/macOS they are handled by the syslog daemon. Alternatively, you can use the Console plugin system if syslog/eventlog is not flexible enough.

HPSS Preferences Window

Before we go further, we’ll need to configure Splunk to read in the data, or even set up Splunk if you don’t have an existing system. For this blog post, I used the Splunk Docker image to get it up and running in a couple of minutes in a container.

In Splunk we’ll need to create a “source type”, an “index”, and a “data input” to move the data into the index. To create the source type, I put the following definitions in the local props.conf file located in $SPLUNK_HOME/etc/system/local (you may need to create the props.conf file):

[hpss]
EXTRACT-HPSSAgent = Agent: (?P<Honeypoint_Agent>[^ ]+)
EXTRACT-Attacker_IP = from: (?P<Attacker_IP>[^ ]+)
EXTRACT-Port = on port (?P<Port>[^ ]+)
EXTRACT-Alert_Data = Alert Data: (?P<Alert_Data>.+)
TIME_PREFIX = at\s
MAX_TIMESTAMP_LOOKAHEAD = 200
TIME_FORMAT = %Y-%m-%d %H:%M:%S

This tells Splunk how to extract the data from the event. You can also define this in the Splunk web interface by going to Settings -> Source Types and creating a new source type.
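To make the extractions concrete, a hypothetical event along these lines (the exact HPSS message format may differ slightly) would yield Honeypoint_Agent=WebHP-1, Attacker_IP=203.0.113.7, Port=8080, and the trailing alert data:

HPSS Agent: WebHP-1 received alert from: 203.0.113.7 on port 8080 at 2018-09-05 14:02:17 Alert Data: GET /admin HTTP/1.1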

Source Type definition

Next create the Index under Settings -> Indexes. Just giving the index a name and leaving everything default will work fine to get started. 

To create a Data Input, go to Settings -> Data Inputs.  I’m going to set it up to directly ingest the data through a TCP socket, but if you already have a setup to read files from a centralized logging system, then feel free to use that instead.

Set the port and protocol to whatever you would like.

For the source type, manually typing in “hpss” (or whatever you named it) should bring up the already defined source type. Select that, and everything else can remain as is. Then go to review and finish. It’s now ready for you to ship the events to it.

Lastly, we need to get the logs from the Console system to Splunk. Again, this will differ depending on your OS. I will show one way to do this on Windows and one for Linux. However, there are numerous ways to do it. In both cases, replace the IP and Port of your Splunk instance.

On Windows you can use NXLog or another eventlog-to-syslog shipper. After installing NXLog, add the following to the configuration file.

define ROOT C:\Program Files\nxlog
#define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

<Input in>
Module im_msvistalog
Query <QueryList>\
<Query Id="0">\
<Select Path="HPConsole">*</Select>\
</Query>\
</QueryList>
SavePos TRUE

</Input>

<Output out>
Module om_udp
Host 192.168.232.6
Port 1514
</Output>

<Route 1>
Path in => out
</Route>

On Linux with rsyslog, create a conf file (for example, under /etc/rsyslog.d/) containing the following line. Note that @@ forwards via TCP; use a single @ for UDP, matching whichever protocol you chose for the Splunk data input.

:msg,contains,"HPSS Agent" @@192.168.232.6:1514

Now Splunk should be receiving any HPSS events sent to it and storing them in the defined index, and extracting the fields during search queries.
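Once events are flowing, a quick search will confirm the field extractions and give a first look at who is probing what. A minimal sketch, assuming you named the index hpss:

index=hpss sourcetype=hpss
| stats count by Honeypoint_Agent, Attacker_IP, Port
| sort -count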

In the future we can look at creating some graphs and analyze the events received. If there is any interest, I can look at creating a Splunk app to configure all of this for you.

Last Quick and Dirty Log Tip for the Week

OK, so this week I posted two other blog posts about doing quick and dirty log analysis and some of the techniques I use, including converting column-based logs to CSV.

After the great response, I wanted to drop one last tip for the week. 

Several folks asked me about re-sorting and processing the column-based data in different ways and to achieve different analytical views. 

Let me re-introduce you to my friend and yours, sort.

In this case, instead of using the sort -n -r like before (numeric sort, reverse order), we can use:

  • sort -k# -n input_file (where # is the number of the column you’d like to sort by, and input_file is the name of the file to sort)
    • You can use this inline by leveraging the pipe (|) again – i.e.: cat input.txt | sort -k3 -n (this types the input file and sends it to sort, which sorts on the third column in numeric order; -r would, of course, reverse it…)
    • You can write the output of this to a file with a redirect “> filename.txt”, i.e.: cat input.txt | sort -k3 -n -r > output.txt
      • You could also use “>>” as the redirect in order to create the file if it doesn’t exist OR append to it if it does exist… i.e.: cat input.txt | sort -k3 -n -r >> appended_output.txt
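As a worked sketch, here’s how that re-sort might look against the hit-rate file from the earlier Palo Alto post (the file name and column layout come from that post – the uniq -c count is column 1, so the destination port lands in column 4):

# Re-sort the source/dest/port hit-rate list by destination port
# instead of by hit count.
cat hitrate_by_rate.txt | sort -k4 -n > hitrate_by_port.txt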

That’s it! It’s been a fun week sharing some simple command line processing tips for log files. Drop me a line on Twitter (@lbhuston) and let me know what you used them for, or which ones are your favorite. As always, thanks and have a great weekend! 

Quick And Dirty Log Analysis Followup

Earlier this week, I posted some tips for doing Quick and Dirty PA Firewall Log Analysis.

After I posted this, I got a very common question, and I wanted to answer it here.

The question is something along the lines of “When I use the techniques from your post, the outputs of the commands are column-separated data. I need them to be CSV to use with my (tool/SIEM/Aunt Gracie/whatever). How can I convert them?” Sound familiar?

OK, so how do we accomplish this feat at the command line, without all of the workarounds that people posted, and without EVER loading Excel? Thankfully, we can use awk again for this.

We can use:

  • awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • Basically, take an input of column data and print out the columns we want (any columns will do; in this case I want the first 3), with the output comma-delimited.
    • We can just append this to our other command stacks with another pipe (|) to get our output as CSV.
  • Example: cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r | awk 'BEGIN { OFS = ","} ; {print $1,$2,$3}'
    • In this example, the source IP and destination IP are analyzed and reduced to unique pairs, along with the number of times each pair appears in the input log (I use this as a “hit rate”, as I described earlier).
      • A common question: why do I ask for two columns in the first awk, but three columns in the second awk?
        • The answer, of course, is that the first awk prints the unique pairs, but uniq -c then prepends a “hit rate” column, so to get the output appropriately, the second awk needs all three fields.
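To make the shape of the result concrete, here’s a hedged sketch of the final CSV with made-up values – the first column is the hit count that uniq -c prepended, followed by the source and destination IPs:

1842,10.0.0.5,203.0.113.10
907,10.0.0.12,203.0.113.44
63,10.0.0.9,198.51.100.2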

So, once again, get to know awk. It is your friend. :)

PS – Yes, I know, there are hundreds of other ways to get this same data, in the same format, using other command line text processing tools. Many may even be less redundant than the commands above. BUT, this is how I did it. I think it makes it easy for people to get started and play with the data. Post your ways to Twitter or share with the community. Exploration is awesome, so it will encourage users to play more. Cool! Hit me on Twitter if you wanna share some or talk more about this approach (@lbhuston).

Thanks for reading!

Quick & Dirty Palo Alto Log Analysis

OK, so I needed to do some quick and dirty traffic analysis on Palo Alto text logs for a project I was working on. The Palo Alto is great and their console tools are nice. Panorama is not too shabby. But, when I need quick and dirty analysis and want to play with data, I dig into the logs. 
 
That said, for my quick analysis, I needed to analyze a bunch of text logs and model the traffic flows. To do that, I used simple command line text processing in Unix (Mac OS, but with tweaks also works in Linux, etc.)
 
I am sharing some of my notes and some of the useful command lines to help others who might be facing a similar need.
 
First, for my project, I made use of the following field #’s in the text analysis, pulled from the log header for sequence:
  • $8 (source IP) 
  • $9 (dest IP)
  • $26 (dest port)
  • $15 (AppID)
  • $32 (bytes)
 
Once I knew the fields that corresponded to the values I wanted to study, I started using the core power of command line text processing. And in this case, the power I needed was:
  • cat
  • grep
    • Including the ever-useful grep -v (inverse grep: show me the lines that don’t match my pattern)
  • awk
    • particularly: awk 'BEGIN { FS = ","} ; {print $x, $y}', which prints specific columns in CSV files
  • sort
    • sort -n (numeric sort)
    • sort -r (reverse sort, descending)
  • uniq
    • uniq -c (count the numbers of duplicates, used for determining “hit rates” or frequency, etc.)
 
Of course, to learn more about these commands, simply man (command name) and read the details. 😃 
 
OK, so I will get you started, here are a few of the more useful command lines I used for my quick and dirty analysis:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9,$26}' | sort | uniq -c | sort -n -r > hitrate_by_rate.txt
    • this one produces a list of unique Source IP/Dest IP/Dest Port combinations, sorted in descending order by the number of times they appear in the log
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $8,$9}' | sort -n | uniq -c | sort -n -r > uniqpairs_by_hitrate.txt
    • this one produces a list of the unique Source & Destination IP addresses, in descending order by how many times they talk to each other in the log (note that reversed pairings are counted separately – if A talks to B, there will be an entry for that, but if B initiates conversations with A, that will be a separate line in this data set)
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $15}' | sort | uniq -c | sort -n -r > appID_by_hitrate.txt
    • this one uses the same exact techniques, but now we are looking at which applications the firewall has identified, in descending order by the number of times that application identifier appears in the log
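Following the same pattern, a hedged example for destination ports ($26), which is handy for spotting odd services in a hurry:
  • cat log.csv | awk 'BEGIN { FS = ","} ; {print $26}' | sort -n | uniq -c | sort -n -r > destports_by_hitrate.txt
    • this one produces the destination ports seen in the log, in descending order by how often each was hit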
 
Again, these are simple examples, but you can tweak and expand as you need. This trivial approach to command line text analysis certainly helps with logs and traffic data. You can use those same commands to do a wondrous amount of textual analysis and processing. Learn them, live them, love them. 😃 
 
If you have questions, or want to share some of the ways you use those commands, please drop us a line on Twitter (@microsolved) or hit me up personally for other ideas (@lbhuston). As always, thanks for reading and stay safe out there! 

An Exercise to Increase IT/OT Engagement & Cooperation

Just a quick thought on an exercise to increase the cooperation, trust, and engagement between traditional IT and OT (operational technology: ICS/SCADA) teams. It likely applies to just about any two technical teams, though – IT and development, for example.

Here’s the idea: Host a Hack-a-thon!

It might look something like this:

  • Invest in some abundant kits of littleBits. These are like Legos with electronics, mechanical circuits, and even Arduino/cloud controllers built in. Easy, safe, smart, and fun!
  • Put all of the technical staff in a room together for a day. Physically together. Ban all cell phones, calls, emails, etc. for the day – get people to engage. Cater in meals so they can eat together and develop rapport.
  • Split the folks into two or more teams of equal size, mixing IT and OT team members; each team will need both skill sets (digital and mechanical knowledge) anyway.
  • Create a mission: over the next 8 hours, each team will compete to see who can use their kit to design, program, and prototype a solution to a significant problem faced in their everyday work environments.
  • Provide a prize for the 1st and 2nd place teams. Reach deep – really motivate them!
  • Let the teams go through the process of discussing their challenges to find the right problem, then have them draw out their proposed solution.
  • After lunch, have the teams discuss the problems they chose and their suggested fixes. Then have them build it with the littleBits.
  • Right before the end of the day, hold the judging and award the prizes.

Then, 30 days later, have a conference call with the group. Have them again discuss the challenges they face together, and see if common solutions emerge. If so, implement them.

Do this a couple times a year, maybe using something like Legos, Raspberry Pis, Arduinos or just whiteboards and markers. Let them have fun, vent their frustrations and actively engage with one another. The results will likely astound you.

How does your company further IT/OT engagement? Let us know on Twitter (@microsolved) or drop me a line personally (@lbhuston). Thanks for reading! 

Emulating SIP with HoneyPoint

Last week, Hos and I worked on identifying how to emulate a SIP endpoint with HoneyPoint Security Server. We identified an easy way to do it using the BasicTCP capability. This emulation component emulates a basic TCP service and behaves in the following manner:

  • Listens for connections
  • Upon connection, logs the connection details
  • Sends the banner file and awaits a response
  • Upon response, logs the response data
  • Sends the response, repeating the wait and log loop, resending the response to every request
  • When the connection limit is reached, it closes the connection

It has two associated files for the emulation:
  • The banner file – “banner”
  • The response file – “response”

In our testing, we were able to closely emulate a SIP connection by creating a banner file that was blank or contained only a CR/LF, and then adding the appropriate SIP messaging to the response file. This emulates a service where the connection is completed and logged, and the system appears to wait for input. Once input is received, a SIP message is delivered to the client. In our testing, the SIP tools we worked with accepted the emulation as a SIP server and did not flag any anomalies.

I’ll leave the actual SIP messaging as a research project for the reader, to preserve some anonymity for HPSS users. But, if you are an HPSS user and would like to do this, contact support and we will provide you with the specific messaging that we used in our testing.

As always, thanks for reading and especially thanks for being interested in HoneyPoint. We are prepping the next release, and I think you will be blown away by some of the new features and the updates to the documentation. We have been hard at work on this for a while, and I can’t wait to share it with you shortly!