In this episode of the MSI podcast, we discuss recent incidents caused by AWS misconfigurations, the most common problems we see, the importance of proper configuration in avoiding these issues, and how we can help you identify them in your environment.
We’ve talked about development servers and the perils of internet-facing development environments. Now, let’s talk about what is IN your development environment.
Another issue we run into fairly often with dev environments is that they are set up to use production data, and sometimes this data is piped in directly at night with no modification. This introduces the risk of not only exposing that data through vulnerabilities in the development environment, but also of allowing a contractor or unauthorized employee to view sensitive information.
On many assessments in the past we have found and accessed this data through various mechanisms. In some cases the production data remained but “test” users with weak passwords were able to authenticate. In other cases, system misconfigurations or missing patches allowed access to the application and the real data inside it. Developers also might be leaving production data, or fragments of it, on their laptops, which offers yet another way for that data to be exposed.
So if you are currently using production data to test development environments, what can be done? Encrypting the database and the fields containing PII certainly helps. There’s no one-size-fits-all solution, but here are a few suggestions that can be used depending on the nature of the data and the application’s requirements. Care must be taken to make sure that data that needs to pass checksum tests (such as credit card numbers) will still pass, without modifying the application code.
Examples of what can be done so there isn’t sensitive data in test environments:
- Apply Data Masking to the real data. Data Masking is changing data so that it keeps the structure of production data but isn’t real. If you use Oracle, the Enterprise Edition has a built-in feature for this, the “Oracle Data Masking Pack”. SQL Server 2016 has a similar feature named “Dynamic Data Masking”.
- Use scripts to generate fake data
- Maintain a curated database of invalid data
- If tests require real data, ensure that at least all PII or other equally sensitive data is masked and encrypted
- Don’t forget to use different environment variables, such as a different database password
These are just some examples of what can be done to reduce the risk of sensitive data being leaked. Data Masking is often the most viable solution, as it is now being built into many databases. You can also look at tools such as Mockaroo, which help with generating test data.
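As a rough illustration, here is a minimal sketch of generating structurally realistic but entirely fake records for a test load. It uses the Python Faker library; the column names, row count and output file are hypothetical examples, not part of any real schema.

# Minimal sketch: generate fake-but-realistic test records with the Faker library.
# Assumes "pip install faker"; the column names and row count are arbitrary examples.
import csv
from faker import Faker

fake = Faker()

with open("test_customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email", "phone", "card_number"])
    for _ in range(1000):
        writer.writerow([
            fake.name(),
            fake.email(),
            fake.phone_number(),
            fake.credit_card_number(),  # Faker produces Luhn-valid numbers, so checksum tests still pass
        ])

A script like this could run as part of a nightly refresh instead of copying production data over.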
Most businesses have processes and policies for handling sensitive data on paper, whether that’s selectively shredding papers or shredding everything, along with training about what goes in trash bins and what goes in shredding bins. However, how many are ensuring that these policies and processes are being followed? Brent asked:
When was the last time you did a dumpster dive or wardial against your organization? You know these old school tactics still work, right? Yeah….
— L. Brent Huston (@lbhuston) June 27, 2018
Which got me thinking about this. I couldn’t remember the last time an organization actually asked us about it beyond reviewing policies. I know this problem didn’t disappear, even as we move more and more away from paper. Paper still gets used, people write stuff down, things get printed, and no solution completely ensures that paper doesn’t end up in the wrong bin. I know from doing it: I found something useful in almost every engagement that we’ve done in the past, whether it was an administrative password or contact information that I could use for phishing.
Recently, some researchers performed a trash inspection of some hospitals in Toronto. What they found didn’t surprise me: a good bit of PII and PHI. A resident in Palolo, Hawaii found the same. A nuclear security complex was found to be dumping trash that had classified documents in it. None of these were reported breaches; the data was just there for the taking. Who knows if anyone malicious found it too?
Let’s keep working on the most prevalent topics of the day, such as phishing defense and training, but we can’t forget the things that were an issue in the past. They’re still an issue now, even if they’re not making the big headlines at the moment.
Many organizations have embraced cloud platforms such as Amazon AWS or Microsoft Azure, whether they are using them for just a few services or have moved part or all of their infrastructure there. No matter the service though, configuration isn’t foolproof, and every platform requires specific knowledge to configure in a secure way.
In some cases we have seen these services configured in ways that don’t follow best practice, which led to exposure of sensitive information or compromise through services that should not have been exposed. In many instances there are at least some areas that can be hardened, or features that can be enabled, to reduce risk and improve monitoring capabilities.
So, what should you be doing? We’ll take a look at Amazon AWS today, and some of the top issues.
One issue that is seemingly pervasive is inappropriate permissions on S3 buckets. A search for S3 incidents will turn up numerous stories about companies exposing sensitive data due to improper configuration. How can you prevent that?
Firstly, when creating your buckets, consider your permissions very carefully. If you want to publicly share data from a bucket, consider granting ‘Everyone’ read permissions on the specific objects instead of the entire bucket. Never allow the ‘Everyone’ group to have write permissions, either on the bucket or on individual objects. The ‘Everyone’ group applies literally to everyone, your employees and any attackers alike.
Secondly, take advantage of S3’s access logging capability, and monitor the logs. This will help identify any inappropriately accessed resources, whether through inadvertently exposed buckets or through misuse of authorization internally.
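As a rough starting point, a short script along these lines can flag buckets that grant access to ‘Everyone’ or that have no access logging enabled. This is a sketch assuming boto3 and credentials with read access to bucket ACLs and logging settings, not a complete audit.

# Sketch: flag S3 buckets with "Everyone" (AllUsers) grants or without access logging.
# Assumes boto3 is installed and credentials allow s3:GetBucketAcl and s3:GetBucketLogging.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [g["Permission"] for g in acl["Grants"]
                     if g["Grantee"].get("URI") == ALL_USERS_URI]
    if public_grants:
        print(f"{name}: grants to Everyone -> {public_grants}")
    if "LoggingEnabled" not in s3.get_bucket_logging(Bucket=name):
        print(f"{name}: access logging is not enabled")

Treat this as a quick check; bucket policies and IAM policies also need to be reviewed.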
Another common issue is ports unnecessarily exposed on EC2 resources. This happens through misconfigured VPC NACLs or Security Groups, which act as a firewall; we sometimes find them configured to allow inbound traffic to any port from any IP. NACLs and Security Groups should be configured to allow the least amount of traffic possible to the destination, restricting by port and by IP. Don’t forget about restricting outbound traffic as well. For example, your database server probably only needs to talk to the web server and system update servers.
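To get a quick picture of what is exposed, something like the following sketch (again assuming boto3 and read-only EC2 permissions) lists security group rules that allow inbound traffic from anywhere:

# Sketch: list security group rules that allow inbound traffic from 0.0.0.0/0.
# Assumes boto3 is installed and credentials allow ec2:DescribeSecurityGroups.
import boto3

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")  # "all traffic" rules have no FromPort
                print(f"{sg['GroupId']} ({sg['GroupName']}): port {port} open to the world")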
The last issue we’ll discuss today is IAM, the Identity and Access Management interface. Firstly, you should be using IAM to configure users and access instead of sharing the root account among everyone. Secondly, make sure IAM users and keys are configured correctly, with the least amount of privileges necessary for that particular user. I also recommend requiring multi-factor authentication, particularly on the root account, on any users in the PowerUsers or Admins groups, and on any groups you have with similar permissions.
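A quick way to spot accounts missing that second factor is a sketch like this (assuming boto3 and iam:ListUsers / iam:ListMFADevices permissions), which prints every IAM user without an MFA device:

# Sketch: list IAM users that have no MFA device configured.
# Assumes boto3 is installed and credentials allow iam:ListUsers and iam:ListMFADevices.
import boto3

iam = boto3.client("iam")
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"{user['UserName']} has no MFA device configured")

Note that this covers IAM users only; check the root account’s MFA status separately.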
That’s all for today. And remember, the good news here is that you can configure these systems and services to be as secure as what is sitting on your local network.
How closely do you inspect the 3rd party plugins and libraries you use in your software and development process? We tend to take for granted that once we vet a library or plugin and add it into our workflow, it’s likely to never be a threat in the future. However, over the last few years attackers have increasingly abused this trust. It’s a type of watering hole attack that reaches a larger number of people than a typical focused attack.
There are three main ways this has happened:
- Attackers buy the plugin/library from the original author or assume control of one that is abandoned
- A development system, or another system in the distribution chain, is compromised by an attacker
- Attackers create releases that mimic popular and established libraries/plugins in name and function
This has affected a wide range of software, from web applications to web browsers to text editors/IDEs. Let’s take a brief look at a few instances.
The WordPress Display Widgets plugin. This plugin was sold by the original developer, and at that time had several hundred thousand active installations. The new developers then added code to it that downloaded and installed a plugin that added spam to the site.
The Node.js package repository was found to have malicious packages that looked like real packages, differing slightly in the name in an attempt to fool anyone trying to find specific packages. The malicious packages generally tried to send sensitive environment data back to a server.
Python also experienced something very similar: packages were uploaded in an attempt to fool anyone not paying close enough attention, relying not just on misspelled names but on names that look legitimate, such as “bzip” for the real package “bz2file”.
Those are just a few examples; Chrome and Firefox have both had similar issues multiple times as well. So how do we protect against this? Part of the responsibility has to fall on the software that allows the plugins to run; there are some bad policies and practices here, such as WordPress letting anyone claim “abandoned” plugins.
Some things you can do yourself: install libraries/plugins (for Python, Ruby, etc.) with your system’s package manager where possible. If you use pip, gem or the like, make sure you are installing the correct packages, and avoid misspellings or “close enough” names. Check the reputation of plugins via Twitter, or search for reviews and info on plugins, by name, in your search engine. If you find any anomalies, report them on social media and the forums associated with your language. Try not to use plugins that have been abandoned and picked up by another developer with no reputation.
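One small, hedged example of automating the name check: the sketch below compares the package names in a requirements file against a curated allowlist and flags near misses. The allowlist contents and file name are just placeholders.

# Sketch: flag dependency names that are not on an approved list, or that look
# suspiciously close to an approved name (possible typosquatting).
import difflib

APPROVED = {"requests", "flask", "sqlalchemy", "bz2file"}  # hypothetical allowlist

with open("requirements.txt") as f:
    wanted = [line.strip().split("==")[0].lower() for line in f
              if line.strip() and not line.startswith("#")]

for name in wanted:
    if name in APPROVED:
        continue
    close = difflib.get_close_matches(name, APPROVED, n=1, cutoff=0.8)
    if close:
        print(f"'{name}' is not approved but looks like '{close[0]}' -- possible typosquat")
    else:
        print(f"'{name}' is not on the approved list -- review before installing")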
MSI is proud to announce the immediate availability of the HoneyPoint Console version 4.1!
The new version of the Console for HPSS is now available for Windows, Linux and Mac OS X.
The new Console includes the ability to bypass local event logging and instead send the events directly to syslog or to be processed by the plugins. This allows the Console to work with a SIEM, other monitoring tools, or any centralized log management system without worrying about managing the local event database. Several improvements in the GUI console have been made, the ability to test email servers has been added, and multiple bugs have been addressed.
To obtain the new Console files or installer, refer to your QuickStart Guide for instructions on accessing the HoneyPoint Security Server distribution site. No changes to the database or license key are required; however, you must have a current license to qualify for the upgrade. An in-place upgrade can be performed, or on Windows the installer can handle the upgrade. As always, we recommend backing up the database and any plugins and logs before upgrading.
Thanks, as always, for choosing HoneyPoint Security Server and MSI. We value your partnership and trust.
We’ve had a few users ask how to feed alerts from the HPSS Console into a SIEM. In these cases it was Splunk, so I will show how to quickly get a feed going into Splunk along with some basic visualizations. I chose Splunk since that’s what I helped the users with, but any SIEM that will accept syslog will work.
The first step is to set up the HPSS Console to log events externally. This can be enabled by checking the “Enable System Logging” option in the preferences window. What happens with the output depends on your OS: on Windows the events are written to the Event Log, and on Linux/Mac OS X they are handled by the syslog daemon. Alternatively, you can use the Console plugins system if syslog/eventlog is not flexible enough.
Before we go further, we’ll need to configure Splunk to read in the data, or even set up Splunk if you don’t have an existing system. For this blog post, I used the Splunk Docker image to get it up and running in a container in a couple of minutes.
In Splunk we’ll need to create a “source type”, an “index” and a “data input” to move the data into the index. To create the source type, I put the following definitions in the local props.conf file located in $SPLUNK_HOME/etc/system/local (you may need to create the props.conf file)
[hpss]
EXTRACT-HPSSAgent = Agent: (?P<Honeypoint_Agent>[^ ]+)
EXTRACT-Attacker_IP = from: (?P<Attacker_IP>[^ ]+)
EXTRACT-Port = on port (?P<Port>[^ ]+)
EXTRACT-Alert_Data = Alert Data: (?P<Alert_Data>.+)
TIME_PREFIX = at\s
MAX_TIMESTAMP_LOOKAHEAD = 200
TIME_FORMAT = %Y-%m-%d %H:%M:%S
This tells Splunk how to extract the data from the event. You can also define this in the Splunk web interface by going to Settings -> Source Types and creating a new source type.
To create a Data Input, go to Settings -> Data Inputs. I’m going to set it up to directly ingest the data through a TCP socket, but if you already have a setup to read files from a centralized logging system, then feel free to use that instead.
For the source type, manually typing in “hpss” (or whatever you named it) should bring up the already defined source type. Select that, and everything else can remain as is. Then go to review and finish. It’s now ready for you to ship events to it.
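If you want to sanity check the pipeline before pointing the real Console at it, a quick sketch like this pushes one hand-crafted line into the TCP data input. The message only mimics the fields that the props.conf stanza above extracts; the actual HPSS event format may differ, and the host and port are placeholders.

# Sketch: send one test line to the Splunk TCP data input to verify field extraction.
# The host, port and message contents below are placeholder examples.
import socket

SPLUNK_HOST = "192.168.232.6"  # your Splunk instance
SPLUNK_PORT = 1514             # the TCP data input port you created

test_event = ("HPSS Agent: test-agent alert from: 203.0.113.10 "
              "on port 22 at 2018-06-27 12:00:00 Alert Data: test connection\n")

with socket.create_connection((SPLUNK_HOST, SPLUNK_PORT)) as s:
    s.sendall(test_event.encode("utf-8"))

Search the index afterwards and confirm that Honeypoint_Agent, Attacker_IP, Port and Alert_Data are all populated.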
Lastly, we need to get the logs from the Console system to Splunk. Again, this will differ depending on your OS. I will show one way to do this on Windows and one for Linux; however, there are numerous ways to do it. In both cases, replace the IP and port with those of your Splunk instance.
On Windows you can use NXLog or another eventlog-to-syslog shipper. After installing NXLog, add the following to the configuration file.
define ROOT C:\Program Files\nxlog
#define ROOT C:\Program Files (x86)\nxlog
# "in" and "out" are Input/Output blocks defined elsewhere (e.g. im_msvistalog and om_tcp to Splunk)
<Route eventlog_to_splunk>
    Path in => out
</Route>
On Linux with rsyslog, create a conf file (for example under /etc/rsyslog.d/) with the following:
:msg,contains,"HPSS Agent" @@192.168.232.6:1514
Now Splunk should be receiving any HPSS events sent to it, storing them in the defined index, and extracting the fields during search queries.
In the future we can look at creating some graphs and analyzing the events received. If there is any interest, I can look at creating a Splunk app to configure all of this for you.
For this week’s tool review, we’re looking at Splunk. Splunk is a log collection engine at heart, but it’s really more than that. Think of it as a search engine for your IT infrastructure. Splunk will collect and index almost anything you can throw at it, and this is what made me want to explore it.
Setting up your Splunk server is easy; there are installers for every major OS. Run the installer, visit the web front end, and you are in business. Set up any collection sources you need; I started off with syslog, starting a listener in Splunk and then forwarding my sources to it (I used syslog-ng for this). Splunk will also easily do WMI polling, monitor local files, perform change monitoring, or run scripts to generate any data you want. Some data sources require running Splunk as an agent, but it goes easy on system resources since the GUI is turned off. Installing an agent is exactly the same process; you just disable the GUI when you’re finished setting up, and you can still control Splunk through the command line.
Splunk can also run add-ons in the form of apps. These are plugins designed to collect and display certain information. There are quite a few, provided by both the Splunk team and third parties. I found the system monitoring tools to be very helpful; there are scripts for both Windows and Unix, although in this instance it does require running clients on the systems. There are also apps designed for Blue Coat, Cisco Security and more.
In my time using Splunk, I’ve found it to be a great tool for watching logs for security issues (brute forcing of SSH accounts, for example). It was also useful in fine-tuning my egress filtering, as I could instantly see what was being blocked by the firewall, and of course the system monitoring aspects are useful. It could find a home in any organization; it plays nicely with other tools or could happily be your main log aggregation system.
Splunk comes in two flavors, free and professional, and there’s not a great difference between them. The biggest difference is that the free version is limited to 500MB of indexing per day, which proves to be more than enough for most small businesses and for testing in larger environments. Stepping up to the professional version is easier on the pockets than you might expect, at only about $3,000.
Netsparker Professional Edition, by Mavituna Security, is a web application scanner focused on finding unknown flaws in your applications. It can find a wide range of vulnerabilities including SQL injection, cross-site scripting, local and remote file inclusion, command injection and more.
Installation of the software was easy, and as Mavituna Security touts, the licensing is non-obtrusive. Starting the application, you are presented with a nice, well-designed GUI that shows quite a lot of information. Starting a scan can be as simple as just putting in a URL, which makes it very easy for non-security professionals to set up and use. There are also profiles you can configure and save, and it’s possible to configure a form login through a very well designed wizard.
The main draw of Netsparker is the confirmation engine, which is how Netsparker claims to be false-positive free. The confirmation engine takes the vulnerability and actually confirms that it’s exploitable; if it’s exploitable, it’s definitely not a false positive. A neat feature for identified SQL injection vulnerabilities is that Netsparker lets you exploit them right through the scanner. You can run SQL queries, or even open a shell (depending on the DB and its configuration). Directory traversal vulnerabilities can be exploited to download the whole source of the application, since Netsparker already knows all the files, and other system files can also be retrieved and saved through the interface.
We set Netsparker to scan our web application lab, which contains known vulnerabilities covering the OWASP Top Ten Project. We noticed that Netsparker did a very good job of spidering and finding a high number of attack surfaces. On vulnerabilities, Netsparker did a great job of finding SQL injection, cross-site scripting, and directory traversal flaws. On one vulnerability, I thought I might have made Netsparker report a confirmed false positive, but it turned out I was wrong: after I used the built-in query maker and ran a query, I got data back.
Overall I think Netsparker is an excellent tool, especially effective at finding SQL injection and cross-site scripting issues. Of course, I wouldn’t say it’s the only scanner you should have, but definitely consider adding it to your repertoire.
McAfee’s anti-virus update for today (5958 DAT, April 21, 2010) is causing systems to be stuck in an infinite reboot cycle. If your systems have not updated yet, it is highly recommended to prevent them from doing so: disable automatic updates and any pending update tasks.
The issue comes from the update detecting a false positive on systems. It appears that only Windows XP SP3 systems are affected. McAfee detects the false positive in the file C:\WINDOWS\system32\svchost.exe and thinks it contains the W32/Wecorl.a virus. The machine then enters a reboot cycle.
McAfee has released a temporary fix to suppress the false positive. To use the fix with VirusScan Enterprise Console 8.5i or higher, Access Protection must first be disabled by following this knowledge base article here. (Alternate Google cache page here, as the site is very busy.)
To correct a machine with this issue, follow these steps:
1. Download the EXTRA.DAT file here. (Or from the KB article)
2. Start the affected machine in Safe Mode
3. Copy the EXTRA.DAT file to the following location:
4. Remove svchost.exe from the quarantine.