Twitter Games from MicroSolved

If you haven’t followed us on Twitter (@microsolved) yet, be sure to do so. Here are a few reasons why you should look to our Twitter feed for more great content from MSI:

  • Ongoing curated news feeds of some of the most interesting and best information security news & event coverage
  • Discussions of emerging threats and significant issues around InfoSec
  • Pointers to free tools & resources to help your team protect your data & systems
  • Easy way to talk to us & engage in pro-bono Q&A sessions
  • AND NOW – 2 New Games a week:
    • Mondays will feature the “Hacker Challenge” – a weekly, technically focused activity or challenge (decrypt a secret, solve a puzzle, find something specific across the net, etc.)
    • Thursdays will feature the “Throw Back Thursday Hacker Trivia” – a weekly trivia contest focused on hacker culture, InfoSec and technology, with occasional prizes for the winners!

So, grab an account on Twitter or follow us there, and don’t just keep up to date, but talk to us. We want to hear your thoughts, the security challenges you are facing and anything that will help us serve your information security needs. Plus, we know reading log files and patching systems can get tedious, so we will try to mix in a little fun along the way! See you there!

Best Practices for DNS Security

I wanted to share with you a great FREE resource that I found on the Cisco web site that details a great deal of information about DNS and the best practices around securing it. While the content is, naturally, heavy on Cisco products and commands, the general information, overview and many of the ideas in the article are very useful for network and security admins learning the basics of DNS.

Additionally, there are great resources listed, including several free/open source tools that can be used to manage and monitor DNS servers. 

If you are interested in learning more about DNS or need a quick refresher, check this article out. 

You can find it here.

Several other resources are available around the web, but this seems to be one of the best summaries I have seen. As always, thanks for reading and let me know on Twitter (@lbhuston) if you have other favorite resources that you would like to share.

Sources for Tor Access Tools

As a follow-up to my posts over the last couple of weeks about Tor and the research I am doing within the Tor network, I presented at the Central Ohio ISSA Security Summit on the topic of Tor Hidden Services. The audience asked some great questions, and today I wanted to post some links so folks can explore the Tor network on their own in as safe a manner as possible.

The following is a set of links for gaining access to the Tor network and a couple of links to get people started exploring Tor Hidden Services.  (Note: Be careful out there, remember, this is the ghetto of the Internet and your paranoia may vary…)

Once you get into the Tor network, here are a couple of hidden service URLs to get you started:

http://kpvz7ki2v5agwt35.onion – Original hidden wiki site

http://3g2upl4pq6kufc4m.onion/ – Duck Duck Go search engine

http://kbhpodhnfxl3clb4.onion – “Tor Search” search engine
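For anyone scripting their own exploration, a quick format sanity check can help catch mangled onion URLs before pasting them anywhere. This sketch assumes the standard base32 onion address formats (16 characters for the v2 addresses above; later v3 addresses are 56 characters); it checks shape only, and says nothing about whether a service exists or is safe to visit:

```python
import re

# Onion addresses use the base32 alphabet (a-z, 2-7). v2 addresses
# are 16 characters before ".onion"; v3 addresses are 56 characters.
ONION_RE = re.compile(r"^(?:https?://)?([a-z2-7]{16}|[a-z2-7]{56})\.onion/?$")

def looks_like_onion(url: str) -> bool:
    """Return True if the URL matches the v2/v3 .onion address shape."""
    return ONION_RE.match(url.strip()) is not None
```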

As always, thanks for reading and stay safe out there! 

Tool Review: Lynis

Recently, I took a look at Lynis, an open source system and security auditing tool. It is a popular local scanning tool for Linux systems.

Here is the description from their site:
Lynis is an auditing tool for Unix/Linux. It performs a security scan and determines the hardening state of the machine. Any detected security issues will be provided in the form of a suggestion or warning. Beside security related information it will also scan for general system information, installed packages and possible configuration errors.

This software aims in assisting automated auditing, hardening, software patch management, vulnerability and malware scanning of Unix/Linux based systems. It can be run without prior installation, so inclusion on read only storage is possible (USB stick, cd/dvd).

Lynis assists auditors in performing Basel II, GLBA, HIPAA, PCI DSS and SOx (Sarbanes-Oxley) compliance audits.

Intended audience:
Security specialists, penetration testers, system auditors, system/network managers.

Examples of audit tests:
– Available authentication methods
– Expired SSL certificates
– Outdated software
– User accounts without password
– Incorrect file permissions
– Configuration errors
– Firewall auditing 

As you can see, it has a wide range of capabilities. It is a handy tool, and while the reporting is fairly basic, it is very useful.

Our testing went well, and overall, we were pleased at the level of detail the tool provides. We wouldn’t use it as our only Linux auditing tool, but it is a very handy addition to the toolbox. Scan runs completed at an adequate speed, and when we seeded the configs with common errors, the tool was quick to flag them. 
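If you want to post-process results, Lynis also writes a machine-readable report in key=value form (commonly /var/log/lynis-report.dat, though the location and exact key names may vary by version). A small parser sketch, with the sample data invented for illustration:

```python
def parse_lynis_report(text: str) -> dict:
    """Parse Lynis' key=value report format into a dict of lists.
    Keys can repeat (e.g. multiple warning[] entries), so every key
    maps to a list of values. Comment lines start with '#'.
    """
    results = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        results.setdefault(key, []).append(value)
    return results

# Invented sample in the report-file style, for illustration only.
sample = """# Lynis report
lynis_version=1.3.0
warning[]=AUTH-9283|No password set for single user mode|
suggestion[]=SSH-7408|Consider hardening the SSH configuration|
"""
report = parse_lynis_report(sample)
```

From here it is easy to pull out just the warning[] entries and feed them into whatever ticketing or reporting process you already use.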

Overall, we would give it a “not too shabby”. 🙂 The advice is still a bit technical for basic users, but then, do you want basic users administering a production box anyway? For true admins, the tool is perfectly adequate at telling them what to do and how to go about doing it, when it comes to hardening their systems.

Give Lynis a try and let me know what you think. You can give me feedback, kudos or insults on Twitter (@lbhuston). As always, thanks for reading! 

Infosec Tricks & Treats

Happy Halloween!

This time around, we thought we’d offer up a couple of infosec tricks and treats for your browsing pleasure. Around MSI, we LOVE Halloween! We dress up like hackers, bees and hippies. Of course, we do that most other days too… 🙂

Here are a couple of tricks for you for this Halloween:

Columbia University gives you some good tricks on how to do common security tasks here.

University of Colorado gives you some password tricks here.

and The Moneypit even provides some tricks on cheap home security here.  

And now for the TREATS!!!!!

Here are some of our favorite free tools from around the web:

Wireshark – the best network sniffer around

Find your web application vulnerabilities with the FREE OWASP ZED Attack Proxy

Crack some Windows passwords to make sure people aren’t being silly on Halloween with Ophcrack

Actually fix some web issues for free with mod_security

Grab our DREAD calculator and figure out how bad it really is.. 🙂

Put those tricks and treats in your bag and smile. They won’t cause cavities and they aren’t even heavy enough to keep you from running from the neighborhood bully looking to steal your goodies! 

Thanks for reading and have a fun, safe and happy Halloween! 

Three Tough Questions with Aaron Bedra

This time I interviewed Aaron Bedra about his newest creation, Repsheet. Check it out here:


Aaron’s Bio:

Aaron is the Application Security Lead at Braintree Payments. He is the co-author of Programming Clojure, 2nd Edition as well as a frequent contributor to the Clojure language. He is also the creator of Repsheet, a reputation based intelligence and security tool for web applications.


Question #1:  You created a tool called Repsheet that takes a reputational approach to web application security. How does it work and why is it important to approach the problem differently than traditional web application firewalling?

I built Repsheet after finding lots of gaps in traditional web application security. Simply put, it is a web server module that records data about requests, and either blocks traffic or notifies downstream applications of what is going on. It also has a backend to process information over time and outside the request cycle, and a visualization component that lets you see the current state of the world. If you break down the different critical pieces that are involved in protecting a web application, you will find several parts:

* Solid and secure programming practices

* Identity and access management

* Visibility (what’s happening right now)

* Response (make the bad actors go away)

* HELP!!!! (DDoS and other upstream based ideas)

* A way to manage all of the information in a usable way

This is a pretty big list. There are certainly some things on this list that I haven’t mentioned as well (crypto management, etc), but this covers the high level. Coordinating all of this can be difficult. There are a lot of tools out there that help with pieces of this, but don’t really help solve the problem at large.

The other problem I have is that although I think having a WAF is important, I don’t necessarily believe in using it to block traffic. There are just too many false positives and things that can go wrong. I want to be certain about a situation before I act aggressively towards it. This being the case, I decided to start by simply making a system that records activity and listens to ModSecurity. It stores what has happened and provides an interface that lets the user manually act based on the information. You can think of it as a half baked SIEM.

That alone actually proved to be useful, but there are many more things I wanted to do with it. The issue was doing so in a manner that didn’t add overhead to the request. This is when I created the Repsheet backend. It takes in the recorded information and acts on it based on additional observation. This can be done in any form and it is completely pluggable. If you have other systems that detect bad behavior, you can plug them into Repsheet to help manage bad actors.  
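Repsheet itself is implemented as web server modules with a pluggable backend, so purely as an illustration of the idea Aaron describes here (record behavior per actor outside the request cycle, then act once a threshold is crossed), a toy sketch might look like the following. The threshold and scoring are invented for the example and are not part of Repsheet:

```python
from collections import defaultdict

class ReputationTracker:
    """Toy reputation tracker: accumulate offense scores per actor
    (e.g. per source IP) and blacklist once a threshold is crossed.
    """
    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.scores = defaultdict(int)   # actor -> accumulated score
        self.blacklist = set()

    def record_offense(self, actor: str, weight: int = 1) -> None:
        """Record an offense; heavier offenses can carry more weight."""
        self.scores[actor] += weight
        if self.scores[actor] >= self.threshold:
            self.blacklist.add(actor)

    def is_blacklisted(self, actor: str) -> bool:
        return actor in self.blacklist
```

Other detection systems could feed record_offense() calls, which mirrors the pluggable "listen, record, then decide" flow described above.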

The visualization component gives you the detailed and granular view of offenses in progress, and gives you the power to blacklist with the click of a button. There is also a global view that lets you see patterns of data based on GeoIP information. This has proven to be extremely useful in detecting localized botnet behavior.

So, with all of this, I am now able to manage the bottom part of my list. One of the pieces that was recently added was upstream integration with Cloudflare, where the backend will automatically blacklist via the Cloudflare API, so any actors that trigger blacklisting will be dealt with by upstream resources. This helps shed attack traffic in a meaningful way.

The piece that was left unanswered is the top part of my list. I don’t want to automate good programming practices. That is a culture thing. You can, of course, use automated tools to help make it better, but you need to buy in. The identity and access management piece was still interesting to me, though. Once I realized that I already had data on bad actors, I saw a way to start to integrate this data that I was using in a defensive manner all the way down to the application layer itself. It became obvious that with a little more effort, I could start to create situations where security controls were dynamic based on what I know or don’t know about an actor. This is where the idea of increased security and decreased friction really set in, and I saw Repsheet become more than just a tool for defending web applications.

All of Repsheet is open sourced with a friendly license. You can find it on Github at:

https://github.com/repsheet

There are multiple projects that represent the different layers that Repsheet offers. There is also a brochureware site at http://getrepsheet.com that will soon include tutorial information and additional implementation examples.

Question #2: What is the future of reputational interactions with users? How far do you see reputational interaction going in an enterprise environment?

For me, the future of reputation based tooling is not strictly bound to defending against attacks. I think once the tooling matures and we start to understand how to derive intent from behavior, we can start to create much more dynamic security for our applications. If we compare web security maturity to the state of web application techniques, we would be sitting right around the late 90s. I’m not strictly talking about our approach to preventing breaches (although we haven’t progressed much there either), I’m talking about the static nature of security and the impact it has on the users of our systems. For me the holy grail is an increase in security and a decrease in friction.

A very common example is the captcha. Why do we always show it? Shouldn’t we be able to conditionally show it based on what we know or don’t know about an actor? Going deeper, why do we force users to log in? Why can’t we provide a more seamless experience if we have enough information about devices, IP address history, behavior, etc? There has to be a way to have our security be as dynamic as our applications have become. I don’t think this is an easy problem to solve, but I do think that the companies that do this will be the ones that succeed in the future.
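To make the conditional-friction idea concrete, here is a toy policy function; the inputs and rules are invented for the example (a real policy would be tuned to the application and its reputation data), and this is not part of Repsheet:

```python
def required_friction(known_device: bool, ip_history_ok: bool,
                      recent_offenses: int) -> str:
    """Toy policy: pick how much friction to show a user based on
    what we know (or don't know) about the actor.
    """
    if recent_offenses > 0:
        return "captcha"   # suspicious actor: add friction
    if known_device and ip_history_ok:
        return "none"      # trusted actor: seamless experience
    return "login"         # unknown actor: standard login challenge
```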

Tools like Repsheet aim to provide this information so that we can help defend against attacks, but also build up the knowledge needed to move toward this kind of dynamic security. Repsheet is by no means there yet, but I am focusing a lot of attention on trying to derive intent through behavior and make these types of ideas easier to accomplish.

Question #3: What are the challenges of using something like Repsheet? Do you think it’s a fit for all web sites or only specific content?

I would like to say yes, but realistically I would say no. The first group that this doesn’t make sense for are sites without a lot of exposure or potential loss. If you have nothing to protect, then there is no reason to go through the trouble of setting up these kinds of systems. They basically become a part of your application infrastructure and it takes dedicated time to make them work properly. Along those lines, static sites with no users and no real security restrictions don’t necessarily see the full benefit. That being said, there is still a benefit: visibility into what is going on from a security standpoint can help spot events in progress or even pending attacks. I have seen lots of interesting things since I started deploying Repsheet, even botnets sizing up a site before launching an attack. Now that I have seen that, I have started to turn it into an early warning system of sorts to help prepare.

The target audience for Repsheet is companies that have already done the web security basics and want to take the next step forward. A full Repsheet deployment involves WAF and GeoIP based tools as well as changes to the application under the hood. All of this requires time and people to make it work properly, so it is a significant investment. That being said, the benefits of visibility, response to attacks, and dynamic security are a huge advantage. Like every good investment into infrastructure, it can set a company apart from others if done properly.

Thanks to Aaron for his work and for spending time with us! Check him out on Twitter, @abedra, for more great insights!

Go Phish :: How To Self Test with MSI SimplePhish

Depending on who you listen to, phishing (especially spear phishing) is either on the increase or the decrease. While the pundits continue to spin marketing hype, MSI will tell you that phishing and spear phishing are involved in 99% of all of the incidents that we work. Make no mistake: it is the attack of choice for getting malware into networks and environments.

That said, a year or more ago, MSI introduced a free tool called MSI SimplePhish, which acts as a simplified “catch” for phishing campaigns. The application, which is available for Windows and can run on workstations or even old machines, makes it quite easy to stand up a site for your own free phishing tests to help users stay aware of this threat.

To conduct such a campaign, follow these steps:

Precursor: Obtain permission from your security management to perform these activities and to do phishing testing. Make sure your management team supports this testing BEFORE you engage in it.

1.  Obtain the MSI SimplePhish application by clicking here.

2. Unzip the file on the Windows system and review the README.TXT file for additional information.

3. Execute the application and note the IP address of the machine you are using. The application will open a listening web server on port 8080/TCP. Remember to allow that port through any host-based firewalls or the like.

4. The application should now be ready to catch phishing attempts and log activity for the following URL: http://&lt;ip address of the windows system&gt;:8080/. When that URL is accessed, a generic login screen should be displayed.

5. Create an email message (or SMS, voice mail, etc.) that you intend to deliver to your victims. This message should attempt to get them to visit the site and enter their login information. An example:

Dear Bob,

This message is to inform you that an update to your W-2 tax form is required by human resources. Given the approaching tax deadline, entering this information will help us to determine if an error was made on your 2012 W-2. To access the application and complete the update process, please visit the online application by clicking here. (You would then link the clicking here text to your target URL obtained in step 4.)

6. Deliver the messages to your intended targets.

7. Watch and review the log file MSISimplePhishLog.txt (located in the same directory as the binary). Users who actually input a login and password will get written to the log as “caught”, including their IP address, the login name and **the first 3 characters** of the password they used.  Users who visit the page, but do not login, will be recorded as a “bite”, including their IP address.

** Note that only the first 3 characters of the password are logged. This is enough to prove useful in discussions with users and to prove their use, but not enough to be useful in further attacks. The purpose of this tool is to test, assess and educate users, not to commit fraud or gather real phishing data. For this reason, and for the risks it would present to the organization, full password capture is not available in the tool and is not logged. **
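The truncation behavior described in the note can be sketched as follows. The exact field layout of MSISimplePhishLog.txt is an assumption here; only the three-character limit comes from the text above:

```python
def caught_log_entry(ip: str, login: str, password: str) -> str:
    """Build a 'caught' log line that keeps only the first 3
    characters of the password, enough to prove use in awareness
    discussions without capturing a reusable credential.
    (Field layout is hypothetical, not the real tool's format.)
    """
    return "caught %s login=%s password=%s***" % (ip, login, password[:3])
```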

8. Let the exercise run for several days, in order to catch stragglers. Once complete, analyze the logs and report the information to the security stakeholders in your organization. Don’t forget to approach the users who were successfully phished and give them some tips and information about how they should have detected this type of attack and what they should do to better manage such threats in the future.

That’s it – lather, rinse and repeat as you like!

If you would like to do more advanced phishing testing and social engineering exercises, please get in touch with an MSI account executive who can help put together a proposal and a work plan for performing deep penetration testing and/or ongoing persistent penetration testing using this and other common attack methods. As always, thanks for reading and until next time, stay safe out there!

Threat Update: Wide Scale Phishing in Progress


Just a quick update about the ongoing threat from malware dropped by phishing attacks. A large number of phishing attacks are currently in progress. Phishing has been a leading form of compromise for quite some time, and indicators point to an increasing number of phishing attacks and larger amounts of damage from successful exploitation.

Many organizations are reporting widespread phishing using recycled, older malware, including Zeus, Tepfer and other common remote access tools. In some cases, these malware samples are repackaged or otherwise modified to evade anti-virus detection. Attackers are showing medium to high levels of success with these attacks.

Once compromised, the normal bot installation and exfiltration of data occurs. For most organizations that don’t play a role in critical infrastructure, this likely means credentials, customer information and other commercially valuable data will be targeted. For critical infrastructure organizations, more specific design, future state and architectural data is being targeted, along with credentials, etc.

Organizations should be carefully and vigilantly reviewing their egress traffic. They should also be paying careful attention to user desktop space and the ingress/egress from the user workstation DMZ or enclaves (You DO have your user systems segregated from your core operations, correct???). Remember, you CAN NOT depend on AV or email filtering to rebuff these attacks at a meaningful level. Detection and response are key, in order to limit the length of time the attacker has access to your environment. Anything short of full eradication of their malware and tools is likely to end with them still maintaining some level of access and potentially, control.
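As a simple illustration of the egress review described above, the sketch below flags workstation connections to ports outside an approved set. The allowlist and sample records are invented for the example; real monitoring would work from firewall, flow or proxy logs, ideally in a SIEM:

```python
# Invented example: ports we expect workstations to talk to outbound.
ALLOWED_PORTS = {80, 443, 53}

def flag_egress(connections):
    """Return connection records whose destination port is not in
    the approved outbound set -- candidates for closer review.
    """
    return [c for c in connections if c["dst_port"] not in ALLOWED_PORTS]

# Invented sample connection records for illustration.
sample_connections = [
    {"src": "10.1.1.20", "dst": "203.0.113.50", "dst_port": 443},
    {"src": "10.1.1.31", "dst": "198.51.100.9", "dst_port": 6667},  # IRC-like
]
suspicious = flag_egress(sample_connections)
```

Even a crude check like this surfaces the odd bot command-and-control channel; the point is to be looking at egress at all, not to rely on AV or mail filtering.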

Now is a good time to consider having a phishing penetration test performed, or to consider using MSISimplePhish to perform some phishing for yourself. Awareness alerts and training are also encouraged. This is going to be a long term threat, so we must begin to implement ongoing controls over the entire technology/policy & process/awareness stack.

If you have any questions on phishing attacks, malware or incident response, please let us know. Our teams are used to working with these attacks and their subsequent compromises. We also have wide experience with designing enclaved architectures and implementing nuanced detection mechanisms that focus on your critical assets. Feel free to touch base with us for a free 30 minute call to discuss your options for improving your security posture.

Quick & Dirty Plan for Critical Infrastructure Security Improvement


I was recently engaged with some critical infrastructure experts on Twitter. We were discussing a quick and dirty set of basic tasks that could be used as an approach methodology for helping better secure the power grid and other utilities.

There was a significant discussion and many views were exchanged. A lot of good points were made over the course of the next day or so.

Later, I was asked by a couple of folks in the power industry to share my top 10 list in a more concise and easy to use manner. So, per their request, here it is:

@LBHuston’s Top 10 Project List to Help Increase Critical Infrastructure “Cyber” Security

1. Identify the assets that critical infrastructure organizations have in play and map them for architecture, data flow and attack surfaces

2. Undertake an initiative to eliminate “low hanging fruit” vulnerabilities in these assets (fix out of date software/firmware, default configurations, default credentials, turn on crypto if available, etc.)

3. Identify attack surfaces that require more than basic hardening to minimize or mitigate vulnerabilities

4. Undertake a deeper hardening initiative against these surfaces where feasible

5. Catalog the surfaces that can’t be hardened effectively and perform fail state analysis and threat modeling for those surfaces

6. Implement detective controls to identify fail state conditions and threat actor campaigns against those surfaces

7. Train an incident investigation and response team to act when anomalous behaviors are detected

8. Socialize the changes in your organization and into the industry (including regulators)

9. Implement an ongoing lessons learned feedback loop that includes peer and regulator knowledge sharing

10. Improve entire process organically through iteration

The outcome would be a significant organic improvement of the safety, security and trust of our critical infrastructures. I know some of the steps are hard. I know some of them are expensive. I know we need to work on them, and we better do it SOON. You know all of that too. The question is – when will WE (as in society) demand that it be done? That’s the 7 billion people question, isn’t it?

Got additional items? Wanna discuss some of the projects? Drop me a line in the comments, give me a call at (614) 351-1237 or tweet with me (@lbhuston). Thanks for reading and until next time, stay safe out there!

PS – Special thanks to @chrisjager for supporting me in the discussion and for helping me get to a coherent top 10 list. Follow him on Twitter, because he rocks!

Terminal Services Attack Reductions Redux

Last week, we published a post about the high frequency of probes, scans and attacks against exposed Windows Terminal Services from the Internet. Many folks commented on Twitter to me about some of the things that can be done to minimize the risk of these exposures. As we indicated in the previous post, the best suggestions are to eliminate them altogether by placing Terminal Services exposures behind VPN connections or through the implementation of tokens/multi-factor authentication. 

Another idea is to implement specific firewall rules that block access to all but a specific set of IP addresses (such as the home IP address range of your admins or that of a specific jump host, etc.) This can go a long way to minimizing the frequency of interaction with the attack surfaces by random attacker tools, probes and scans. It also raises the bar slightly for more focused attackers by forcing them to target specific systems (where you can deploy increased monitoring).
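Expressed in Python rather than firewall syntax, the allowlist logic above might look like the following sketch; the address ranges are invented examples, and in practice this lives in your firewall ruleset, not application code:

```python
import ipaddress

# Invented example ranges: an admin home range and a single jump host.
ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.0/28"),    # admin home IP range (example)
    ipaddress.ip_network("198.51.100.10/32"),  # jump host (example)
]

def rdp_allowed(src_ip: str) -> bool:
    """Return True if src_ip falls inside a range permitted to
    reach 3389/TCP; everything else would be dropped at the firewall.
    """
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SOURCES)
```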

In addition, a new tool for auditing the configuration of Terminal Services implementations came to our attention. This tool, called “rdp-sec-check”, was written by Portcullis Security and is available to the public. Our testing of the tool showed it to be quite useful in determining the configuration of exposed Terminal Services and in creating a path for hardening them wherever deployed. (Keep in mind, it is likely useful to harden the Terminal Services implementations internally to critical systems as well…)

Note that we particularly loved that the tool could be used REMOTELY. This makes it useful to audit multiple customer implementations, as well as to check RDP exposures during penetration testing engagements. 

Thanks to Portcullis for making this tool available. Hopefully between this tool to harden your deployments and our advice to minimize the exposures, we can all drive down some of the compromises and breaches that result from poor RDP implementations.

If you would like to create some threat metrics for what port 3389 Terminal Services exposures might look like for your organization, get in touch and we can discuss either metrics from the HITME or how to use HoneyPoint to gather such metrics for yourself.

PS – Special thanks to @SecRunner for pointing out that many cloud hosting providers make Terminal Server available with default configurations when provisioning cloud systems in an ad-hoc manner. This is likely a HUGE cause for concern and may be what is keeping scans and probes for 3389/TCP so active, particularly amongst cloud-hosted HITME end points.

PPS – We also thought you might enjoy seeing a sample of the videos that show entry level attackers exactly how to crack weak passwords via Terminal Services using tools easily available on the Internet. These kinds of videos are common for low hanging fruit attack vectors. This video was randomly pulled from the Twitter stream with a search. We did not make it and are not responsible for its content. It may not be safe for work (NSFW), depending on your organization’s policies.