MSI Strategy & Tactics Talk Ep. 27: The 2012 Verizon Data Breach Investigations Report

The 2012 Verizon Data Breach Investigations Report is out!  In this episode of MSI Strategy & Tactics, Adam, Phil, and John discuss the newest report’s findings and some of its more interesting discoveries.  Discussion questions include:

1. What was the most surprising finding?
2. What is different from the past, any trends?

Listen in and let us know what you think!


The Verizon Data Breach Investigations Report


Adam Hostetler, Network Engineer, Security Analyst
Phil Grimes, Security Analyst
John Davis, Risk Management Engineer
Mary Rose Maguire, Marketing Communication Specialist and moderator

Click the embedded player to listen. Or click this link to access downloads. Stay safe!

Mobile Apps Shouldn’t Roll Their Own Security

An interesting problem is occurring in the mobile development space. Many of the applications being built are the work of scrappy, product-oriented developers. This is not a bad thing for innovation (in fact, just the opposite), but it can be a bad thing for safety, privacy and security.

Right now, we are hearing from several cross-platform mobile developers that the API sets across iOS, Android and others are so complex that they are often skipping some of the APIs and rolling their own code for doing some of this work. For example, take encrypting a set of data on the device. In many cases, rather than using standard peer-reviewed routines and leveraging the strength of the OS and its controls, they are saying the job is too complex to manage across platforms, so they’ll embed their own routines for doing what they feel is basic in-app crypto.

Problems (like those with the password vault applications) are likely to emerge from this approach to mobile apps. There is a reason crypto controls require peer review. They are difficult and often complex mechanisms where mistakes in the logic or data flows can have huge impacts on the security of the data. We learned these lessons long ago. Home-rolled crypto and other common security routines were a big problem in the desktop days and remain so for many web applications. Sadly, it looks like we might be learning those lessons again at the mobile application development layer.
The bottom line is this: if you are coding a mobile application, or buying one to access critical data for your organization, make sure the developers use the platform APIs for privacy, trust and security functions. Stay away from mobile apps where “roll your own/proprietary security code” is in use. The likelihood of getting it right is a LOT lower than using the APIs, methods and code that the mobile OS vendors have made accessible. It’s likely that the OS vendors are using peer-reviewed, strongly tested code. Sadly, we can’t say that for all of the mobile app developer code we have seen.
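As a small, hedged illustration of the point, here is a Python sketch of the “use the vetted routines” approach: password-based key derivation via the standard library’s peer-reviewed PBKDF2 implementation and a constant-time comparison, instead of any home-rolled hashing scheme. The function names and parameters are our own for illustration only; the same principle applies to the platform crypto APIs on iOS and Android.

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    """Derive a key from a password using the standard library's
    peer-reviewed PBKDF2 routine -- not a home-rolled scheme."""
    if salt is None:
        salt = secrets.token_bytes(16)  # random per-password salt
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, key

def verify_password(password, salt, expected_key):
    """Re-derive with the stored salt and compare in constant time."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected_key)
```

The point is not this specific routine; it is that every primitive used here has been peer reviewed and battle tested, which is exactly what home-rolled in-app crypto lacks.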
As always, thanks for reading and stay safe out there!

Disagreement on Password Vault Software Findings

Recently, some researchers have been working on comparing password vault software products and have justifiably found some issues. However, many of the vendors are quickly moving to remediate the identified issues, many of which were simply improper use of proprietary cryptography schemes.

I agree that proprietary crypto is a bad thing, but I find fault with articles such as this one, where the researchers suggest that using the built-in iOS functions is safer than using a password vault tool.

Regardless of OS, platform or device, I fail to see how depending on the simple OS-embedded tools alone reduces risk to the user, versus those same tools plus the additional layers of whatever mechanisms a password vault adds. It would seem that the additional layers of control (regardless of their specific vulnerability to nuanced attacks against each control surface) would still add overall security for the user, and complexity for the attacker to manage in a compromise.
I would love to see a model of this scenario in which the additional controls reduce the overall security of the data. I could be wrong (it happens), but in the models I have run, they all point to the idea that even a flawed password vault wrapped in the OS controls is stronger and safer than the bare OS controls alone.
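To make the layered argument concrete, here is a deliberately simplistic Python sketch of the kind of model described above. It assumes the layers fail independently, and the failure probabilities are invented purely for illustration; real risk modeling would be far more nuanced.

```python
def p_compromise(layer_failure_probs):
    """Probability an attacker defeats every control layer, under the
    simplifying assumption that layer failures are independent."""
    p = 1.0
    for prob in layer_failure_probs:
        p *= prob
    return p

# Invented numbers for illustration only:
bare_os = p_compromise([0.10])           # OS controls alone
with_vault = p_compromise([0.10, 0.40])  # OS controls plus a *flawed* vault
```

Even a vault that an attacker defeats 40% of the time drops the modeled compromise probability from 10% to 4%. A flawed extra layer still helps, which is the post’s point.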
In the meantime, while the vendors work on patching their password vaults and embracing common crypto mechanisms, I’ll continue to use my password vault as is, wrapped in the additional layers of OS controls and added detection mechanisms my systems enjoy. I would suggest you and your organization’s users continue to do the same.

Information Security Is More Than Prevention

One of the biggest signs that an organization’s information security program is immature is when they have an obsessive focus on prevention and they equate it specifically with security.

The big signs of this issue are knee-jerk reactions to vulnerabilities, a never-ending string of emergency patching situations and a continual fire-fighting mode of response to “incidents”. The security team (or, usually, the IT team) is overworked, under-communicates, is highly stressed, and lacks both the resources and the tools to adequately mature the process. Sadly, the security folks often actually LIKE this environment, since it feeds their inner superhero complex.

However, time and time again, organizations that balance prevention efforts with rational detection and practiced, effective response programs perform better against today’s threats. Evidence from vendor reports like the Verizon DBIR and the Ponemon studies, law enforcement data, DHS studies, etc. has all supported that balanced programs work much better. The current state of the threat easily demonstrates that you can’t prevent everything. Accidents and incidents do happen.
When bad things do come knocking, no matter how much you have patched and scanned, it’s the preparation you have done that matters. It’s whether or not you have additional controls like enclaving in place. Do you have visibility at various layers for detection in depth? Does your team know how to investigate, isolate and mitigate the threats? Will they do so in a timely manner that reduces the impact of the attacker or will they panic, knee-jerk their way through the process, often stumbling and leaving behind footholds of the attacker?
How you perform in the future is largely up to you and your team. Raise your vision, embrace a balanced approach to security and step back from fighting fires. It’s a much nicer view from here. 

Secure Networks: Remember the DMZ in 2012

Just a quick post to remind readers that everyone (and I mean everyone) who reads this blog should be using a DMZ, enclaved, network-segmentation approach for any and all Internet-exposed systems today. This has been true for several years, if not a decade. Just this week, I have talked to two companies who have been hit by malicious activity that compromised a web application and gave the attacker complete control over a box sitting INSIDE their primary business network, with essentially unfettered access to the environment.

Folks, within IT network design, DMZ architectures are not just a best practice and a regulatory requirement, but an essential survival tool for IT systems. Punching a hole from the Internet to your primary IT environment is not smart, safe, or in many cases, legal.
Today, enclaving the internal network is becoming best practice for securing networks. Enclaving/DMZ segmentation of Internet-exposed systems is simply assumed. So, take an hour, review your perimeter, and if you find Internet-exposed systems sitting inside the business network, make a plan and execute it. In the meantime, I’d investigate those systems as if they were compromised, regardless of what you have seen from them. At least check them over with a cursory review and get them out of the business network ASAP.
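One hedged, minimal way to start that perimeter review is sketched below in Python: given a list of hosts reachable from the Internet (say, from an external scan), flag any that live inside the primary business network’s address space. The CIDR ranges here are placeholders; substitute your own.

```python
import ipaddress

# Placeholder business-network ranges -- replace with your real CIDRs.
BUSINESS_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def flag_internal_exposure(exposed_hosts):
    """Return the Internet-reachable hosts that sit inside the primary
    business network and therefore belong in a DMZ/enclave instead."""
    flagged = []
    for host in exposed_hosts:
        addr = ipaddress.ip_address(host)
        if any(addr in net for net in BUSINESS_NETS):
            flagged.append(host)
    return flagged
```

Anything this flags deserves the investigate-as-if-compromised treatment described above.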
This should go without saying, but this especially applies to folks that have SCADA systems and critical infrastructure architectures.
If you have any questions regarding how you can maintain secure networks with enclaving and network segmentation, let us know. We’d love to help!

10 Ways to Handle Insider Threats

As the economic crisis continues, the possibility of an insider threat occurring within a company increases. Close to 50% of all companies have been hit by insider attacks, according to a recent study by Carnegie Mellon’s CERT Insider Threat Center. (Click here to access the page that has the PDF download, “Insider Threat Study.”)

It doesn’t help when companies are restructuring and handing out pink slips. Leaner departments often mean there are fewer employees to notice when someone is doing something wrong. Tough economic times may also make it tempting for an employee to switch his ‘white hat’ for a black one for financial gain. Insider threats include employees, contractors, auditors, and anyone who has authorized access to an organization’s computers. How can you minimize the risk? Here are a few tips:

1. Monitor and enforce security policies. Update the controls and oversee implementation.

2. Initiate employee awareness programs. Educate the staff about security awareness and the possibility of them being coerced into malicious activities.

3. Start paying attention to new hires. Keep an eye out for repeated violations that may be laying the groundwork for more serious criminal activity.

4. Work with human resources to monitor negative employee issues. Most insider IT sabotage attacks occur following a termination.

5. Carefully distribute resources. Only give employees what they need to do their jobs.

6. If your organization develops software, monitor the process. Pay attention to the service providers and vendors.

7. Approach privileged users with extra care. Use the two-man rule for critical projects. Those who know technology are more likely to use technological means for revenge if they perceive they’ve been wronged.

8. Monitor employees’ online activity, especially around the time an employee is terminated. There is a good chance the employee isn’t satisfied and may be tempted to engage in an attack.

9. Go deep with your defense plan to counter remote attacks. If employees know they are being monitored, there is a good possibility an unhappy worker will turn to remote access to gain entry.

10. Deactivate computer access once the employee is terminated. This will immediately end any malicious activity such as copying files or sabotaging the network.
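Tip 10 is easy to state and easy to miss in practice. As a hedged sketch (the data sources and names are hypothetical), a simple reconciliation between the HR termination list and the directory’s active accounts can catch access that should already have been cut off:

```python
def accounts_to_disable(active_accounts, terminated_employees):
    """Return accounts that should have been deactivated: any account
    still active whose owner appears on the termination list.
    In practice the inputs would come from HR and your directory service."""
    terminated = {name.lower() for name in terminated_employees}
    return sorted(acct for acct in active_accounts if acct.lower() in terminated)
```

Running a check like this daily (or as part of the offboarding workflow itself) closes the window between termination and deactivation.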

Be vigilant with your security backup plan. There is no approach that will guarantee a complete defense against insider attacks, but if you continue to practice secure backup, you can decrease the damage. Stay safe!

MSI Strategy & Tactics Talk Ep. 26: Hacking Back or Strikeback Technologies

Hacking back, or strikeback technology, is a systems engineering term describing a situation with a positive feedback loop, whereby each component responds with an increased reaction to the response of the other component, and so the problem gets worse and worse. (The Information Security Dictionary: Defining the Terms That Define Security, by Urs E. Gattiker) Recently, a honeypot was created with some strikeback technology in the code. In this episode of MSI Strategy & Tactics, Brent Huston and the techs discuss the various aspects of this technology and how it would affect you. Discussion questions include:

  1. What is the history of strikeback/hacking back, and how does it apply today, when you have major teams working to take down botnets and such?
  2. HoneyPoint has a type of technology called “defensive fuzzing” which does something that has been compared to strikeback. How is it different from other technologies?
  3. What is the current take on the legality of strikeback/hacking back? Are organizations being put at risk if they attack their attackers or if their security teams go on offense?
Brent Huston, CEO and Security Evangelist
Adam Hostetler, Network Engineer, Security Analyst
Phil Grimes, Security Analyst
John Davis, Risk Management Engineer
Mary Rose Maguire, Marketing Communication Specialist and moderator

Click the embedded player to listen. Or click this link to access downloads. Stay safe!

Threat and Vulnerability: Pay Attention to MS12-020

Microsoft today released details and a patch for the MS12-020 vulnerability. This is a remotely exploitable vulnerability in most current Windows platforms that are running Terminal Server/RDP. Many organizations use this service remotely across the Internet, via a VPN, or locally for internal tasks. It is a common, prevalent technology, and thus the target pool for attacks is likely to make this a significant issue in the near future. 

Please identify your exposures to this vulnerability. Exploits are likely currently being developed. We have not yet (3/13/12 – 2.15pm Eastern) seen exploitation or an increase in probes for port 3389, but both are expected to occur shortly.
Please let us know if you have any questions or if we may be of any assistance with this issue.
This article makes reference to a potential worm attack vector, which we see as increasingly likely. Our team believes the exploitation development time to be significantly less than 30 days and more like 1-3 days for resourced attackers. As such, PLEASE TREAT THIS AS A SIGNIFICANT INTERNAL VULNERABILITY as well. Certainly, IMMEDIATE consideration is needed for Internet exposed systems, but INTERNAL systems should be patched as soon as manageable as well.
This confirms the scope and criticality of this issue.
Just a quick note – we are seeing intense work on the MS12-020 exploit. Some evidence points to 2 working versions. Not public yet, but PATCH NOW. Internal & protected networks too.
MSI is proud to announce the immediate availability of a FREE version of HoneyPoint, called HPRDP2012, to help organizations monitor for ongoing scans and potential future worm activity. The application listens on port 3389/TCP and is available for OS X (Intel), Windows & Linux. This application is similar to our releases for Conficker & Morto in that it will be operational for a set time (specifically, until October 1, 2012). Simply unzip the application to where you would like to run it, and execute it. We hope this helps organizations manage this vulnerability and detect impacts should scans, probes or a worm emerge. Traditional HoneyPoint customers can use Agent and/or Wasp to listen for these connections and report them centrally by dilating TCP listener HoneyPoints on port 3389. Please let us know if you have any questions.
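For readers who just want to see the shape of such a tool, here is a hedged, minimal Python sketch of a honeypot-style listener on a TCP port: it accepts connections, records the source address, and drops the connection. This is an illustration only, not HoneyPoint itself; a real deployment needs logging, alerting and hardening, and binding port 3389 may require elevated privileges.

```python
import socket
import threading

def listen_and_log(port=3389, log=None, max_conns=None):
    """Accept TCP connections on the given port, record each source IP,
    and close immediately. A sketch of honeypot-style detection only."""
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    seen = 0
    while max_conns is None or seen < max_conns:
        conn, addr = srv.accept()
        log.append(addr[0])  # source IP of the probe
        conn.close()
        seen += 1
    srv.close()
    return log
```

Any connection this logs on an otherwise unused port is, by definition, suspicious traffic worth investigating.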

4 Tips for Teaching Your Staff About Social Engineering

If there is one thing that is tough to prevent, it is a person whose curiosity overrides their better judgement. Human nature leans toward discovery. If someone believes a valuable piece of information is available, there’s a very good chance she will satisfy her curiosity.

Social engineering, the process of obtaining confidential information by tricking people into doing things they should not do, is on the rise. So how can you help your staff recognize social engineering before it’s too late?

Here are a few tips:

1. Create a process for validating outside inquiries.

Often, an attacker has done their homework and obtained certain pieces of information, such as another employee’s name or calendar, to establish credibility. Create a process for inquiries, making someone the gatekeeper for such calls. Tell staff not to give out confidential information before checking with the gatekeeper.

2. Secure access into the organization.

Does your organization have guards? If not, it is the job of every employee to be alert to outsiders.

Name badges are another way to do this; require everyone to keep theirs visible. Explain to staff that it is perfectly legitimate to say, “I’m sorry, who did you say you were with again?” Teach awareness through fun exercises and safety posters.

3. Train staff to resist picking up strange USB keys.

This is difficult because it is where a person’s curiosity can get the best of them. However, a person has no idea what is on a found USB key. Would they eat food left on the floor of the kitchen? (Some, unfortunately, might!) Why would anyone take a found USB key and plug it into their computer? Curiosity. Create an incentive program for employees to return found keys to an IT administrator.

4. Fine tune a sense of good customer service.

Most people are helpful. This helpful nature is especially nurtured by organizations who want to provide good customer service to both internal staff and external contacts. Attackers take advantage of this by insisting that it would “be very helpful” if they could get someone’s confidential information in order to do their job. Train your staff to stick to the plan of verifying all inquiries by going through the proper channels. Help employees understand that this approach is truly the most “helpful” since they’ll be saving the company countless dollars if it’s an attack.

Consistent awareness is the key to resisting social engineering attacks. Use these tips and decrease your probability of an attack. Stay safe!

Malicious Exploits: Hitting the Internet Waves with CSRF, Part Two


If you’re the “average Web user,” you can do relatively little with unmodified versions of the most popular browsers to prevent cross-site request forgery.

Logging out of sites and avoiding their “remember me” features can help mitigate CSRF risk. In addition, not displaying external images and not clicking links in spam or untrusted e-mails may also help. Browser extensions such as RequestPolicy (for Mozilla Firefox) can prevent CSRF by providing a default-deny policy for cross-site requests. However, this can significantly interfere with the normal operation of many websites.

The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests.

Web developers, however, have a better fighting chance to protect their users by implementing countermeasures such as:

  • Requiring a secret, user-specific token in all form submissions and side-effect URLs; this prevents CSRF because the attacker’s site cannot put the right token in its submissions
  • Requiring the client to provide authentication data in the same HTTP Request used to perform any operation with security implications (money transfer, etc.)
  • Limiting the lifetime of session cookies
  • Checking the HTTP Referer header
  • Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls
  • Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies
  • Verifying that the request’s headers contain an X-Requested-With header, as used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure under a combination of browser plugins and redirects, which can allow an attacker to provide custom HTTP headers on a request to any website, hence allowing a forged request.
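The first countermeasure in the list, the user-specific token, can be sketched in a few lines of Python using only standard-library primitives. The key handling and function names here are illustrative, not drawn from any particular framework:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-application secret (placeholder)

def issue_csrf_token(session_id):
    """Derive a user-specific token from the session ID; the server embeds
    this in every form it serves."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id, submitted):
    """Reject the request unless the submitted token matches this session's,
    using a constant-time comparison."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted)
```

Because the attacker’s page cannot read the victim’s session or the server’s secret, it cannot forge a request containing a valid token.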

One simple method to mitigate this vector is to use a CSRF filter such as OWASP’s CSRFGuard. The filter intercepts responses, detects whether each is an HTML document, and inserts a token into the forms (and, optionally, script to insert tokens into Ajax functions). The filter also intercepts requests to check that the token is present. One evolution of this approach is the double-submit cookie, for users who have JavaScript enabled. If an authentication cookie is read using JavaScript before the POST is made, JavaScript’s stricter (and more correct) cross-domain rules will be applied. If the server requires requests to contain the value of the authentication cookie in the body of POST requests or in the URL of dangerous GET requests, then the request must have come from a trusted domain, since other domains are unable to read cookies from the trusting domain.
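Server-side, the double-submit check described above reduces to comparing the cookie copy of the token with the copy echoed in the request body. A hedged Python sketch (the parameter names are ours):

```python
import hmac

def double_submit_ok(cookie_token, body_token):
    """Double-submit cookie check: the browser's same-origin policy keeps
    other domains from reading this site's cookie, so only pages served by
    this site can echo its value back in the request body."""
    return bool(cookie_token) and hmac.compare_digest(cookie_token, body_token)
```

An empty or missing cookie token is rejected outright, since an attacker could otherwise submit two empty values.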

Checking the HTTP Referer header to see if the request is coming from an “authorized” page is a common tactic employed by embedded network devices because of its low memory requirements. However, a request that omits the Referer header must be treated as unauthorized, because an attacker can suppress the Referer header by issuing requests from FTP or HTTPS URLs. This strict Referer validation may cause issues with browsers or proxies that omit the Referer header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash to generate GET or POST requests with arbitrary HTTP request headers using CRLF injection. Similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. To prevent forgery of login requests, sites can use these CSRF countermeasures in the login process, even before the user is logged in. As another consideration, sites with especially strict security needs, like banks, often log users off after (for example) 15 minutes of inactivity.
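Strict Referer validation, with the reject-when-absent behavior the paragraph above recommends, can be sketched like this (a hedged illustration; the header dictionary and trusted host are placeholders):

```python
from urllib.parse import urlparse

def referer_allowed(headers, trusted_host):
    """Strict Referer validation: a missing or cross-site Referer header
    is treated as unauthorized, since attackers can suppress the header."""
    referer = headers.get("Referer")
    if not referer:
        return False  # absent header: reject, per the strict policy
    return urlparse(referer).hostname == trusted_host
```

As noted above, this strictness trades some compatibility (privacy-conscious browsers and proxies that strip Referer) for safety.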

Following the HTTP-specified usage for GET and POST, in which GET requests never have a permanent effect, is good practice but is not sufficient to prevent CSRF. Attackers can write JavaScript or ActionScript that invisibly submits a POST form to the target domain. However, filtering out unexpected GETs prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through <script> elements (JavaScript hijacking); it also prevents (non-security-related) problems with some web crawlers, as well as link prefetching.

I hope this helps when dealing with this malicious exploit. Let me know how it works out for you. Meanwhile, stay safe out there!