Detecting Info Leaks with ClawBack

ClawBack Is Purpose-Built to Detect Info Leaks

ClawBack is MicroSolved’s cloud-based SaaS solution for performing info leak detection. We built the tool because we worked so many incidents and breaches related to three common types of info leaks:

  • Leaked Credentials – this is so common that it lies at the root of thousands of incidents over the last several years. Attackers harvest stolen and leaked logins and passwords and use them anywhere they think they can gain access. It is so common that OWASP categorizes it as a specific form of attack: credential stuffing 
  • Leaked Configurations – attackers love to comb through leaked device and application configuration files for credentials, of course, but also for details about the network or app environment, sensitive data locations, cryptographic secrets and network management information they can use to gain control or access
  • Leaked Code – leaked source code is a huge boon for attackers; often leaking sensitive intellectual property that they can sell on the dark web to your competitors or parse for vulnerabilities in your environment or products

MicroSolved knows how damaging these info leaks can be to organizations, no matter the type. That’s exactly why we built ClawBack to provide ongoing monitoring for the info leak terms that matter most to you.

How to Get Started Detecting Info Leaks

Putting ClawBack to work for you is incredibly easy. Most customers are up and monitoring for info leaks within 5 minutes.

There is no hardware, software, appliance or agent to deploy. The browser-based interface is simple to use, yet flexible enough to meet the challenges of the modern web. 

First, get a feel for some terms that you would like to monitor that are unique to your organization. Good examples might be unique user names, application names, server names, internal code libraries, IP address ranges, SNMP community strings, the first few hex characters of certificates or encryption keys, etc. Anything that is unique to your organization or at the very least, uncommon. 

Next, register for a ClawBack account by clicking here.

Once your account is created and you follow the steps to validate it, you can log in to the ClawBack application. Here, you will be able to choose from the three different subscription levels available. You will also be able to enter your payment information and set up additional team members to use the application, if your subscription level allows. 

Next, click on Monitoring Terms and input the terms that you identified in the first step. ClawBack will immediately search for any info leaks related to your terms as you enter them. ClawBack will also continually monitor for the terms going forward and alert you to any info leaks that appear in the common locations around the web. 

How to View Any Info Leaks

Reviewing any info leaks found is easy, as well. Simply click on Alerts in the top menu. Here, your alerts will be displayed in a sortable list. The list contains a summary of each identified leak, the term it matched and the location of the leak. You can click on an alert to view the identified page. Once reviewed, you can archive the alert, where it will remain in the system and visible in your archive, or you can mark it as a false positive. False positives are removed from your dataset, but ClawBack will remember the leak and won’t alert you again for that specific URL. 

If you have access to the export function at your subscription level, you can also export alerts to a CSV file for uploading into SIEM/SOAR tools or ticketing systems. It’s that easy! 

You can find a more specific walkthrough for finding code leaks here, along with some screenshots of the product in action.

You can learn more about ClawBack and view some use case videos and demo videos at the ClawBack homepage.

Give ClawBack a try today and you can put your worries to rest that unknown info leaks might be out there doing damage to your organization. It’s so easy, so affordable and so powerful that it makes worries about info leaks obsolete.

Prepping for Incident Response

Prepping? Who wants to prep for incident response?

This particular bit of writing came from a question that I was asked during a speaking engagement recently – paraphrased a bit.

How can a client help the incident team when they’re investigating an incident, or even suspicious activity? 

So, I circulated this to the team, and we tossed around some ideas.


BEC #6 – Recovery

A few weeks ago, we published the Business Email Compromise (BEC) Checklist. The question arose – what if you’re new to security, or your security program isn’t very mature?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.

Part 1 and Part 2 covered the first checkpoint in the list – Discover. Part 3 covered the next checkpoint – Protect. Part 4 continued the series – Detect. Part 5 addressed how to Respond.


How do you “identify”…BEC #2

A few weeks ago, we published the Business Email Compromise (BEC) Checklist. The question arose – what if you’re new to security, or your security program isn’t very mature?

Since the checklist is based on the NIST model, there’s a lot of information here to help your security program mature, as well as to help you mature as a security practitioner. MSI’s engineers have discussed a few ways to leverage the checklist as a growth mechanism.


Enter the game master….disaster recovery tabletops!

I snagged this line from the most excellent Lesley Carhart the other day, and it’s been resonating ever since.

“You put your important stuff in a fire safe, have fire drills, maintain fire insurance, and install smoke detectors even though your building doesn’t burn down every year.”

When’s the last time you got out your business continuity/disaster recovery plan, dusted it off, and actually READ it? You have one, so you can check that compliance box…but is it a living document?

It should be.

All of the box checking in the world isn’t going to help you if Step #2 of the plan says to notify Fred in Operations…and Fred retired in 2011. Step #3 is to contact Jason in Physical Security to discuss placement of security resources…and Jason has changed his cell phone number three times since your document was written.

I’ve also seen a disaster recovery plan, fairly recently, that discussed the retrieval and handling of some backup….floppy disks. That’s current and up-to-date?

Now, I am an active tabletop gamer. Once a week I get together with like-minded people to roll the dice and play various board games.

For checking the validity of your disaster recovery plan, there is an excellent analog from the tabletop gaming world:

Tabletop DR exercises!

Get BACK here….I see you in the third row, trying to sneak out. I’ll admit, I LOVE doing tabletops. Hello? I get to play game master, throw in all kinds of random real life events, and help people in the process – that’s the trifecta of awesome, right there. If it’s a really good day, I get to use dice, as well!

The bare minimum requirements for an effective tabletop:

  • A copy of  your most recent DR/BC plan
  • Your staff – preferably cooperative. Buy ’em a pizza or three, will you? The good kind. Not the cheap ones.
  • An observer. This person’s job is to review your plan in advance, and observe the tabletop exercise while taking notes. They will note WHAT happens, and what actions your team takes during the exercise. This role is silent, but detail oriented.
  • And the game master. The game master will present the scenario to the team. They will interact with the team during the exercise, and will also be the one who generates the random events that may throw the plan off track. It’s always shocking to me how many people would rather be the observer….to me, game master is where the fun is.

Your scenario, and the random event happenings, should fit your business. I tend to collect these for fun….and class them accordingly. A random happening where all credit card processing doubles due to an error in the point-of-sale process is perfect for a retail establishment…but an attorney’s office is going to look at me like I have three heads.
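For game masters who like their dice, the random-event table itself is easy to mechanize. A playful Python sketch, with entirely made-up example events (build your own table to fit the business at hand):

```python
import random

# Illustrative random-event table for a retail-flavored DR tabletop;
# swap in events that fit your own business and scenario.
EVENT_TABLE = {
    1: "Credit card processing doubles every transaction at the point of sale.",
    2: "The primary contact in your call tree does not answer.",
    3: "The backup restore reports a corrupt archive.",
    4: "A reporter calls the front desk asking about 'the outage'.",
    5: "The building loses power mid-exercise.",
    6: "Your cloud provider's status page shows a regional incident.",
}

def roll_event(rng=random):
    """Roll a d6 and return the matching random event."""
    return EVENT_TABLE[rng.randint(1, 6)]
```

Passing in a seeded `random.Random` instance makes a session reproducible, if you ever want to re-run the same sequence of surprises.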

Once the exercise is over, the game master and observer should go over all notes, and generate a report. What did the team do well, what fell off track, what updates does the plan need, and what is missing from the plan entirely?

Get the team together again. Buy ’em donuts – again, the good ones. Good coffee. Or lunch. Never underestimate the power of decent food on technical resources.

Try to start on a high note, and end on a high note. Make plans, as you review – what are the action items, and who owns them? When and how will the updates be done? When will you reconvene to review the updates and make sure they’re clear and correct?

Do this, do it regularly, and do NOT punish for the outcome. It’s an exercise in improvement, always…not something that your staff should dread.

Have a great DR exercise story? Have a REALLY great random event for my collection? I’d love to hear it – reach out. I’m on Twitter @TheTokenFemale, or lwallace@microsolved.com

Worm detection with HoneyPoint Security Server (HPSS): A real world example

This post describes a malware detection event that I actually experienced a few short years ago.

My company (Company B) had been acquired by a much larger organization (Company A) with a very large internal employee desktop-space. A desktop-space larger than national boundaries.

We had all migrated to Company A laptops – but our legacy responsibilities required us to maintain systems in the original IP-space of Company B. We used the legacy Company B VPN for that.

I had installed the HPSS honeypoint agent on my Company A laptop prior to our migration into their large desktop space.  After migration I was routinely VPN’ed into legacy Company B space, so a regular pathway for alerts to reach the console existed.

After a few months, the events shown in the diagram below occurred.

I started to receive email alerts directed to my Company B legacy email account. The alerts described TCP 1433 scans that my Company A laptop was receiving.  The alerts were all being thrown by the MSSQL (TCP 1433 – Microsoft SQL Server) HoneyPoint listener on my laptop.

I was confused – partly because I had become absorbed in post-acquisition activities and had largely forgotten about the HPSS agent running on my laptop.

After looking at the emails and realizing what was happening, I got on the HPSS console and used the HPSS event viewer to get details. I learned that the attackers were internal within Company A space. Courtesy of HPSS I had their source IP addresses and the common payload they all delivered.  Within Company A I gathered information via netbios scans of the source IPs.  The infected machines were all Company A laptops belonging to various non-technical staff on the East Coast of the U.S.

All of that got passed on to the Company A CIO office. IDS signatures were generated, tweaked, and eventually the alerts stopped.  I provided payload and IP information from HPSS throughout the process.

I came away from the experience with a firm belief that company laptops, outfitted with HoneyPoint agents, are an excellent way of getting meaningful detection out into the field.

I strongly recommend you consider something similar. Your organization’s company laptops are unavoidably on the front-line of modern attacks.

Use them to your advantage.

Ransomware Tabletop Exercises

When it comes to ransomware, it’s generally a good idea to have some contingency planning in place before your organization is faced with a real-life issue. Here at MicroSolved we offer tabletop exercises tailored to this growing epidemic in information technology.

What if your organization were hit by GoldenEye or WannaCry today? How quickly would you be able to react? Is someone looking at your router or server log files? Is that person clearly defined? How about separation of duties? Is the person looking over the log files also in charge of escalating an issue to higher management?

How long would it take for your organization to even know it was affected? Who would be in charge of quarantining the systems? Are you doing frequent backups? Would you bet your documents on it? To answer these questions and a whole lot more, it is beneficial to run a tabletop exercise.

A tabletop exercise should be conducted on an annual basis to evaluate the organization’s cyber incident prevention, mitigation, detection and response readiness, as well as the resources and strategies of the organization’s Incident Response Team.

As you approach incident response, there are a few things to keep in mind:

 

  1. Threat Intelligence and Preparation

Active threat intelligence will help your organization analyze, organize and refine information about potential attacks that could threaten the organization as a whole.

Once you have gathered threat intelligence, there needs to be a contingency plan in place for what to do in case of an incident. Because threats are constantly changing, this document shouldn’t be set in stone; it should be a living document that can change with active threats.

  2. Detection and Alerting

The IT personnel responsible for detection and alerting should be clearly defined in this contingency plan. What are your organization’s policies and procedures for how frequently the IT pros review log files and network traffic for any kind of intrusion?

  3. Response and Continuity

When an intrusion is identified, who is responsible for responding? This response team should be different from the team in charge of detection and alerting. Your organization should make a clearly outlined plan that handles response. The worst thing is finding out you don’t do frequent backups of your data when you need those backups! 

  4. Restoring Trust

After the incident is over, how are you going to regain the trust of your customers? How will they know their data was safe, and is safe now? There should be a clearly defined policy to help mitigate any doubt among your consumers. 

  5. After Action Review

What went wrong? Murphy’s law states that anything that can go wrong will. What were the major obstacles? How can they be prevented in the future? This is a great time to take lessons learned and place them into the contingency plan for the future. The best way to lessen the impact of Murphy is to discover an issue in a tabletop exercise rather than in a real-life emergency! 


This post was written by Jeffrey McClure.

Brands Being Used in Pornography Search Engine Poisoning

Recently, during one of our TigerTrax™ Targeted Threat Intelligence engagements, we were performing passive threat assessments for a popular consumer brand. In the engagement, we not only gathered targeted threat intelligence about their IT environments, applications and hosting partners, but also about the use of their brand on a global scale. The client had elected to take advantage of our dark net intelligence capabilities as well, and was keenly interested in how the dark net, deep web and underground portions of the Internet were engaged with their brand. This is a pretty common type of engagement for us, and we often find a wide variety of security, operational and reputational issues.

This particular time around, we ran into a rather interesting and new concern, at least on the dark net. In this case, a dark net pornography site had the consumer brand embedded as an HTML comment in the porn site’s main pages. Overall, there were several hundred name brands in the comments. This seems to have been done so that the search engines that index the site on the dark net associate the site with the brands. That means when a user searches for the brand name, they get the porn site returned as being associated. In this case, it was actually the first link on several of the dark net search sites we tested. The porn site appears to be using the brand names to lure eyeballs to the site – essentially to up the chance of finding a subscriber base for its particularly nasty set of pornography offerings. Search engine poisoning has been an issue on the public web for some time, and it is a commonly understood tactic to try to link your content to brands, basically serving as “click bait” for users. However, on the dark net, this was the first time we had observed the tactic being used so overtly.
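Detecting this comment-stuffing tactic on a page you can retrieve is straightforward. A minimal Python sketch (the brand list and matching rules here are purely illustrative, not part of any engagement tooling) pulls the HTML comments out of a page and checks them for brand names:

```python
import re

def brands_in_comments(html, brands):
    """Return the subset of brand names that appear inside HTML comments.

    A simple illustration of spotting comment-stuffing; real-world
    matching would want word boundaries, fuzzy variants, etc.
    """
    # Collect the text of every <!-- ... --> comment in the page.
    comments = re.findall(r"<!--(.*?)-->", html, flags=re.DOTALL)
    comment_text = " ".join(comments).lower()
    return {b for b in brands if b.lower() in comment_text}
```

Run against pages your crawler retrieves, a non-empty result for a brand that has no business being on the site is a strong hint that the brand is being used as search-engine bait.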

The brand owner was, of course, concerned about this illicit use of their brand. However, there is little they could do to respond, other than reporting the site to the authorities. Instead, after discussing various options, we worked with them to identify an action and response plan for how they would handle the problem if it became a public concern. We also worked with them to identify a standard process that they could follow to bring their existing legal, marketing, management and other parts of their incident response team up to date on threats like these as they emerged.

The client was very pleased with the discussion and with the findings we identified. While any misuse of their brand is a concern, having their brand associated with pornography or other illicit material is certainly unnerving. In the end, there is little that organizations can do, other than work with authorities or work on takedown efforts if the brand is misused on the public web. However, knowing that the issue is out there, and working threats like these into existing response plans, certainly goes a long way toward minimizing these kinds of risks.

To learn more about dark net brand issues, targeted threat intelligence or passive assessments, drop us a line (info@microsolved dot com) or get in touch on Twitter (@lbhuston) for a discussion. 

Just a Quick Thought & Mini Rant…

Today, I ran across this article, and I found it interesting that many folks are discussing how “white hat hackers” could go about helping people by disclosing vulnerabilities before bad things happen. 

There are so many things wrong with this idea, I will just riff on a few here, but I am sure you have your own list….

First off, the idea of a corps of benevolent hackers combing the web for leaks and vulnerabilities is mostly fiction. It’s impractical in terms of scale, scope and legality at best. All three of those issues are immediate faults.

But, let’s assume that we have a group of folks doing that. They face a significant issue – what do they do when they discover a leak or vulnerability? For DECADES, the security and hacking communities have been debating and riffing on disclosure mechanisms and notifications. There remains NO SINGLE UNIFIED MECHANISM for this. For example, let’s say you find a vulnerability in a US retail web site. You can try to report it to the site owners (who may not be friendly and may try to prosecute you…), you can try to find a responsible CERT or ISAC for that vertical (who may also not be overly friendly or responsive…) or you can go public with the issue (which is really likely to be unfriendly and may lead to prosecution…). How, exactly, do these honorable “white hat hackers” win in this scenario? What is their incentive? What if that web site is outside of the US, say in Thailand – how does the picture change? What if it is on the “dark web” – who exactly do they notify (not likely to be law enforcement, again given the history of unfriendly responses…) and how? What if it is a critical infrastructure site – let’s say an exposed Russian nuclear materials storage center – how do they report and handle that? How can they be assured that the problem will be fixed, and not leveraged for some nation-state activity, before it is reported or mitigated? 

Sound complicated? IT IS… And risky, for most parties. Engaging in vulnerability hunting has its dangers, and turning more folks loose on the Internet to hunt bugs and security issues also ups the risks for machines, companies and software already exposed to the Internet, since scan and probe traffic is likely to rise, and the skill sets of those hunting may not be commensurate with the complexity of the applications and deployments online. In other words, bad things may rise in frequency and severity, even as we seek to minimize them. Unintended consequences are certainly likely to emerge. This is a very complex system, so it is highly likely to be fragile in nature…

Another issue is the idea of “before bad things happen”. This is often a fallacy. Just because someone brings a vulnerability to you doesn’t mean they are the only ones who know about it. Proof of this? Many times during our penetration testing, we find severe vulnerabilities exposed to the Internet, and when we exploit them – someone else already has and the box has been pwned for a long long time before us. Usually, completely unknown to the owners of the systems and their monitoring tools. At best, “before bad things happen” is wishful thinking. At worst, it’s another chance for organizations, governments and law enforcement to shoot the messenger. 

Sadly, I don’t have the answers for these scenarios. But I think it is fair for the community to discuss the questions. It’s not just Ashley Madison; it’s all of the past and future security issues out there. Someday, we are going to have to come up with some mechanism to make it easier for those who know of security issues to disclose them. We also have to be very careful about calling for “white hat assistance” for the public at large. Like most things, we might simply be biting off more than we can chew… 

Got thoughts on this? Let me know. You can find me on Twitter at @lbhuston.

DOJ Best Practices for Breach Response

I stumbled on this great release from the US Department of Justice – a best practices guide to breach response.

Reading it is rather reminiscent of much of what we said in the 80/20 Rule of Information Security years ago. Namely, know your own environment, data flows, trusts and what data matters. Combine that with having a plan, beforehand, and some practice – and you at least get some decent insights into what your team needs and is capable of handling. Knowing those boundaries and when to ask for outside help will take you a long way.

I would also suggest you give our State of Security Podcast a listen. Episode 6, in particular, includes a great conversation about handling major breaches and the long term impacts on teams, careers and lives.

As always, if we can assist you in preparing a breach response process, good policies, performing those network mappings or running table top exercises (or deeper technical red team exercises), let us know. We help companies around the world master these skills and we have plenty of insights we would love to share!