3 Lessons From 30 Years of Penetration Testing

I’ve been doing penetration tests for 30 years and here are 3 things that have stuck with me.

I’ve been doing penetration testing for around three decades now. I started doing security testing back when most of the world reached systems over dial-up. I’ve worked on thousands of devices, systems, networks and applications – from the most sensitive systems in the world to some of the dumbest and most inane mobile apps (you know who you are…) that still have in-game purchases.

Over that time, these three lessons have stayed with me. They may not be the biggest lessons I’ve learned, or the most impactful, but they are the ones that have stuck with me in my career the longest. 

Lesson 1: The small things make or break a penetration test. The devil loves to hide in the details.

People often love to hear about the huge security issues. They thrill or gasp at the times when you find that breathtaking hole that causes the whole thing to collapse. But, for me, the vulnerabilities I’m most proud of, looking back across my career, are the more nuanced ones. The ones where I noticed something small and seemingly buried in the details. You know the type: you describe it to the developer, they respond with “So what?”, and then you show them how that small mistake opens a window that lets you casually step inside and steal their most critical data…

Time and time again, I’ve seen nuanced vulnerabilities hidden in encoded strings or hex values. Bad assumptions disguised in application session management or poorly engineered workflows. I’ve seen developers and engineers make mistakes so deeply hidden in the protocol exchanges or packet stream that anyone just running automated tools would have missed them. Those are my favorites. So, my penetration testing friend, pay attention to the deep details. Lots of devils hide there, and a few of them can lead you to the promised land. Do the hard work. Test every attack surface and threat vector; even if the other surfaces resisted, sometimes you can find a subtle, almost hidden attack surface that no one else noticed and make use of it.
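To make that concrete, here is a minimal, hypothetical sketch of the kind of “small detail” I mean. The cookie format, field names and values below are invented for illustration; the point is that an innocent-looking opaque string often decodes into guessable structure, and if the server trusts that structure without a signature, a “So what?” finding becomes an account takeover.

```python
# Hypothetical example: an "opaque" session token that is really just base64.
import base64

def decode_session_token(token: str) -> dict:
    """Decode a base64 token of the invented form 'user_id|role|timestamp'."""
    raw = base64.b64decode(token).decode("utf-8")
    user_id, role, issued_at = raw.split("|")
    return {"user_id": user_id, "role": role, "issued_at": issued_at}

def forge_admin_token(original: str) -> str:
    """If the server trusts these fields without a signature, swapping the
    role field turns a "So what?" finding into an account takeover."""
    fields = decode_session_token(original)
    forged = f"{fields['user_id']}|admin|{fields['issued_at']}"
    return base64.b64encode(forged.encode("utf-8")).decode("ascii")

if __name__ == "__main__":
    token = base64.b64encode(b"10042|user|1725552000").decode("ascii")
    print(decode_session_token(token))   # {'user_id': '10042', 'role': 'user', ...}
    print(forge_admin_token(token))      # same token with the role swapped to 'admin'
```

A properly built token would be random or cryptographically signed; spotting the ones that aren’t is exactly the sort of detail automated scanners rarely flag.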

Lesson 2: A penetration test is usually judged by the report. Master report writing to become a better penetration tester. 

This is one of the hardest things for my mentees to grasp. You can geek out with other testers and security nerds about your latest uber stack smash or the elegant way you optimized the memory space of your exploit – but customers won’t care. Save yourself the heartbreak and disappointment, and save them the glazed-eyed look that comes when you present it to them. They ONLY CARE about the report.

The report has to be well written. It has to be clear. It has to be concise. It has to make them understand what you did, what you found and what they need to do about it. The more pictures, screenshots, graphs and middle-school-level language, the better. They aren’t dumb or ignorant; they just have other work to do and need information they can act on in the cleanest, clearest and fastest way possible. They don’t want to Google technical terms and they have no patience for jargon. So, say it clearly and say it in the shortest way possible if you want to be the best penetration tester they’ve seen.

That’s hard to swallow. I know. But, you can always jump on Twitter or Slack and tell us all about your L33T skillz and the newest SQL technique you just discovered. Even better, document it and share it with other testers so that we all get better.

Lesson 3: Penetration tests aren’t always useful. They can be harmful.

Lastly, penetration tests aren’t always a help. They can cause damage – to weak infrastructures, or to careers. Breaking things usually comes with a cost, and delivering news of critical failures to upper management is not without its risks. I’ve seen CIOs and CISOs lose their jobs over a penetration test report. I’ve seen upper management and boards respond in entirely unkind and often undeserved ways. In fact, if you don’t know what assets your organization has to protect, what controls you have in place, and/or you haven’t done some level of basic blocking and tackling – forget pen-testing altogether and skip to an inventory, vulnerability assessment, risk assessment or mapping engagement. Save the pen-testing cost and its dangerous results for when you have more situational awareness.

Penetration testing is often good at finding the low water mark. It often reveals the paths of least resistance and common areas of failure. Unfortunately, these are often left open by a lack of basic blocking and tackling. While it’s good news that the basics go a long way toward protecting us and our data, the bad news is that real-world attackers are capable of much more. Finding those edge cases – the things that go beyond the basics, the attack vectors less traveled, the bad assumptions, the shortcuts, the thing you missed even when you’re doing the basics well – that’s when penetration tests have their biggest payoffs.

Want to talk more about penetration testing, these lessons or finding the right vulnerability management engagement for your organization? No problem, get in touch and I’ll be happy to discuss how MicroSolved can help. We can do it safely, make sure it is the best type of engagement for your maturity level and help you drive your security program forward. Our reports will be clean, concise and well written. And, we’ll pay attention to the details, I promise you that. 🙂 

To get in touch, give me a call at (614) 351-1237, drop me a line via this webform or reach out on Twitter (@lbhuston). I love to talk about infosec and penetration testing. It’s not just my career, but also my passion.

The Dark Net Seems to be Changing

The dark net is astounding in its rapid growth and adoption. In my ongoing research into underground sites, I continue to be amazed at just how much traditional web-based information is making its way to the dark net. As an example, in my last few research sessions I have noticed several sites archiving educational white papers, economic analyses and more traditional business data – across a variety of languages. I am also starting to see the tide of criminal-related and “black market” data change, in that it has begun to be displaced, in my opinion, by more traditional forms of data, discourse and commercialization.

It is not quite at the level of even the early world wide web, but it is clearly headed in a direction where the criminal element, underground markets and other forms of illicit data are being forced to share the dark net with significantly more commercial and social-centric content. Or at least, it feels that way to me. I certainly don’t have hard metrics to back it up, but that is my sense as I work and move through the dark net in my research.

There is still a ways to go before .onion sites are paved and turned into consumer malls – but that horizon seems closer now than ever before. Let me know what you think on Twitter (@lbhuston).

Podcast Episode 9 Available

Check out Episode 9 of the State of Security Podcast, just released!

This episode runs around an hour and features a very personal interview with me in the hot seat and the mic under the control of @AdamJLuck. We cover topics like security history, my career, what I think is on the horizon, and what my greatest successes and failures have been. He even digs into what I do every day to keep going. Let me know what you think, and as always, thanks for listening!

Podcast Episode 8 is Out

This time around we riff on Ashley Madison (minus the morals of the site), online privacy, OPSEC and the younger generation with @AdamJLuck. Following that is a short segment with John Davis. Check it out and let us know your thoughts via Twitter – @lbhuston. Thanks for listening!


Just a Quick Thought & Mini Rant…

Today, I ran across this article, and I found it interesting that many folks are discussing how “white hat hackers” could go about helping people by disclosing vulnerabilities before bad things happen. 

There are so many things wrong with this idea, I will just riff on a few here, but I am sure you have your own list….

First off, the idea of a corps of benevolent hackers combing the web for leaks and vulnerabilities is mostly fiction. It’s impractical in terms of scale, scope and legality, at best. All three of those issues are immediate faults.

But, let’s assume that we have a group of folks doing that. They face a significant issue – what do they do when they discover a leak or vulnerability? For DECADES, the security and hacking communities have been debating and riffing on disclosure mechanisms and notifications. There remains NO SINGLE UNIFIED MECHANISM for this. For example, let’s say you find a vulnerability in a US retail web site. You can try to report it to the site owners (who may not be friendly and may try to prosecute you…), you can try to find a responsible CERT or ISAC for that vertical (who may also not be overly friendly or responsive…), or you can go public with the issue (which is really likely to be unfriendly and may lead to prosecution…). How, exactly, do these honorable “white hat hackers” win in this scenario? What is their incentive? What if that web site is outside of the US, say in Thailand – how does the picture change? What if it is on the “dark web” – who exactly do they notify (not likely law enforcement, again given the history of unfriendly responses…) and how? What if it is a critical infrastructure site – say an exposed Russian nuclear materials storage center – how do they report and handle that? How can they be assured that the problem will be fixed and not leveraged for some nation-state activity before it is reported or mitigated?

Sound complicated? IT IS… And risky for most parties. Engaging in vulnerability hunting has its dangers, and turning more folks loose on the Internet to hunt bugs and security issues also ups the risks for machines, companies and software already exposed to the Internet, since scan and probe traffic is likely to rise, and the skill sets of those hunting may not be commensurate with the complexity of the applications and deployments online. In other words, bad things may rise in frequency and severity, even as we seek to minimize them. Unintended consequences are certainly likely to emerge. This is a very complex system, so it is highly likely to be fragile in nature…

Another issue is the idea of “before bad things happen”. This is often a fallacy. Just because someone brings a vulnerability to you doesn’t mean they are the only ones who know about it. Proof of this? Many times during our penetration testing, we find severe vulnerabilities exposed to the Internet, and when we exploit them – someone else already has, and the box has been pwned for a long, long time before us. Usually completely unknown to the owners of the systems and their monitoring tools. At best, “before bad things happen” is wishful thinking. At worst, it’s another chance for organizations, governments and law enforcement to shoot the messenger.

Sadly, I don’t have the answers for these scenarios. But I think it is fair for the community to discuss the questions. It’s not just Ashley Madison; it’s all of the past and future security issues out there. Someday, we are going to have to come up with some mechanism that makes it easier for those who know of security issues to report them safely. We also have to be very careful about calling for “white hat assistance” for the public at large. Like most things, we might simply be biting off more than we can chew…

Got thoughts on this? Let me know. You can find me on Twitter at @lbhuston.

Artificial Intelligence – Let’s Let Our Computers Guard Our Privacy For Us!

More and more computer devices are designed to act like they are people, not machines. We as consumers demand this of them. We don’t want to have to read and type; we want our computers to talk to us and we want to talk to them. On top of that, we don’t want to have to instruct our computers in every little detail; we want them to anticipate our needs for us. Although this part doesn’t really exist yet, we would pay through the nose to have it. That’s the real driver behind the push to achieve artificial intelligence. 

Think for a minute about the effect AI will have on information security and privacy. One of the reasons that computer systems are so insecure now is that nobody wants to put in the time and drudgery to fully monitor their systems. But an AI could not only monitor every minuscule input and output, it could do it 24x7x365 without getting tired. Once it detected something, it could act to correct the problem itself. Not only that, a true intelligence would be able to monitor trends and conditions and anticipate problems before they even had a chance to occur. Indeed, once computers have fully matured, they should be able to guard themselves more completely than we ever could.
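We aren’t anywhere near that yet, but the tireless part of the idea is already within reach. Below is a toy sketch – not AI, just an always-on monitor that keeps a rolling baseline of one metric and flags anything that drifts far from it. The metric (failed logins per minute), window size and threshold are all invented for illustration.

```python
# Toy always-on monitor: flags values far outside a rolling baseline.
# The metric and thresholds are invented for illustration, not a real product.
from collections import deque
from statistics import mean, stdev

class RollingAnomalyMonitor:
    def __init__(self, window: int = 60, sigmas: float = 3.0):
        self.history = deque(maxlen=window)   # last N observations
        self.sigmas = sigmas                  # how far from "normal" counts as weird

    def observe(self, value: float) -> bool:
        """Record one observation; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:           # wait for a minimal baseline first
            mu, sd = mean(self.history), stdev(self.history)
            if sd > 0 and abs(value - mu) > self.sigmas * sd:
                anomalous = True
        self.history.append(value)
        return anomalous

if __name__ == "__main__":
    monitor = RollingAnomalyMonitor()
    quiet_minutes = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 2]   # failed logins per minute
    for count in quiet_minutes + [45]:                      # then a sudden spike
        if monitor.observe(count):
            print(f"ALERT: {count} failed logins in one minute looks abnormal")
```

A human gets bored doing this after an hour; the script doesn’t. The gap between this toy and the AI described above is exactly the part we would pay through the nose for.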

And besides privacy, think of the drudgery and consternation an AI could save you. In a future world created by a great science fiction author, Charles Sheffield, everyone had a number of “facs” protecting their time and privacy. A “facs” is a facsimile of you produced by your AI. These facs would answer the phone for you, sort your messages, schedule your appointments and perform a thousand and one other tasks that use up your time and try your patience. When they run across situations that they can’t handle, they simply bring you into the loop to make the decisions. Makes me wish this world was real and already with us. Hurry up AI! We really need you!

Should MAD Make its Way Into the National Cyber-Security Strategy?

Arguably, Mutually Assured Destruction (MAD) has kept us safe from nuclear holocaust for more than half a century. Although we have been on the brink of nuclear war more than once and the Doomsday clock currently has us at three minutes ‘til midnight, nobody ever seems ready to actually push the button – and there have been some shaky fingers indeed on those buttons! 

Today, the Sword of Damocles hanging over our heads isn’t just the threat of nuclear annihilation; now we have to include the very real threat of cyber Armageddon. Imagine hundreds of coordinated cyber-attackers using dozens of zero-day exploits and other attack mechanisms all at once. The consequences could be staggering! GPS systems failing, power outages popping up, banking software failing, ICS systems going haywire, distributed denial of service attacks on hundreds of web sites, contradictory commands everywhere, bogus information popping up and web-based communications failures could be just a handful of the likely consequences. The populace would be hysterical!

So, keeping these factors in mind, shouldn’t we be working diligently on developing a cyber-MAD capability to protect ourselves from this very real threat vector? MAD has a proven track record, and we already have decades of experience in running, controlling and protecting such a system. That experience would ease the public’s very justifiable fear of creating a Frankenstein that may be misused to destroy ourselves.

Plus, think of the security implications of developing cyber-MAD. So far, America has no national cyber-security laws, and the security mechanisms currently used in the country are varied and less than effective at best. Creating cyber-war capabilities would teach us lessons we can learn no other way. To the extent we become the masters of subverting and destroying cyber-systems, we would reciprocally become the masters of protecting them. When it comes right down to it, I guess I truly believe in the old adage “the best defense is a good offense”.

Thanks to John Davis for this post.

14 Talks I Would Like to Attend This Summer

Here is just a quick list, off the top of my head, of some of the topics I would like to see someone do talks about at security events this summer. If you are in need of a research topic, or something to dig into for a deep dive, give one of these a try. Who knows, maybe you will see me in the audience. If so, then feel free to sit down for a cup of coffee and a chat! 

Here’s the list, in no particular order:

  1. machine learning and analytics in infosec
  2. detection capabilities with nuanced visibility at scale
  3. decision support from security analytics & automated systems based on situational awareness
  4. rational controls and how to apply them to different industries
  5. crowdsourcing of policies and processes – wiki-based approaches
  6. internal knowledge management for security teams
  7. tools for incident response beyond the basics
  8. tools and processes for business continuity after a breach – show us your guide to “Ouchies!”
  9. attacker research that is actually meaningful and that does NOT revolve around IOCs
  10. skills and capability mapping techniques for security teams and their management
  11. new mechanisms for log management and aggregation beyond Splunk & SIEM – how would the Death Star handle logs?
  12. near-real-time detection at a meaningful level – even better if admins can make decisions and take actions from their iPhone/iWatch 😛
  13. extrusion/exfiltration testing capabilities & metrics-focused assessment approaches for testing exfil robustness
  14. network mapping and asset discovery techniques and tools – how would the Death Star map their IT networks? 🙂

Give me a shout on Twitter if you want to explore these together – @lbhuston.

Three Things That Need Spring Cleaning in InfoSec

Spring is here in the US, and that brings with it the need to do some spring cleaning. So, here are some ideas of some things I would like to see the infosec community clean out with the fresh spring air!

1. The white male majority in infosec. Yes, I am a white male, also middle-aged… But seriously, infosec needs more brains with differing views and perspectives. We need a mix of conservative, liberal and radical thought. We need different nationalities and cultures. We need both sexes represented equitably. We need balance and a more organic talent pool to draw from. Let’s get more people involved, and open our hearts and minds to alternatives. We will benefit from the new approaches!

2. The echo chamber. It needs some fresh air. There are a lot of dropped ideas and poor choices lying around in there, so let’s sweep that out and start again. I believe echo chamber effects are unavoidable in small, focused groups, but honestly, can’t we set aside our self-referential shouting, inside jokes, rock star egos and hubris for just one day? Can’t we open a window and sweep some of the aged and now decomposing junk outside? Then, maybe, we can start again with some fresh ideas and return to loving/hating each other in the same breath. As a stopgap, I am nominating May 1, a Friday this year, as Global Infosec Folks Talk to Someone You Don’t Already Know Day (GIFTTSYDAKD). On this day, ignore your peers in the echo chamber on social media and actually go out and talk to some non-security people who don’t have any idea what you do for a living. Take them to lunch. Discuss their lives, what they do when they aren’t working, and how security and technology impact their day to day. Just for one day, drop out of the echo chamber, celebrate GIFTTSYDAKD, and see what happens. If you don’t like it, the echo chamber can come back online with a little fresh air on May 2 at 12:01 AM EST. How’s that? Deal? 🙂

3. The focus on compliance over threats. Everyone knows in their hearts that this is wrong. It just feels good. We all want a gold star, a good report card or a measuring stick to say when we have reached the goal. The problem is, crime is an organic thing. Organic, natural things don’t really follow policy, don’t stick to the script and don’t usually care about your gold star. Compliant organizations get pwned – A LOT (read the news). Let’s spring clean the idea of compliance. Let’s get back to the rational idea that compliance is the starting point: it is the level of mutually assured minimal controls, and then you have to build on top of it, holistically and completely custom to your environment. You have to tune, tweak, experiment, fail, succeed, revamp and continually invest in your security posture. FOREVER. There is no “end game”. There is no “Done!”. The next “bad thing” that visits the world will be either entirely new or a new variant, and it will be capable of subverting some subset, or the entire set, of controls. That means new controls. Lather, rinse, repeat… That’s how life works. To think otherwise is irrational and likely dangerous.

That’s it. That’s my spring cleaning list for infosec. What do you want to see changed around the infosec world? Drop me a line on Twitter (@lbhuston) and let me know your thoughts. Thanks for reading, and I hope you have a safe, joyous and completely empowered Spring season!

Malware Can Hide in a LOT of Places

This article about research showing how malware could be hidden on Blu-ray discs should serve as a reminder to us all that a lot of those “smart” and “Internet-enabled” devices we are buying can also be a risk to our information. In the past, malware has used digital picture frames, vendor disks and CDs, USB keys, smart “dongles” and a wide variety of other things that can plug into a computer or network as a transmission medium.

As the so-called Internet of Things (IoT) continues to grow in both substance and hype, more and more of these devices will be prevalent across homes and businesses everywhere. In a recent visit to a neighbor, I enumerated (with permission) more than 30 different computers, phones, tablets, smart TVs and other miscellaneous devices on their home network. This family of five has smart radios, smart TVs and even a Wi-Fi-connected set of toys that their kids play with. That’s a LOT of places for malware to hide…
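If you want to repeat that exercise on your own network (with permission, of course), even a crude ping sweep is enough to be surprised by how much is out there. Below is a rough sketch using only the Python standard library; the subnet is an assumption, and a ping sweep will miss anything that doesn’t answer ICMP, so a purpose-built scanner like nmap will generally find more.

```python
# Crude home-network inventory: ping every address in a /24 and list what answers.
# Only run this on a network you own or have permission to scan.
import ipaddress
import platform
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1.0/24"   # assumed home subnet -- change to match your own network

def is_alive(ip: str) -> bool:
    """Send a single ping and report whether the host answered."""
    if platform.system() == "Windows":
        cmd = ["ping", "-n", "1", "-w", "1000", ip]   # one echo, 1000 ms timeout
    else:
        cmd = ["ping", "-c", "1", "-W", "1", ip]      # one echo, 1 second timeout (Linux)
    result = subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

if __name__ == "__main__":
    hosts = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        alive = [ip for ip, up in zip(hosts, pool.map(is_alive, hosts)) if up]
    print(f"{len(alive)} devices answered on {SUBNET}:")
    for ip in alive:
        print("  " + ip)
```

Run it once and count how many of the answering addresses you can actually name. The ones you can’t are exactly the “smart” devices this post is worried about.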

I hope all of us can take a few minutes and just give that some thought. I am sure few of us really have a plan that includes such objects. Most families are lucky if they have a firewall and AV on all of their systems, let alone a plan for “smart devices” and other network gook.

How will you handle this? What plans are you making? Ping us on Twitter (@lbhuston or @microsolved) and let us know your thoughts.