Three Tough Questions with Aaron Bedra

This time I interviewed Aaron Bedra about his newest creation: Repsheet.

Aaron’s Bio:

Aaron is the Application Security Lead at Braintree Payments. He is the co-author of Programming Clojure, 2nd Edition, as well as a frequent contributor to the Clojure language. He is also the creator of Repsheet, a reputation-based intelligence and security tool for web applications.

Question #1: You created a tool called Repsheet that takes a reputational approach to web application security. How does it work, and why is it important to approach the problem differently than traditional web application firewalling?

I built Repsheet after finding lots of gaps in traditional web application security. Simply put, it is a web server module that records data about requests, and either blocks traffic or notifies downstream applications of what is going on. It also has a backend to process information over time and outside the request cycle, and a visualization component that lets you see the current state of the world. If you break down the different critical pieces that are involved in protecting a web application, you will find several parts:

* Solid and secure programming practices

* Identity and access management

* Visibility (what’s happening right now)

* Response (make the bad actors go away)

* HELP!!!! (DDoS and other upstream based ideas)

* A way to manage all of the information in a usable way

This is a pretty big list. There are certainly some things on this list that I haven’t mentioned as well (crypto management, etc), but this covers the high level. Coordinating all of this can be difficult. There are a lot of tools out there that help with pieces of this, but don’t really help solve the problem at large.

The other problem I have is that although I think having a WAF is important, I don't necessarily believe in using it to block traffic. There are just too many false positives and too many things that can go wrong. I want to be certain about a situation before I act aggressively on it. With that in mind, I decided to start by simply making a system that records activity and listens to ModSecurity. It stores what has happened and provides an interface that lets the user act manually on that information. You can think of it as a half-baked SIEM.

That alone actually proved to be useful, but there are many more things I wanted to do with it. The issue was doing so in a manner that didn’t add overhead to the request. This is when I created the Repsheet backend. It takes in the recorded information and acts on it based on additional observation. This can be done in any form and it is completely pluggable. If you have other systems that detect bad behavior, you can plug them into Repsheet to help manage bad actors.  
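The flow described above — the module records reputation data, and downstream applications consult it to decide how to treat an actor — can be sketched in a few lines. This is a hypothetical illustration, not Repsheet's actual API or storage schema; the in-memory dict keyed by client IP stands in for the shared store the module and backend would use outside the request cycle:

```python
# Hypothetical reputation states an actor can be in.
BLACKLISTED, MARKED = "blacklisted", "marked"

# Stand-in for the shared reputation store populated by the web server module.
reputation = {
    "203.0.113.7": BLACKLISTED,   # repeatedly triggered ModSecurity rules
    "198.51.100.9": MARKED,       # suspicious activity recorded, not yet blocked
}

def handle_request(client_ip):
    """Return an (HTTP status, treatment) pair based on the actor's reputation."""
    status = reputation.get(client_ip)
    if status == BLACKLISTED:
        return 403, "blocked"            # act aggressively only on certainty
    if status == MARKED:
        return 200, "extra-scrutiny"     # serve, but let the app tighten controls
    return 200, "normal"
```

The key design point is that unknown and merely "marked" actors are still served; only confirmed bad actors are blocked, which is what keeps the false-positive problem of aggressive WAF blocking at bay.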

The visualization component gives you the detailed and granular view of offenses in progress, and gives you the power to blacklist with the click of a button. There is also a global view that lets you see patterns of data based on GeoIP information. This has proven to be extremely useful in detecting localized botnet behavior.

So, with all of this, I am now able to manage the bottom part of my list. One of the pieces that was recently added was upstream integration with Cloudflare, where the backend will automatically blacklist via the Cloudflare API, so any actors that trigger blacklisting will be dealt with by upstream resources. This helps shed attack traffic in a meaningful way.

The piece that was left unanswered is the top part of my list. I don't want to automate good programming practices. That is a culture thing. You can, of course, use automated tools to help make it better, but you need to buy in. The identity and access management piece was still interesting to me, though. Once I realized that I already had data on bad actors, I saw a way to start to integrate this data that I was using in a defensive manner all the way down to the application layer itself. It became obvious that with a little more effort, I could start to create situations where security controls were dynamic based on what I know or don't know about an actor. This is where the idea of increased security and decreased friction really set in, and I saw Repsheet become more than just a tool for defending web applications.

All of Repsheet is open sourced with a friendly license. You can find it on GitHub.

There are multiple projects that represent the different layers Repsheet offers. There is also a brochureware site that will soon include tutorial information and additional implementation examples.

Question #2: What is the future of reputational interactions with users? How far do you see reputational interaction going in an enterprise environment?

For me, the future of reputation-based tooling is not strictly bound to defending against attacks. I think once the tooling matures and we start to understand how to derive intent from behavior, we can start to create much more dynamic security for our applications. If we compare web security maturity to the state of web application techniques, we are sitting right around the late '90s. I'm not strictly talking about our approach to preventing breaches (although we haven't progressed much there either); I'm talking about the static nature of security and the impact it has on the users of our systems. For me, the holy grail is an increase in security and a decrease in friction.

A very common example is the captcha. Why do we always show it? Shouldn’t we be able to conditionally show it based on what we know or don’t know about an actor? Going deeper, why do we force users to log in? Why can’t we provide a more seamless experience if we have enough information about devices, IP address history, behavior, etc? There has to be a way to have our security be as dynamic as our applications have become. I don’t think this is an easy problem to solve, but I do think that the companies that do this will be the ones that succeed in the future.
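The conditional-captcha idea can be made concrete with a short sketch. This is a hypothetical illustration, not Repsheet code; the trust signals and their names (`device_seen_before`, `ip_history_clean`) are assumptions about what a reputation store might record:

```python
def needs_captcha(actor):
    """Show a captcha only when we lack trust signals for this actor.

    `actor` is a hypothetical dict of reputation facts gathered elsewhere
    (device history, IP address history, prior behavior).
    """
    if actor.get("blacklisted"):
        return True                     # known bad actor: always challenge
    known_device = actor.get("device_seen_before", False)
    clean_history = actor.get("ip_history_clean", False)
    # Enough positive signals: skip the captcha and reduce friction.
    return not (known_device and clean_history)
```

A returning user on a recognized device with a clean IP history sails through; an unknown or flagged actor gets the challenge. Security goes up for the risky cases while friction goes down for everyone else.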

Tools like Repsheet aim to provide this information so that we can help defend against attacks, but also build up the knowledge needed to move toward this kind of dynamic security. Repsheet is by no means there yet, but I am focusing a lot of attention on trying to derive intent through behavior and make these types of ideas easier to accomplish.

Question #3: What are the challenges of using something like Repsheet? Do you think it’s a fit for all web sites or only specific content?

I would like to say yes, but realistically I would say no. The first group this doesn't make sense for is sites without a lot of exposure or potential loss. If you have nothing to protect, then there is no reason to go through the trouble of setting up these kinds of systems. They basically become a part of your application infrastructure, and it takes dedicated time to make them work properly. Along those lines, static sites with no users and no real security restrictions don't necessarily see the full benefit. That being said, visibility into what is going on from a security standpoint can still help spot events in progress or even pending attacks. I have seen lots of interesting things since I started deploying Repsheet, including botnets sizing up a site before launching an attack. Now that I have seen that, I have started to turn it into an early warning system of sorts to help prepare.

The target audience for Repsheet is companies that have already done the web security basics and want to take the next step forward. A full Repsheet deployment involves WAF and GeoIP-based tools as well as changes to the application under the hood. All of this requires time and people to make it work properly, so it is a significant investment. That being said, the benefits of visibility, response to attacks, and dynamic security are a huge advantage. Like every good investment in infrastructure, it can set a company apart from others if done properly.

Thanks to Aaron for his work and for spending time with us! Check him out on Twitter, @abedra, for more great insights!

3 Tough Questions with Bill Sempf

Recently, I caught up over email with Bill Sempf. He had some interesting thoughts on software security, so we decided to do a 3 Tough Questions with him. Check this out!


A short biography of Bill Sempf: In 1992, Bill Sempf was working as a systems administrator for The Ohio State University, and formalized his career-long association with inter-networking. While working for one of the first ISPs in Columbus in 1995, he built the second major web-based shopping center, Americash Mall, using Cold Fusion and Oracle. Bill’s focus started to turn to security around the turn of the century. Internet driven viruses were becoming the norm by this time, and applications were susceptible to attack like never before. In 2003, Bill wrote the security and deployment chapters of the often-referenced Professional ASP.NET Web Services for Wrox, and began his career in pen testing and threat modeling with a web services analysis for the State of Ohio. Currently, Bill is working as a security-minded software architect specializing in the Microsoft space. He has recently designed a global architecture for a telecommunications web portal, modeled threats for a global travel provider, and provided identity policy and governance for the State of Ohio. Additionally, he is actively publishing, with the latest being Windows 8 Application Development with HTML5 for Dummies.


Question #1: Infosec folks have been talking about securing the SDLC for almost a decade, if that is truly the solution, why haven’t we gotten it done yet?

For the same reason that there are still bugs in software – the time and money necessary to fix things. Software development is hard, and it takes a long time and lots of money to write secure software. Building security into the lifecycle, rather than just waiting and adding it at the test phase, is just prohibitively expensive.

That said, some companies have successfully done it. Take Microsoft, for instance. For a significant portion of their history, Microsoft was the butt of nearly every joke in the security industry. Then they created and implemented the SDL, and now Microsoft products don't even show up on the top 10 lists anymore. It is possible, and it should be done. It's just very expensive, and companies would rather take on the risk than spend the money up front.

Question #2: How can infosec professionals learn to better communicate with developers? How can we explain how critical things like SQL injections, XSS and CSRF have become in a way that makes developers want to engage?

There are two fronts to this war: the social and the technical. I think both have to be implemented in good measure to extract any success.

On the social side, infosec pros need to get out of the lab and start talking at developer conferences. I have been doing this in good measure since 2010 and have encouraged other community members to do the same. It is starting to work. This year at CodeMash, Rob Gillen and I gave a day-long training on everything from malware analysis to Wi-Fi to data protection. The talk was so popular that we needed to be moved into a bigger room. Security is starting to creep into developers' scope of vision.

Technically, though, security flaws need to be treated just like any other defect. The application security test team needs to be part of QA, treated just like anyone else in QA, given access to the defect tracking system, and expected to post defects against the system as part of the QA process. Until something like the Microsoft SDL is implemented in an organization, integrating security testing with QA is the next best thing.

Question #3: What do you think happens in the future as technology dependencies and complexities ramp up? How will every day life be impacted by information security and poor development/implementations?

More and more applications and devices are using a loosely connected model to support fast UIs and easy functional development. This means more and more business functionality exposed in the form of SOAP and REST services. These endpoints are often formerly internal services that were used to provide the web server with functionality, but are gradually being exposed in order to support mobile applications. Rarely are they fully tested. In the short term future, this is going to be the most significant challenge to application security. In the long term, I have no idea. Things change so fast, it is nearly impossible to keep up.


Thanks to Bill for sharing his insights. You can discuss them with him on Twitter, where he is @sempf. As always, thanks for reading!

3 Tough Questions with Dan Houser

I recently spent some time discussing certifications, training, the future of the information security community and the "hacker conference" scene with Dan Houser. While I don't agree with some of his views, especially about how hackers play a role in our community, I think his viewpoints are interesting and worth a discussion. I also think his keen attention to sexism in our community is both timely and important for us to resolve. Here are my 3 Tough Questions for Dan.

A Short Biography of Mr. Houser: Dan Houser (@SecWonk) is Security & Identity Architect for a global healthcare company, with 20+ years experience creating security, cryptography and eBusiness solutions. He is a frequent speaker at regional and international security conferences, a Distinguished Toastmaster, published author, and serves on the (ISC)2 Board of Directors. Dan is passionate about professional development, teaching, motorcycles, Safe and Secure Online, advancing the role of women in Information Security, ethics, certification, and, most of all, his family.


Question #1: I know you are involved in a lot of professional organizations focused not only on providing continuing education for Information Security Professionals, but also on teaching information security skills to adults and children in the community. When Information Security Professionals come to training courses and seminars, we see they have a wide range of skills, various areas of interest and different levels of technical capability. Why do you think information security has so many problems with level-setting knowledge? Is it simply because there is such a large body of information that must be encompassed in order to be an effective security person? Or could it be the high rate of change present in the industry, or even a particular personality trait common to information security practitioners? Why is it so hard to build an Information Security Professional?


Mr. Houser: There are many reasons why it's hard to build an Information Security Professional (there are some great clues in the awesome book "The Great Influenza" by John M. Barry – definitely worth a read!). In essence, we are building a new profession from the ground up, and 50% of the job titles you now see in information security (infosec) didn't even exist 30 years ago. For example, my own job title didn't exist 15 years ago: Sr. Security & Identity Architect.

We can look to modern medicine as a parallel; its scientific era began roughly 100 years ago. Although medicine has been practiced since someone first noticed that bear grease on a wound seemed to help in healing, it's only in the recent past that science was diligently applied to the practice of medicine. Law enforcement experienced the same thing when a scientific study of policing reversed a 4,000-year-old belief that patrolling was an effective deterrent to crime. The study showed that this practice in fact had zero impact on crime prevention. Although I hope it won't take us 4,000 years to really move forward, we have to anticipate that there are a number of changes in our field that universities and corporations are finding difficult to track. One lesson we can learn from medicine is the advent of the "nurse practitioner": a medical professional who has nearly the same skill in general medicine as a full M.D., but who requires only about half the investment in schooling.

At this point, the information security industry does not have an undergraduate program (at least not one I'm familiar with) that can turn out graduates who are ready to jump right into InfoSec at a meaningful level. We also lack a journeyman/apprenticeship program in the profession. By studying our craft scientifically, encouraging professionalism, and understanding "what it is that makes a great Information Security Professional", we will be able to determine the root studies necessary for competency and get to train people on "the right thing".

We have to discard the notion that there is a single path to information security. We have to stop teaching InfoSec Professionals from curricula created to churn out developers, and understand the complete spectrum of pathways that lead to true information security. We need to understand what is valuable (and what is not).

I have made an impassioned plea (and continue to do so) for an investment in scientific study of the information security profession, in particular to understand the root causes behind the lack of women in the field. Are they not finding the same on-ramps as men? Are they taking an off-ramp due to sexism, lack of opportunity, or lack of fulfillment? We have no clue as an industry. We have some solid data showing Science, Technology, Engineering and Math (STEM) issues with gender split, and that STEM isn't engaging and keeping women in associated disciplines. But that doesn't necessarily mean it is the root cause in the information security industry; we just pretend to believe it is so, just as police practiced patrolling and doctors used blood-letting because "everyone knows it helps".

Our profession is at the same point as breast-cancer research (note: not being crass, I lost my Mom to cancer). We are focusing so much on walks, runs, screening and exams that we have COMPLETELY lost sight of the fact that 18,000 women in the US die each year from breast cancer, and we have NO CLUE WHY. Frankly, that ticks me off. We must focus on understanding the cause before we can make any reasonable statements about a cure.

Through an actual scientific study of the development of the Information Security Professional (and I'm talking about actual PhD sociologists and psychology folks, not geeks in InfoSec), we can learn the actual on-ramps and off-ramps in our profession: what creates a strong InfoSec Professional, why women don't enter or quickly leave the profession, and how to start repairing the actual problems with the industry instead of fighting only symptoms. That will usher in a new age for creating Information Security Professionals and truly achieve gender equity in our field.


Question #2: As you look to the future of information security, what do you see as the long-term role of certifying bodies such as (ISC)2, ISACA, etc.? What about the future roles of educational organizations such as OWASP, ISSA and the like?


Mr. Houser: I think that the future is bright for these organizations, because we have a continued need to differentiate professionals from pretenders, and certification is the only mechanism I can currently see that allows us to know that an individual has attained a base level of competency in a stated area of expertise. According to Frost & Sullivan statistics, the profession is going to nearly double in the next decade, which will create TREMENDOUS market pressures. We must find InfoSec professionals somewhere, and we must have mechanisms in place that allow us to determine whether or not they have the requisite skills. I see no other viable means of determining that cross-market other than certification.

Additionally, Security and Audit professional certification authorities like (ISC)2, ASIS and ISACA provide a code of ethics that governs the membership. And that's inherently quite valuable: to know that my peers have not only met an independent standard for competency and knowledge, but are also held to an ethical code of conduct for their behavior. With the profession doubling in the next decade, we're going to have a lot of people entering from other professions, and certifications will grow in importance. (ISC)2, ASIS and ISACA further promote professionalism through local chapter representation, which is another key way to tie together the complete package.

Educational organizations that provide solid educational experiences, such as ISSA, OWASP and Infragard, can also provide vital networking and educational programs in communities to broaden the reach of the InfoSec community. I’d also add that there are some non-traditional avenues that should be considered — such as LockSport/TOOOL, Make and Meetup IT communities who often fill in the edges of our BoK with valuable insights.


Question #3: What role does the “Not a Conference” movement like BSides, DerbyCon, NotaCon play in advancing Information Security?

Mr. Houser: Our profession is challenging the nature of information use, and we face exceptionally difficult challenges in protecting intellectual property against an increasingly advanced foe in the face of mobile, big data, cloud and internationalization. One challenge we have as an industry is understanding the role that non-traditional knowledge plays in moving the profession forward. There is great excitement in the industry around less-formal means of sharing information, such as DefCon, BSides, NotaCon, DerbyCon – all great stuff. Certainly, there is substantial value we gain from meeting in different ways to share knowledge with each other. What we must be cognizant of is that these should become further pathways for intellectual pursuit, not forces that hold us back – that we don't lose sight, amid the "not-a-conference" up-the-establishment ribaldry, of the fact that we are a serious profession with serious problems and deserve to be taken seriously. That doesn't mean we can't have fun, but we have to be careful that we aren't sending the message that any rank amateur can do the work of a security professional.

Sure, there is a lot of talent in the hacker community, just like there are uber-thieves. However, at some point, the FBI agent who hangs out with organized crime becomes part of the problem, can no longer be differentiated from the criminals, and has shredded his image and reputation. Greyhat is dangerous in what it can do to your reputation and to the professionalism we've fought very hard to achieve over the past 25 years. There is also the issue of what you absorb from associating with amateurs – sure, it's refreshing and great to feel the passion from those who do it for the love, but the unguided amateur sends the wrong message about the profession. If anyone can do it, even with the huge scarcity of Information Security folks right now, then why should they pay you a professional rate when they can get an amateur for $12 an hour?

The other big issue I see with greyhat conferences is that many provide glorification and validation of hacking, which I think is freaking stupid – this is like arming terrorists. By glorifying hackers, you're recruiting for them and filling their ranks with talented people that are then going to fight against you. How stupid is that?!?! Hackers are roaches that should be squashed, not bred to make them stronger. So, InfoSec professionals are advised to study from afar, and not wallow in the grey/black hat mentality. What I see in some of the "not a conference" tracks is that the response to a hacker zero-day has undergone a subtle but important transition, from "Wow, that's stunning", to "Wow, you're awesome", to "What you do is awesome"… which is a whisker from "please hack more". By treating hackers like rock stars, you encourage their craft. That's nothing less than arming your enemy. Even if you aren't cheering, does your presence validate? Lie down with dogs, get up with fleas. Careful, colleagues, you're playing with fire, and we all may get burned.


Thanks to Dan for sharing his time with us and thanks to you for reading. I look forward to doing more 3 Tough Questions articles, and if there are people in the community you think we should be talking to, point them out to me on Twitter (@lbhuston) or in the comments.

3 Tough Questions with Chris Jager

Recently, I got to spend some time interviewing Chris Jager via email on industrial control systems security. He didn't pull any punches, and neither did I. Here are 3 Tough Questions between myself (@lbhuston) and Chris.

A Short Biography of Chris Jager (@chrisjager): I have over 15 years of experience in Information Technology and have focused on the practical application of security principles throughout my career. Most recently, I was director of the NESCO Tactical Analysis Center at EnergySec, a non-profit organization formed to facilitate information sharing, situational awareness, and education outreach to the energy sector. I am active in a number of information security workgroups and have provided operational, architectural, and regulatory compliance guidance to large and small organizations in both the public and private sectors, focusing on the energy sector exclusively since 2006.

Brent: You have spent a lot of time working on Industrial Control Systems (ICS) in your career. During that time, you have been witness to the explosion of interest in IT security as a profession. Why should some of the younger folks thinking about information security as a career consider a focus on ICS and SCADA? Why should they care?

Mr. Jager: This is a fantastic question and, if I frame my response correctly, the answer will hopefully be self-evident to your readers.

ICS and SCADA are terms that are seldom understood and often misused by information security (infosec) publications. SCADA systems typically manage geographically dispersed areas and often consist of numerous functionally disparate processes.

However, because of the immense variety of different processes that can be managed by industrial control systems, ICS has become somewhat of a catchall term – including SCADA systems. For example, you’ll often find electric power generation processes such as turbine control, burner management, vibration monitoring and more lumped into the mix. Each of these processes has discrete or locally distributed control and instrumentation systems, any of which can cause catastrophic safety, reliability, and financial issues if misused.

For me, the challenge of protecting these kinds of systems is far more interesting than making sure that little Bobby can't drop the student records table in a classroom database. Much of the actual management technology is the same as what is used in general IT, but the application is very different. Things get a little more exotic (and arcane) when you go further down the stack into digital-to-analog conversion, but it's not overly difficult for most folks to understand once exposed to it. The negative impacts of misuse aren't limited to convenience and financial loss. Risk to life and limb is a very real possibility in many processes that are managed by industrial control system automation that is being run out of specification.

Typically, industrial control systems are deployed in step with the physical equipment they are designed to manage. The physical equipment is often orders of magnitude more expensive than the ICS components that ship with it and may be designed for lifespans measured in decades. In short, upgrades seldom occur as they need to be engineered and tested for functionality, safety, and a myriad of other issues pertaining to the existing physical equipment.

This has led to a situation where the groups that understand control systems and processes are naturally (and often generationally) gapped from those groups who understand the current threat and vulnerability landscapes. Consequently, there are currently very few individuals that understand industrial control system security as it relates to the changing threat picture. If the challenge of doing something very few dare to try doesn’t sound good on its own, this is the sound of opportunity knocking. Answer the door!

I’d like to make one last point on this question. Take a look around your house or apartment and count the number of internet-enabled devices you have. Most people these days have far fewer traditional computers than embedded systems – devices that aren’t user-serviceable without breaking a warranty or two. And the hacking skills necessary to modify such devices to fit use cases unintended by the manufacturers seem to come naturally to the younger folk of today. Those skills are also relatively portable to the ICS/SCADA world where embedded systems are the norm. Sure, some of the protocols and hardware packages are somewhat different, but they are all relatively simple compared to what folks are tinkering with at their coffee tables. We can always use more QA/breakers – particularly at key points in the supply chain where issues can be spotted and fixed before they become permanently vulnerable installations. Again I say, “knock knock”!


Brent: You talk a lot about how challenging ICS/SCADA security is. Do you think protecting ICS/SCADA systems in a meaningful way is an attainable goal? Assuming yes, do you think it could be done during what's left of our careers? Why or why not?

Mr. Jager: If I didn’t think it was an attainable goal, I’d not be doing the kind of work I’ve done over the past number of years. There are much easier ways to make a buck than to have people who are entrenched in the old way of doing things actively work to prevent you from even introducing discussions about change – let alone actually implementing it!

There is momentum in this area, but much work still needs to be done. Devices still ship from manufacturers with easily discerned hardcoded administration credentials, firmware updates are accepted without challenge and more. Once deployed in the field, user passwords seldom change, vulnerabilities discovered post-installation go unmitigated, and so on.
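To make "accepted without challenge" concrete, here is a minimal sketch of the kind of check that is typically missing: verifying a keyed digest of a firmware image against a vendor-published value before installing it. The key handling and digest scheme here are illustrative assumptions, not any vendor's actual update protocol:

```python
import hashlib
import hmac

def firmware_accepted(image: bytes, published_digest: str, vendor_key: bytes) -> bool:
    """Accept a firmware image only if its keyed SHA-256 digest matches
    the value published by the vendor out-of-band.

    The shared vendor key is a simplifying assumption; real deployments
    would more likely use public-key signatures.
    """
    digest = hmac.new(vendor_key, image, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(digest, published_digest)
```

Even a check this simple would reject a tampered image, which is exactly the challenge many fielded devices never perform.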

Because we have all this noise around basic security failures and their associated issues, we don't yet know what constitutes "meaningful" or "attainable" when we speak of complex industrial control systems. A prime example: the electric sector is still using the exact same set of controls and asset scoping for its regulated security standards as when I first started working in the sector in 2006. NERC CIP version 1 was in final draft form then, and the current requirements catalog will remain largely unchanged until at least 2015, when and if version 5 becomes effective. There have been minor changes in the interim, but none that comes remotely close to addressing changes in the threat landscape.

Will we ever have a perfect system? No. We do, however, urgently need to stop being complacent about the subject and implement those security measures that we can.


Brent: If you had your own ICS system, let’s say you ran Chris’s power company, what would that look like? How would it be protected?

Mr. Jager: It would look very, very "dumb". Until such time as ICS and other automation technologies are vetted by process engineers (and I'm talking about the entire ICS/automation stack), I would automate only where it was impossible to operate the business or process without it.

It seems to me that we have a major employment problem in this country and no clear path to resolution. Putting some of these people to work securing our industrial control systems is an area where the private sector can help get the country back to work without relying on government-funded stimulus packages. An added bonus is that we'll end up with a whole cadre of workers who have been exposed to the industry, a percentage of whom will stay in the field and help to address the industry's gray-out problem. All it takes is one or two sizable impacts from automation failure or misuse for the cost savings seen through automation to be wiped out.

Where I had no choice but to automate, Chris’ Power Company would look very much like any power company out there today, unfortunately. There simply aren’t enough vendors and manufacturers out there presently that produce secure equipment. Even then, systems integrators often further weaken the environment by adding support accounts and other remotely accessible backdoors to these systems.

Be it in the energy sector or any other, process automation installations will inevitably mature to a state of persistent vulnerability due to their long lifespans. Vulnerability discovery and exploitation techniques advance over time, vulnerabilities are introduced through regression bugs elsewhere in the software or protocol stack, or the process itself may have changed to a point where a previously innocuous vulnerability now has the ability to introduce a large impact if exploited.

Eventually, pointing out that the emperor has no clothes becomes a career limiting move – particularly when said emperor is an exhibitionist! Instead, the focus should be on identifying the more sensitive individuals in the crowd and protecting them appropriately through sound risk identification principles. We can’t make the problems go away through risk management, but we can use the techniques to identify the things that matter most and, where we can’t mitigate the risk, implement monitoring and response controls. This sort of approach also helps prioritize future efforts and dollars.

The top security controls at Chris' Power Company would center on monitoring and response, as employees would be trained to assume the environment was in a persistent state of compromise. In the environment we live in today, where threats are real and expressed and vulnerabilities can't be universally mitigated, the only real chance you have at controlling risk is to manage the impact of a successful attack. You only get that chance if you are able to detect and respond before the attack balloons to its maximum impact value.

If you failed to give my company that chance, you wouldn’t be working at Chris’ Power Company!

Thanks to Chris Jager for his insights and passion about ICS security. We appreciate his willingness to spend time with us. Thanks, as always, to you the reader, for your attention. Until next time, stay safe out there!