3 Reasons Your Supply Chain Security Program Stinks

  1. Let’s face it, Supply Chain Security and Vendor Risk Management are just plain hard. There are a lot of moving pieces – companies, contacts, agreements, SLAs, metrics, reporting, etc. Suppliers also change frequently: they have their own mergers and acquisitions, they get replaced due to price changes or quality issues, new suppliers are added to support new product lines and old vendors go away as their product lines become obsolete. Woven through all of that is cyber-security. MSI has a better and faster way forward – an automated way to reduce the churn and get a concise, easy-to-use, manageable view of your vendors’ security posture. This month, we will show you what we have been doing in secret for some of the largest companies in the world… 
  2. Vendors with good security postures often look the same as vendors with dangerous security postures, on paper at least. You know the drill – review the contracts, maybe they send you an audit or scan report (often aged), maybe they do a questionnaire (if you’re lucky). You get all of this – after you chase them down and hound them for it. You hope they were honest. You hope the data is valid. You hope they are diligent. You hope they stay in the same security posture or improve over time, and not the opposite. You hope for a lot. You just don’t often KNOW, and what most companies do know about their vendors is often quite old in Internet terms, and can be far afield from where their security posture is at the moment. MSI can help here too. This month, we will make our passive assessment tool available to the public for the first time. Leveraging it, you will be able to rapidly, efficiently and definitively get a historic and current view of the security posture of your vendors, without their permission or knowledge, with as frequent updates as you desire. You’ll be able to get the definitive audit of their posture, from the eyes of an attacker, in a variety of formats – including direct data feeds back into your GRC tools. Yes, that’s right – you can easily differentiate between good and bad security AND put an end to data entry and keyboarding sessions. We will show you how… 
  3. Supply chain security via manual processes just won’t scale. That’s why we have created a set of automated tools and services to help organizations do ongoing assessments of their entire supply chain. You can even sort your supply chain vendors by criticality or impact, and assign more or less frequent testing to those groups. You can get written reports, suitable for auditors – or as we wrote above, data feeds back to your GRC tools directly. We can test tens of vendors or thousands of vendors – whatever you need to gain trust and assurance over your supply chain vendors. The point is, we built workflows, methodologies, services and tools that scale to the largest companies on the planet. This month, we will show you how to solve your supply chain security problems.
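
To give a feel for what the automated workflow can look like (a simplified sketch of the idea only, not MSI’s actual tooling; the vendor names and testing cadences below are invented for illustration), tiering vendors by criticality and assigning an assessment frequency to each tier can be expressed in a few lines of Python:

    # Illustrative sketch: tier supply chain vendors by criticality and
    # assign a passive assessment cadence to each tier. All names and
    # cadences are placeholder assumptions, not a production workflow.

    ASSESSMENT_CADENCE_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

    vendors = [
        {"name": "Example Payment Processor", "criticality": "critical"},
        {"name": "Example Logistics Partner", "criticality": "high"},
        {"name": "Example Office Supplier", "criticality": "low"},
    ]

    for vendor in sorted(vendors, key=lambda v: ASSESSMENT_CADENCE_DAYS[v["criticality"]]):
        cadence = ASSESSMENT_CADENCE_DAYS[vendor["criticality"]]
        print(f"{vendor['name']}: passive assessment every {cadence} days")

The results of each assessment cycle would then feed written reports or your GRC tools, as described above.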
 
If you would like a private sneak peek briefing of our research and the work we have done on this issue, please get in touch with your account executive or drop us a line via info (at) microsolved /dot/ com, call us at (614) 351-1237 or click the request a quote button at the top of our website – http://microsolved.com. We’ll be happy to sit down and walk through it with you. 
 
If you prefer to learn more throughout March – stay tuned to http://stateofsecurity.com for more to come. Thanks for reading! 

Comparing 2 Models for DMZ Implementations

I recently had a discussion with another technician about the security of the two most popular DMZ implementation models. That is: 
  • The “3 Legged Model” or “single firewall” – where the DMZ segment(s) are connected via a dedicated interface (or interfaces) and a single firewall implements traffic control rules between all of the network segments (the firewall could be a traditional firewall simply enforcing interface to interface rules or a “next generation” firewall implementing virtualized “zones” or other logical object groupings)
  • The “Layered Model” or “dual firewall” – where the DMZ segment(s) are connected between two sets of firewalls, like a sandwich
 
Both approaches are illustrated and explained in detail in the linked wikipedia article, so I won’t repeat that here. 
 
I fully believe that the “3 Legged Model” is a lower risk implementation than the layered model. This outright contradicts what the wikipedia article above states: 
 
     “The most secure approach, according to Stuart Jacobs, [1] is to use two firewalls to create a DMZ.” — wikipedia article above.
 
While the Layered model looks compelling at first blush, and seems to apply the concept that “more firewalls would need to be compromised to gain internal network access”, I believe that, in fact, it reduces the overall security posture in the real world and increases risk. Here’s why I feel that way. Two real-world issues often turn designs that look great at first blush, or that “just work” in the lab, into significant liabilities in production: control complexity and entropy. Before we dig too deeply into those issues though, let’s talk about how the two models are similar. (Note that we are assuming that the firewalls themselves are equally hardened and monitored – that is, they have adequate and equal security postures both as independent systems and as a control set, in aggregate.)
 
Reviewing the Similarities
 
In both models, traffic from the DMZ segment(s) passes through the firewall(s) and traffic controls are applied. Both result in filtered access to the internal trusted network via an often complex set of rules. Since traffic is appropriately filtered in both cases, authorization, logging and alerting can adequately occur in either model. 
 
Establishing Differences
 
Now the differences. In the 3 Legged model, the controls are contained in one place (assuming a high availability/failover pair counts as a single set of synced controls), enforced in one place, and managed and monitored in one place. The rule set does not have cascading dependencies on other firewall implementations, and if the rule set is well designed and implemented, analysis at a holistic level is less complex.
 
In the Layered model, the controls are contained across two separate instances, each with different goals, roles and enforcement requirements. However, the controls and rule sets are interdependent. The traffic must be controlled through a holistic approach spread across the devices, and failures at either firewall to adequately control traffic or adequately design the rule sets could cause cascading unintended results. The complexity of managing these rules across devices, with different rule sets, capabilities, goals and roles is significantly larger than in a single control instance. Many studies have shown that increased control complexity results in larger amounts of human error, which in turn contributes to higher levels of risk. 
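
To make the interdependence point concrete, here is a toy sketch (generic Python, not any vendor’s rule syntax; the zones and port are invented for illustration). In the 3 Legged model a single rule set fully determines whether a flow is allowed, while in the Layered model the effective policy is whatever both separately managed rule sets happen to permit:

    # Toy model of policy evaluation in the two DMZ designs.
    # Rules are (source_zone, dest_zone, port) tuples; real devices are far richer.

    def allowed(rule_set: set, src: str, dst: str, port: int) -> bool:
        """Return True if the rule set permits the flow."""
        return (src, dst, port) in rule_set

    # 3 Legged model: one firewall, one rule set to review.
    single_fw = {("dmz", "internal", 1433)}

    # Layered model: the effective policy is whatever BOTH rule sets permit,
    # so every review and change must consider both devices.
    outer_fw = {("internet", "dmz", 443), ("dmz", "internal", 1433)}
    inner_fw = {("dmz", "internal", 1433)}

    flow = ("dmz", "internal", 1433)
    print(allowed(single_fw, *flow))                              # True
    print(allowed(outer_fw, *flow) and allowed(inner_fw, *flow))  # True, but only if both stay in sync

A change on either device silently changes the effective policy, which is exactly where the human error discussed below tends to creep in.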
 
Control Complexity Matters
 
Misconfigurations, human errors and outright mistakes are involved in a significant number (~95%) of compromises. How impactful are human mistakes on outright breaches? Well, according to the 2015 Verizon DBIR:
 
“As with years past, errors made by internal staff, especially system administrators who were the prime actors in over 60% of incidents, represent a significant volume of breaches and records, even with our strict definition of what an ‘error’ is.” — DBIR
 
Specifically, misconfiguration of devices was directly involved in the cause of 3.6% of the breaches studied in the DBIR. That percentage may seem small, but against a data set of 79,790 incidents resulting in 2,122 breaches, it means a staggering 76 breaches were the result of misconfigurations.
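
For clarity, the arithmetic behind that figure is simply the breach count multiplied by the misconfiguration percentage (a quick sanity check of the numbers quoted above, nothing more):

    # Quick sanity check of the DBIR-derived figure quoted above.
    total_breaches = 2122     # breaches in the 2015 DBIR data set
    misconfig_rate = 0.036    # 3.6% of breaches involved device misconfiguration

    misconfig_breaches = total_breaches * misconfig_rate
    print(f"Estimated breaches involving misconfiguration: {misconfig_breaches:.0f}")  # roughly 76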
 
This is exactly why control complexity matters. Since control complexity correlates with misconfiguration and human error directly, when complexity rises, so does risk – conversely, when controls are simplified, complexity falls and risk of misconfiguration and human error is reduced.
 
Not to beat on the wikipedia article and Stuart Jacobs’ assertions, but his suggestion is further compounded by the use of multiple types of firewalls, managed by multiple vendors. Talk about adding complexity: take an interdependent set of rules and spread them across devices with differing roles and goals, and you get complexity. Now make each part of the set a different device type with its own features, nuances, rule language, configuration mechanism and managed service vendor, and try to manage both of those vendors in sync to create a holistic implementation of a control function. What you have is a NIGHTMARE of complexity. At enterprise scale, the complexity, required resources and oversight needs of this approach grow rapidly as new devices and alternate connections are added. 
 
So, which is less complex, a single implementation, on a single platform, with a unified rule set, managed, monitored and enforced in a single location – OR – a control implemented across multiple devices, with multiple rule sets that require monitoring, management and enforcement in interdependent deployments? I think the choice is obvious and rational.
 
Now Add Entropy
 
Ahh, entropy, our inevitable combatant and the age-old foe of order. What can you say about the tendency for all things to break down? You know what I am about to point out, right? Things that are complex tend to break down more quickly. This applies to complex organisms, complex structures, complex machinery and complex processes. It also applies to complex controls.
 
In the case of our firewall implementation, both of our models will suffer entropy. Mistakes will be made. Firewall rules will be implemented that allow wider access than is needed. Over time, all controls lose efficiency and effectiveness. Many times this is referred to as “control drift” or “configuration drift”. In our case, the control drift over a single unified rule set would have a score of 1: changes to the rule set apply directly to behavior and effectiveness. In the Layered model, however, the firewalls each have a distinct rule set, each of which will degrade – BUT – they are also interdependent, giving an effective score of 2 for each firewall. Thus, you can easily see that as each firewall’s rule set degrades, the private network’s “view” of the risk increases significantly and at a more rapid pace. Simply put, entropy in the more complex implementation of multiple firewalls will occur faster, and is likely to result in more impact to risk. Again, add the additional complexity of different types of firewalls and distinct vendors for each, and the entropy will simply eat you alive…
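
To put rough numbers on the drift argument, here is a toy model. The drift rate and the assumption that interdependent rule sets degrade together are my own illustrative simplifications, not measured data:

    # Toy model of configuration drift ("entropy") over time.
    # Assumption (illustrative only): each rule set independently accumulates a
    # small amount of drift per change cycle, and in the layered model a weakness
    # in either rule set degrades the effective control.

    def effective_integrity(cycles: int, drift_per_cycle: float, rule_sets: int) -> float:
        """Return remaining control integrity (1.0 = perfect) after N change cycles."""
        per_set = (1.0 - drift_per_cycle) ** cycles   # integrity of one rule set
        return per_set ** rule_sets                   # interdependent sets degrade together

    for cycles in (12, 24, 36):  # e.g. monthly change windows over one to three years
        single = effective_integrity(cycles, 0.02, rule_sets=1)
        layered = effective_integrity(cycles, 0.02, rule_sets=2)
        print(f"{cycles} cycles: single={single:.2f}, layered={layered:.2f}")

Under these assumed numbers, the layered deployment loses effectiveness roughly twice as fast, which is the intuition behind the “1 versus 2” scoring above.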
 
Let’s Close with Threat Scenarios

Let’s discuss one last point – the actual threat scenarios involved in attacking the private network from the DMZ. In most cases, compromise of a DMZ host will give an attacker a foothold into the environment. From there, they will need to pivot to find a way to compromise internal network resources and establish a presence on the internal network. (Note that I am only focusing on this threat scenario, not the more common phishing/watering hole scenarios that don’t often involve the compromise of a DMZ host, except perhaps for exfiltration paths. But, this is outside our current scope.) If they get lucky, and the DMZ is poorly designed, they may find that their initially compromised host has some form of access to the internal network that they can exploit. But, in most cases, the attacker needs to perform lateral movement to compromise additional hosts, searching for a victim that has the capability to provide a launching point for attacks against the internal network.
 
In these cases, detection is the goal of the security team. Each attacker move and probe should cause “friction” against the controls, raising the alert and log levels and the amount of unusual activity. Ultimately, this should lead to detection of the attacker’s presence and engagement of the incident response process.
 
However, let’s say that you are the attacker, trying to find a host that can talk to the internal network from the DMZ in a manner that you can exploit. How likely are you to launch an attack against the firewalls themselves? After all, these are devices that are designed for security and detection. Most attackers ignore the firewalls as a target, and continue to attempt to evade their detection capabilities. As such, in terms of the threat scenario, additional discrete firewall devices offer little to no advantage – and the idea that the attacker would need to compromise more devices to gain access loses credibility. They aren’t usually looking to pop the firewall itself. They are looking for a pivot host that they can leverage for access through whatever firewalls are present to exploit internal systems. Thus, in this case, both deployment models are rationally equal in their control integrity and “strength” (for lack of a better term).
 
Wrapping This Up
 
So, we have established that the Layered model is more complex than the 3 Legged model, and that it suffers from higher entropy. We also established that in terms of control integrity against the most common threat scenario, the implementation models are equal. Thus, to implement the Layered model over the 3 Legged model, is to increase risk, both initially, and at a more rapid pace over time for NO increase in capability or control “strength”. This supports my assertion that the 3 Legged model is, in fact, less risky than the Layered model of implementation.
 
As always, feel free to let me know your thoughts on social media. I can be found on Twitter at @lbhuston. Thanks for reading! 

Podcast Episode 8 is Out

This time around we riff on Ashley Madison (minus the morals of the site), online privacy, OPSec and the younger generation with @AdamJLuck. Following that is a short segment with John Davis. Check it out and let us know your thoughts via Twitter – @lbhuston. Thanks for listening! 

You can listen below:

State Of Security Podcast Episode 4

We are proud to announce the release of State Of Security, the podcast, Episode 4. This time around I am hosting John Davis, who riffs on policy development for modern users, crowdsourcing policy and process management, rational risk assessment and a bit of history.

Give it a listen and let us know what you think!

Thanks for supporting the podcast!

How to Use Risk Assessment to Secure Your Own Home

Risk assessment and treatment is something we all do, consciously or unconsciously, every day. For example, when you look out the window in the morning before you leave for work, see the sky is gray and decide to take your umbrella with you, you have just assessed and treated the risk of getting wet in the rain. In effect, you have identified a threat (rain) and a vulnerability (you are subject to getting wet), you have analyzed the possibility of occurrence (likely) and the impact of threat realization (having to sit soggy at your desk), and you have decided to treat that risk (taking your umbrella). That is risk assessment.

However, this kind of risk assessment is what is called ad hoc. All of the analysis and decision making you just made was informal and done on the fly. Pertinent information wasn’t gathered and factored in, other consequences such as the bother of carrying the umbrella around weren’t properly considered, other treatment options weren’t considered, etc. What business concerns and government agencies have learned from long experience is that if you investigate, write down and consider such factors rationally and holistically, you end up with a more realistic idea of what you are really letting yourself in for, and therefore you make better risk decisions. That is formal risk assessment.

So why not apply this more formal risk assessment technique to important matters in your own life, such as securing your home? It’s not really difficult, but you do have to know how to go about it. Here are the steps:

  1. System characterization: For home security, the system you are considering is your house, its contents, the people who live there, the activities that take place there, etc. Although you know these things intimately, it never hurts to write them down. Something about viewing information on the written page helps clarify it in our minds.

  2. Threat identification: In this step you imagine all the things that could threaten the security of your home and family. These would be such things as fire, bad weather, intruders, broken pipes, etc. For this (and other steps in the process), you can go beyond your own experience and see what threats other people have identified (e.g. Google searches, insurance publications).

  3. Vulnerability identification: This is where you pair up the threats you have just identified with weaknesses in your home and its use. For example, perhaps your house is located on low ground that is subject to flooding, or you live in a neighborhood where burglaries may occur, or you have old ungrounded electrical wiring that may short and cause a fire. These are all vulnerabilities.

  4. Controls analysis: Controls analysis is simply listing the security mechanisms you already have in place. For example, security controls used around your home would be such things as locks on the doors and windows, alarm systems, motion-detecting lighting, etc.

  5. Likelihood determination: In this step you decide how likely it is that the threat/vulnerability will actually occur. There are really two ways you can make this determination. One is to make your best guess based on knowledge and experience (qualitative judgement). The second is to do some research and calculation and try to come up with actual percentage numbers (quantitative judgement). For home purposes I definitely recommend qualitative judgement. You can simply rate the likelihood of occurrence as high, medium or low.

  6. Impact analysis: In this step you decide what the consequences of threat/vulnerability realization will be. As with likelihood determination, this can be judged quantitatively or qualitatively, but for home purposes I recommend looking at worst-case scenarios. For example, if someone broke into your home, it could result in something as low impact as minor theft or vandalism, or it could result in very high impact such as serious injury or death. You should keep these more dire extremes in mind when you decide how you are going to treat the risks you find.

  7. Risk determination: Risk is determined by factoring in how likely threat/vulnerability realization is with the magnitude of the impact that could occur and the effectiveness of the controls you already have in place. For example, you could rate the possibility of home invasion occurring as low, and the impact of the occurrence as high. This would make your initial risk rating a medium. Then you factor in the fact that you have an alarm system and unpickable door locks in place, which would lower your final risk rating to low. That final rating is known as residual risk. (A short scoring sketch of this step follows the list below.)

  8. Risk treatment: That’s it! Once you have determined the level of residual risk, it is time to decide how to proceed from there. Is the risk of home invasion low enough that you think you don’t need to apply any other controls? That is called accepting risk. Is the risk high enough that you feel you need to add more security controls to bring it down? That is called risk limitation or remediation. Do you think that the overall risk of home invasion is just so great that you have to move away? That is called risk avoidance. Do you not want to treat the risk yourself at all, and so you get extra insurance and hire a security company? That is called risk transference.
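
As a small illustration of the risk determination step (a simplified qualitative scoring of my own for this example, not a formal methodology), the home invasion case above can be worked through like this:

    # Simplified qualitative risk scoring for the home example above.
    # The numeric mapping (low=1, medium=2, high=3) and the adjustment for
    # controls are illustrative assumptions, not a formal standard.

    LEVELS = {"low": 1, "medium": 2, "high": 3}
    NAMES = {1: "low", 2: "medium", 3: "high"}

    def residual_risk(likelihood: str, impact: str, control_strength: str) -> str:
        initial = round((LEVELS[likelihood] + LEVELS[impact]) / 2)   # low likelihood + high impact -> medium
        adjusted = max(1, initial - (LEVELS[control_strength] - 1))  # strong controls lower the rating
        return NAMES[adjusted]

    # Home invasion: low likelihood, high impact, strong existing controls
    # (alarm system and unpickable locks).
    print(residual_risk("low", "high", "high"))   # -> "low" (the residual risk)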

So, next time you have to make a serious decision in your life such as changing jobs or buying a new house, why not apply the risk assessment process? It will allow you to make a more rational and informed decision, and you will have the comfort of knowing you did your best in making the decision. 

Thanks to John Davis for this post.

Incident Response: Are You Ready?

All of us suffer from complacency to one extent or another. We know intellectually that bad things can happen to us, but when days, months and years go by with no serious adverse incidents arising, we tend to lose all visceral fear of harm. We may even become contemptuous of danger and resentful of all the resources and worry we expend in aid of problems that never seem to manifest themselves. But this is a dangerous attitude to fall into. When serious problems strike the complacent and unprepared, the result is inevitably shock followed by panic. And hindsight teaches us that decisions made during such agitated states are almost always the wrong ones. This is true on the institutional level as well.

During my years in the information security industry, I have seen a number of organizations founder when struck by their first serious information security incident. I’ve seen them react slowly, I’ve seen them throw money and resources into the wrong solutions, and I’ve seen them suffer regulatory and legal sanctions that they didn’t have to incur. And after the incident has been resolved, I’ve also seen them all put their incident response programs in order; they never want to have it happen again! So why not take a lesson from the stricken and put your program in order before it happens to your organization too? Preparing your organization for an information security incident isn’t really very taxing. It only takes two things: planning and practice.

When undertaking incident response planning, the first thing to do is to examine the threat picture. Join user groups and consult with other similar organizations to see what kinds of information security incidents they have experienced. Take advantage of free resources such as the Verizon Data Breach Reports and US-CERT. The important thing is to limit your serious preparations to the top several most credible incident types you are likely to encounter. This streamlines the process, lessens the amount of resources you need to put into it and makes it more palatable to the personnel that have to implement it. 

Once you have determined which threats are most likely to affect your organization, the next step is to fully document your incident response plan. Now this appears to be a daunting task, but in reality there are many resources available on the Internet that can help guide you through the process. Example incident response plans, procedures and guidance are available from SANS, FFIEC, NIST and many other reputable organizations free of charge. I have found that the best way to proceed is to read through a number of these resources and to adapt the parts that seem to fit your particular organization the best. Remember, your incident response plan is a living document and needs to reflect the needs of your organization as well as possible. It won’t do to simply adopt the first boilerplate you come across and hope that it will work.

Also, be sure that your plan and procedures contain the proper level of detail. You need to spell out things such as who will be on the incident response team, their individual duties during incidents, where the team will meet and where evidence will be stored, who should be contacted and when, how to properly react to different incidents and many other details. 
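
As one illustration of that level of detail (a hypothetical skeleton only; the roles, timings and playbooks are placeholders, not content taken from SANS, FFIEC or NIST), an incident response plan outline might be captured in a structure like this:

    # Hypothetical skeleton showing the level of detail an IR plan should capture.
    # Every name, location and timing below is a placeholder to be filled in.
    incident_response_plan = {
        "team": [
            {"role": "Incident Commander", "name": "TBD", "duties": "declares incidents, coordinates response"},
            {"role": "Forensics Lead", "name": "TBD", "duties": "evidence collection and preservation"},
            {"role": "Communications Lead", "name": "TBD", "duties": "legal, regulatory and media contact"},
        ],
        "meeting_location": "primary and backup war room locations",
        "evidence_storage": "locked cabinet or restricted file share, chain-of-custody form required",
        "notifications": {
            "within_1_hour": ["CISO", "IT operations"],
            "as_required": ["legal counsel", "regulators", "law enforcement", "cyber insurer"],
        },
        "playbooks": ["malware outbreak", "lost or stolen device", "data breach", "denial of service"],
    }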

The next, and possibly the most important, step in effective incident response is to practice the plan. You can have the most elegantly written security incident response plan in the world, and it is still doomed to fail during an actual incident if the plan is not practiced regularly. In all my years of helping organizations conduct their tabletop incident response practice sessions, I have never failed to see the process reveal holes in the plan and provide valuable lessons for the team members who participate. The important thing here is to pick real-world incident scenarios and to conduct the practice as close to the way it would actually occur as possible. We like to only inform a minimum number of response personnel in advance, and surprise the bulk of responders with the event just as it would happen if it were real. Of course there is much more to proper incident response planning and practice than I have included here. But this should start your organization along the right path. For more complete information and help with the process, don’t hesitate to contact your MSI representative. 

Thanks to John Davis for writing this post.

How Risky is the Endpoint?

I found this article quite interesting, as it gives you a heads up about the state of endpoint security, at least according to Ponemon. For those who want to skim, here is a quick summary:

“Maintaining endpoint security is tougher than ever, security professionals say, thanks largely to the huge influx of mobile devices.

According to the annual State of the Endpoint study, conducted by the Ponemon Institute and sponsored by Lumension, 71 percent of security professionals believe that endpoint security threats have become more difficult to stop or mitigate over the past two years.

…More than 75 percent said mobile devices pose the biggest threat in 2014, up from just 9 percent in 2010, according to Ponemon. Some 68 percent say their mobile devices have been targeted by malware in the past 12 months, yet 46 percent of respondents say they do not manage employee-owned mobile devices.

…And unfortunately, 46 percent of our respondents report no efforts are in place to secure them.

…While 40 percent report they were a victim of a targeted attack in the past year, another 25 percent say they aren’t sure if they have been, which suggests that many organizations don’t have security mechanisms in place to detect such an attack, the study says. For those that have experienced such an attack, spear-phishing emails sent to employees were identified as the No. 1 attack entry point.

…The survey found that 41 percent say they experience more than 50 malware attacks a month, up 15 percent from those that reported that amount three years ago. And malware attacks are costly, with 50 percent saying their operating expenses are increasing and 67 percent saying malware attacks significantly contributed to that rising expense.

…While 65 percent say they prioritize endpoint security, just 29 percent say their budgets have increased in the past 24 months.” — Dark Reading

There are a couple of things I take away from this:

  • Organizations are still struggling with secure architectures and enclaving, and since that is true, BYOD and visibility/prevention efforts on end-points are a growing area of frustration.
    • Organizations that focus on secure architectures and enclaving will have quicker wins
    • Organizations with the ability to do nuance detection for enclaved systems will have quicker wins
  • Organizations are still focusing on prevention as a primary control, many of them are seriously neglecting detection and response as control families
    • Organizations that embrace a balance of prevention/detection/response control families will have quicker wins
  • Organizations are still struggling in communicating to management and the user population why end-point security is critical to long term success
    • Many organizations continue to struggle with creating marketing-based messaging for socialization of their security mission

If you would like to discuss some or all of these ideas, feel free to ping me on Twitter (@lbhuston) or drop me an email. MSI is working with a variety of companies on solutions to these problems and we can certainly share what we have learned with your organization as well. 

How Cloud Computing Will Leak Into Your Enterprise

“Consumer use of the cloud”, in a phrase, is how the cloud will leak into your enterprise, whether you like it or not. Already, IT is struggling with how to manage the consumer use of devices and services in the enterprise. Skype/VoIP and WiFi were the warning shots, but the BlackBerry, iPhone, iPad and other consumer devices are the death knell for centralized IT (and IS) control.

Consumer electronics, backed by a wide array of free or low-cost cloud services, are a new frontier for your organization. Services like MobileMe, DropBox, various file sharing tools and remote access services like GoToMyPC, et al. have arrived. Likely, they are in use in your environment today. Consumers use and leverage these services as part of their increasingly de-centralized online lives. Even with sites like Twitter and Facebook growing in capability and attention, consumers continue to grow their use of services “in the cloud”, both personally and professionally. Make no mistake: despite your controls at the corporate firewalls, consumers are using their mobile and pocket devices and a variety of these services. Unless you are searching them at the door and blocking cell phone use in your business, they are there.

This might not be “the cloud” that your server admins are worrying about. It might not represent all of the off-site system, database and other hosting tools they are focused on right now, but make no mistake, this consumer version of the cloud has all, if not more, of the same issues and concerns. Questions about how your data is managed, secured and maintained all abound.

Given the “gadget posture” of most organizations and their user communities, this is not likely to be something that technical controls can adequately respond to. The consumer cloud services are too dynamic and widespread for black listing approaches to contain them. Plus, they obviously lack centralized choke points like in the old days of “network perimeter security”. The new solution, however, is familiar. Organizations must embrace policies and processes to cover these technologies and their issues. They also have to embrace education and awareness training around these topics with their user base. Those who think that denial and black listing can solve this problem are gravely mistaken. The backdoor cloud consumer movement into your organization is already present, strong and embedded. Teaching users to be focused on safe use of these services will hopefully reduce your risk, and theirs.

Choosing Your OS is NOT a Security Control

Just a quick note on the recent Google announcement about dumping Windows for desktops in favor of Linux and Mac OS X. As you can see from the linked article, there is a lot of hype about this move in the press.

Unfortunately, dumping Windows as a risk reducer is just plain silly. It’s not which OS your users use, but how safely they use it. If a user is going to make the same “bad computing hygiene” choices, they are going to get p0wned, regardless of their OS. Malware, Trojans and a variety of attacks exist for most every, if not every, platform. Many similar browser-based attacks exist across Windows, Linux and OS X. These are the attack patterns of today, not the Slammer and Code Red worm attack patterns of days gone by.

I fail to see how changing OS will have any serious impact on organizational risk. Perhaps it will decrease, by a very small amount, the costs associated with old-school spyware and worms, but this, in my opinion, is likely to be a diminishing return. Over time, attackers are getting better at cross-platform exploitation, and users are likely to quickly feel a false sense of security from their OS choice and make even more bad decisions. Combine these, then add the costs of additional support calls to the help desk as users get comfortable and run into configuration issues in the enterprise, and it seems to me to be a losing gambit.

Time will tell, but I think this was a pretty silly move and one that should be studied carefully before being mirrored by other firms.

MSI Launches New Threat Modeling Offering & Process

Yesterday, we were proud to announce a new service offering and process from MSI. This is a new approach to threat modeling that allows organizations to proactively model their threat exposures and the changes in their risk posture, before an infrastructure change is made, a new business operation is launched, a new application is deployed or other IT risk impacts occur.

Using our HoneyPoint technology, organizations can effectively model new business processes, applications or infrastructure changes and then deploy the emulated services in their real world risk environments. Now, for the first time ever, organizations can establish real-world threat models and risk conditions BEFORE they invest in application development, new products or make changes to their firewalls and other security tools.

Even more impressive is that the process generates real-world risk metrics that include frequency of interaction with services, frequency of interaction with various controls, frequency of interaction with emulated vulnerabilities, human attackers versus automated tools, insight into attacker capabilities, focus and intent! No longer will organizations be forced to guess at their threat models, now they can establish them with defendable, real world values!

Much of the data created by this process can be plugged directly into existing risk management systems, risk assessment tools and methodologies. Real-world values can be established for many of the variables and other metrics that, in the past, have been decided by “estimation”.

Truly, if RISK = THREAT X VULNERABILITY, then this new process can establish that THREAT variable for you, even before typical security tools like scanners, code reviews and penetration testing have a rough implementation to work against to measure VULNERABILITY. Our new process can be used to model threats, even before a single line of real code has been written – while the project is still in the decision or concept phases!
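
To illustrate the idea (a deliberately simplified sketch with made-up numbers and weights; this is not MSI’s actual scoring model), threat metrics observed from the emulated services could feed a basic calculation like this:

    # Illustrative only: plugging observed threat metrics into RISK = THREAT x VULNERABILITY.
    # The weighting and the example numbers are assumptions for demonstration.

    def threat_score(interactions_per_day: float, human_ratio: float) -> float:
        """Blend how often the emulated service is probed with how much of that is human-driven."""
        return interactions_per_day * (1.0 + human_ratio)

    # Example: an emulated service sees 40 probes per day, 25% of them human-driven,
    # and a later assessment rates the planned implementation's vulnerability at 0.3 on a 0-1 scale.
    threat = threat_score(interactions_per_day=40, human_ratio=0.25)  # -> 50.0
    vulnerability = 0.3
    risk = threat * vulnerability
    print(f"threat={threat}, relative risk={risk}")                   # -> threat=50.0, relative risk=15.0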

We presented this material at the local ISSA chapter meeting yesterday. The slides are available here:

Threat Modeling Slides

Give us a call and schedule a time to discuss this new capability with an engineer. If your organization is ready to add some maturity and true insight into its risk management and risk assessment processes, then this just might be what you have been waiting for.