Why My AI Agents Needed CaneCorso as a Security Control Plane

AI agents are powerful because they can read, reason, summarize, decide, and act across a wide range of information sources.

That is also what makes them dangerous.

The more useful an agent becomes, the more likely it is to consume data I do not fully trust. Emails. Newsletters. RSS feeds. API responses. Documents sent as attachments. Social media. YouTube transcripts. Scraped search results. Web pages. Translated content. Random bits of text pulled from places where I do not control the author, the formatting, the intent, or the payload.

That is a very different security model than the one most of us are used to.

In traditional applications, we spend a lot of time separating code from data, users from administrators, trusted networks from untrusted networks, and internal systems from the internet. With LLMs and agents, all of those boundaries start to blur. Instructions, context, content, and intent all arrive in the same stream. The model has to reason over that stream, and the agent has to decide what to do with the result.

That is exactly why I wanted a security control plane in front of my own AI agents.

For me, that control plane became CaneCorso™.


The Problem Was Not Theoretical

My agents support me personally. They monitor and process a wide range of information sources, each usually aligned to a specific focus area, query, or web mission. Some are looking for security research. Some are watching industry news. Some are digesting newsletters. Some are collecting data from APIs, documents, email attachments, social media, transcripts, and scraped search results.

In other words, they spend their time eating untrusted data.

That creates a meaningful risk profile.

I wanted to protect the agents against prompt injection and malformed data attacks. I also wanted to protect upstream and downstream systems from malicious URLs, private data exposure, and unsafe content that could be carried forward into decision-making. These agents are not just producing novelty summaries. Their outputs are used to support decisions.

That matters.

If an agent reads a poisoned page, a malicious email, or a document with hidden instructions, I do not want that content passed directly to the underlying LLM. If the LLM produces something unsafe, misleading, privacy-sensitive, or operationally risky, I do not necessarily want that output passed into the next stage of logic without inspection.

Before CaneCorso, the basic pipeline looked like this:

Collect inputs → summarize/extract → reason/decide → write output.

There was some logging in place for decision analysis, KPIs, and tuning. But logging is not a trust boundary. Observability is useful after the fact. It does not, by itself, prevent hostile or malformed content from entering the LLM context window.

I needed something more like a firewall for agentic workflows.

Moving CaneCorso Into the Agent Path

CaneCorso is now the single control plane for multiple agents in my environment.

Each agent has a defined CaneCorso workflow and API key configured with specific rules and outcomes. From a security practitioner’s perspective, the model feels familiar. It is not unlike firewall or IPS policy tuning. Each workflow can be adjusted based on what the agent does, what data it sees, and what level of risk is acceptable for that mission.

Every agent now sends data through CaneCorso before that data is passed to an LLM.

That is the first and most important control point. Untrusted input does not go straight to the model anymore. It is inspected, filtered, redacted, defanged, and rated before the LLM sees it.

About half of my agents also send the LLM output corpus back through CaneCorso for a second pass before the result is allowed into downstream decision logic. That double-checking pattern has become important for workflows where the output itself may influence actions, prioritization, or further analysis.

The result is a two-layer safety pattern:

Input inspection before the LLM.

Output inspection before downstream use.

That simple architectural shift changes the trust model. I am no longer depending only on model behavior, prompt discipline, or good luck. I have a monitored, auditable control plane sitting in the path.
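The two-layer pattern is small enough to sketch in a few lines. `inspect_input` and `inspect_output` below are hypothetical stand-ins for the two control-plane passes, not the product's real interface:

```python
def inspect_input(text: str) -> str:
    # Layer 1 stand-in: defang URLs, redact injection attempts, strip PII
    # before anything reaches the model.
    return text.replace("http://", "hxxp://")

def inspect_output(text: str) -> str:
    # Layer 2 stand-in: the model's own output is inspected before
    # it can influence downstream decision logic.
    return text.replace("123-45-6789", "[REDACTED]")

def run_agent(raw: str, llm) -> str:
    safe_in = inspect_input(raw)    # input inspection before the LLM
    draft = llm(safe_in)
    return inspect_output(draft)    # output inspection before downstream use

result = run_agent(
    "see http://evil.example/ about SSN 123-45-6789",
    lambda prompt: f"LLM notes: {prompt}",
)
```

Note that the second pass catches material the first pass let through: even if sensitive data survives into the model's context, it is still filtered before downstream use.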

Token Vault Sanitization and SIEM Logging

One of the other important pieces for me has been token vault sanitization.

Private or sensitive values can be protected before they move through the workflow. That is especially important when agents are handling emails, documents, API results, and mixed internal/external content. Even personal agents can encounter sensitive material, and enterprise agents will almost certainly do so.

I am also sending full transaction details, safety ratings, and decision-making context into my SIEM logs.

That is not just for compliance theater. It gives me a way to perform forensics, review blocked or redacted content, tune policies over time, and understand how different sources behave. If a feed repeatedly triggers injection protections, I can see that. If a workflow is too permissive or too noisy, I can tune it. If something gets blocked, I can understand why.
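As an illustration of what vault-style sanitization plus SIEM-bound logging can look like in an agent pipeline, here is a Python sketch. The token format, vault structure, and log fields are my assumptions, not the product's actual schema:

```python
import json
import re
import uuid

vault: dict[str, str] = {}  # token -> original value, kept out of the LLM path

def tokenize_emails(text: str) -> str:
    """Swap email addresses for opaque tokens; the originals stay in the vault."""
    def _swap(match: re.Match) -> str:
        token = f"<TOKEN:{uuid.uuid4().hex[:8]}>"
        vault[token] = match.group(0)
        return token
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", _swap, text)

def siem_record(workflow: str, action: str, detail: str) -> str:
    """A structured event of the kind you might forward to a SIEM."""
    return json.dumps({"workflow": workflow, "action": action, "detail": detail})

sanitized = tokenize_emails("Contact alice@example.com about the audit.")
event = siem_record("newsletter-agent", "pii_tokenized", sanitized)
```

The LLM only ever sees the token; the vault mapping lets a trusted downstream step restore the real value if and when policy allows it.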

That feedback loop is essential.

AI security is not a one-and-done configuration exercise. The attack patterns are evolving. The data sources change. The agents change. The business logic changes. The controls need to be visible enough and adjustable enough to keep up.

The Integration Experience

My agents are written in Python, and the CaneCorso documentation made the integration straightforward.

The samples were relevant, accurate, and concise. I started by building a simple API harness from the documentation. Then I tuned that harness for each agent so it used the proper workflow-specific API key. After that, I used the CaneCorso web GUI to tune each workflow.

The first agent took about 30 minutes.

Each following agent took about 10 minutes.

That is an important detail for buyers. This did not turn into a rewrite of my agent stack. It felt more like adding a security middleware layer or API gateway into the agent path. Once the pattern existed, repeating it across agents was simple.
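A harness of the kind described really is small. This sketch shows the shape: one thin client per agent, bound to that agent's workflow-specific key. The endpoint URL, header names, and payload fields below are invented for illustration; the vendor documentation defines the real API.

```python
import json

class GatewayHarness:
    """One thin client per agent, bound to that agent's workflow and key."""

    def __init__(self, api_key: str, workflow: str,
                 endpoint: str = "https://gateway.example/v1/inspect"):
        self.api_key = api_key
        self.workflow = workflow
        self.endpoint = endpoint

    def build_request(self, text: str) -> tuple[dict, bytes]:
        # The real integration would POST these to the gateway; the header
        # and payload names here are illustrative only.
        headers = {
            "Authorization": f"Bearer {self.api_key}",  # workflow-specific key
            "Content-Type": "application/json",
        }
        body = json.dumps({"workflow": self.workflow, "content": text}).encode()
        return headers, body

# Repeating the pattern for another agent is a few lines, not a rewrite.
news_agent = GatewayHarness(api_key="KEY-NEWS", workflow="industry-news")
headers, body = news_agent.build_request("untrusted feed text")
```

Because policy lives in the workflow rather than the agent code, tuning happens in the GUI while the harness stays identical across agents.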

The workflow tuning was also approachable. The GUI presents functional modules in plain language. You can turn capabilities on and off and tune the behavior without needing to write complex detection logic or understand obscure heuristics. Security people will recognize the rhythm: enable controls, test, review outcomes, tune, and repeat.

It felt like firewall or IPS rule tuning, but for AI workflows.

After testing, the agents were back in service. The system has been running seamlessly for weeks with no significant hiccups.

What It Has Caught

So far, I have seen multiple prompt injection redactions. That is not surprising, because some of my agents monitor discussions around LLM threats and AI security. In those environments, malicious or adversarial examples are not theoretical. They show up in the data.

I have also had excellent results with PII redaction and URL defanging.

The URL handling matters more than many people realize. Agents often collect links, summarize pages, follow references, or pass URLs into later workflows. Defanging malicious or suspicious URLs reduces the chance that a downstream system, user, or automation accidentally treats dangerous content as safe.
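For readers unfamiliar with the term, here is a minimal sketch of what defanging means in practice; the exact rewriting rules a given product applies may differ:

```python
def defang(url: str) -> str:
    # Neutralize the scheme first ("hxxp" contains no dots), then bracket
    # the dots so nothing downstream auto-links or fetches the URL.
    url = url.replace("https://", "hxxps://").replace("http://", "hxxp://")
    return url.replace(".", "[.]")

defanged = defang("http://evil.example.com/payload")
```

The rewritten string is still readable by an analyst, but browsers, chat clients, and automations no longer treat it as a live link.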

The PII redaction has also been strong. For agentic workflows, privacy protection has to be built into the pipeline. You do not want every agent team inventing its own ad hoc redaction function, especially in a regulated environment.

Another pleasant surprise has been cross-language support. Some of the feeds my agents process are in languages other than English. CaneCorso has handled injection protection well even when the LLM is being used for translation. That is a big deal, because attackers do not have to limit themselves to English, and global data sources rarely cooperate with neat security assumptions.

Latency has been on the order of milliseconds per API call on consumer-grade hardware.

Not too shabby.

The Confidence Gain

The biggest practical gain has been confidence.

CaneCorso does not make untrusted data magically trustworthy. No tool does that. But it significantly raises the trust level of the workflow, even when some of the data is known to be hostile or suspicious.

That confidence matters when agents are used for decision support. I am more comfortable letting agents process messy public data because I know the underlying LLMs and downstream systems have another layer of protection. I am not relying solely on system prompts, model alignment, or careful source selection.

The web is untrusted. Email is untrusted. Documents are untrusted. Social media is untrusted. Scraped content is untrusted.

Agent architectures need to be designed with that assumption in mind.

Why Potential Buyers Should Care

Prompt injection is real, prevalent, and dangerous.

We are still early in the evolution of LLM attacks. The patterns are changing quickly, and the impact will grow as agents gain access to more tools, more data, and more authority. It does not take much imagination to see these attacks evolving into deeper compromise, exfiltration, fraud, and ransomware-style workflows.

That is why I think anyone experimenting with or implementing AI agents should be looking closely at this class of control.

If your agents consume data that is not 100% trusted, you need a plan.

That applies to security teams, automation teams, developers building RAG applications, MSPs, MSSPs, executives using personal agents, and organizations building internal agentic workflows. It applies even more strongly to regulated organizations.

In my opinion, regulated organizations implementing agentic workflows without this level of protection are asking for trouble.

The enterprise argument is especially straightforward. It makes sense to have a single, monitored, auditable control plane for agents so every team does not have to roll its own controls. Without that shared layer, each agent team makes its own decisions about redaction, prompt injection protection, URL handling, logging, blocking, alerting, and auditability.

That is expensive.

It is inconsistent.

It is hard to defend.

A shared control plane reduces time, cost, and mistrust. It makes agent adoption safer and helps organizations move toward ROI without pretending the risks are not there.

The Buyer’s Note

CaneCorso is not magic.

No product can provide 100% trust in untrusted data. That is not how security works, and it is definitely not how AI security works.

But the right control can raise the trust level significantly. It can provide a consistent inspection point. It can enforce privacy protections. It can defang URLs. It can redact prompt injection attempts. It can generate logs. It can give security teams something concrete to monitor, tune, and audit.

That is the point.

The organizations that succeed with AI agents will not be the ones that simply connect models to everything and hope for the best. They will be the ones that build control points, observe behavior, tune policies, and treat agentic workflows like the high-impact systems they are becoming.

For my own agents, CaneCorso became that control point.

And once it was in place, I would not want to run them without it.

How to Learn More or Leverage MSI Expertise

If you want to discuss our experience with CaneCorso in more detail, or pilot the tool in your own environment, just get in touch. You can reach us at info@microsolved.com, or give us a call at +1.614.351.1237. We’d be happy to have a zero-pressure discussion with you. Thanks for reading, and stay safe out there! 

Attention to Privacy Issues Growing

From the board room to main street, digital privacy is becoming more and more of a hot topic.

Organizations have been asking us to discuss it with steering committees and boards. Our intelligence team has been performing privacy-related recon and other testing engagements for the last several years. More and more of our security engagements are starting to include elements of privacy concerns from organizations and individuals alike.

In the mainstream media, heavily promoted articles range from coverage of supposedly stolen NSA monitoring technology to discussions of personal privacy from the likes of Tim Cook, CEO of Apple.

As such, security teams should take the time to become versed in the privacy debate. It is likely that management and boards will be asking for advice on the topic in the near future, if they aren't already. This is a fantastic opportunity for security teams to engage in meaningful discussions with organizational leaders about a security-related topic on both a professional and personal level. It might even be worth putting together a presentation preemptively and delivering it to upper management and line managers around your company.

With so much attention to privacy these days, it’s a great chance to engage with people, teach basic infosec practices and have deep discussions about the changing digital world. That’s what your security team has been asking for, right? Now’s the time… 🙂 

Podcast Episode 8 is Out

This time around we riff on Ashley Madison (minus the morals of the site), online privacy, OPSEC and the younger generation with @AdamJLuck. Following that is a short segment with John Davis. Check it out and let us know your thoughts via Twitter – @lbhuston. Thanks for listening!


The Mixed Up World of Hola VPN

Have you heard about, or maybe you use, the “free” services of Hola VPN?

This is, of course, a VPN, in that it routes your traffic over a “protected” network, provides some level of privacy to users and can be used to skirt IP address focused restrictions, such as those imposed by streaming media systems and television suppliers. There are a ton of these out there, but Hola is interesting for another reason.

That other reason is that it turns client machines into "exit nodes" for a paid service offering from the company:

In May 2015, Hola came under criticism from 8chan founder Frederick Brennan after the site was reportedly attacked by exploiting the Hola network, as confirmed by Hola founder Ofer Vilenski. After Brennan emailed the company, Hola modified its FAQ to include a notice that its users are acting as exit nodes for paid users of Hola’s sister service Luminati. “Adios, Hola!”, a website created by nine security researchers and promoted across 8chan, states: “Hola is harmful to the internet as a whole, and to its users in particular. You might know it as a free VPN or “unblocker”, but in reality it operates like a poorly secured botnet – with serious consequences.”[23]

In this case, you may be getting a whole lot more than you bargained for when you grab and use this “free” VPN client. As always, your paranoia should vary and you should carefully monitor any new software or tools you download – since they may not play nice, be what you thought, or be outright malicious. 

I point this whole debacle out, just to remind you, “free” does not always mean without a cost. If you don’t see a product, you are likely THE PRODUCT… Just something to keep in mind as you wander the web… 

Until next time, stay safe out there!

Artificial Intelligence – Let’s Let Our Computers Guard Our Privacy For Us!

More and more computer devices are designed to act like they are people, not machines. We as consumers demand this of them. We don’t want to have to read and type; we want our computers to talk to us and we want to talk to them. On top of that, we don’t want to have to instruct our computers in every little detail; we want them to anticipate our needs for us. Although this part doesn’t really exist yet, we would pay through the nose to have it. That’s the real driver behind the push to achieve artificial intelligence. 

Think for a minute about the effect AI will have on information security and privacy. One of the reasons that computer systems are so insecure now is that nobody wants to put in the time and drudgery to fully monitor their systems. But an AI could not only monitor every minuscule input and output, it could do it 24x7x365 without getting tired. Once it detected something, it could act to correct the problem itself. Not only that, a true intelligence would be able to monitor trends and conditions and anticipate problems before they even had a chance to occur. Indeed, once computers have fully matured, they should be able to guard themselves more completely than we ever could.

And besides privacy, think of the drudgery and consternation an AI could save you. In a future world created by a great science fiction author, Charles Sheffield, everyone had a number of “facs” protecting their time and privacy. A “facs” is a facsimile of you produced by your AI. These facs would answer the phone for you, sort your messages, schedule your appointments and perform a thousand and one other tasks that use up your time and try your patience. When they run across situations that they can’t handle, they simply bring you into the loop to make the decisions. Makes me wish this world was real and already with us. Hurry up AI! We really need you!

Keep Your Hands Off My SSL Traffic

Hey, you, get off my digital lawn and put down my binary flamingos!!!!! 

If you have been living under an online rock these last couple of weeks, then you might have missed all of the news and hype about the threats to your SSL traffic. It seems that some folks, like Lenovo and Comodo, for example, have been caught with their hands in your cookie jar. (or at least your certificate jar, but cookie jars seem like more of a thing…) 

First, we had Superfish, then PrivDog. Now researchers are saying that more and more examples of that same code being used are starting to emerge across a plethora of products and software tools.

That’s a LOT of people, organizations and applications playing with my (and your) SSL traffic. What is an aging infosec curmudgeon to do except take to the Twitters to complain? 🙂

There’s a lot of advice out there, and if you are one of the folks impacted by Superfish and/or PrivDog directly, it is likely a good time to go fix that stuff. It also might be worth keeping an eye on for a while and cleaning up any of the other applications that are starting to be outed for the same bad behaviors.

In the meantime, if you are a privacy or compliance person for a living, feel free to drop us a line on Twitter (@lbhuston, @microsolved) and let us know what your organization is doing about these issues. How is the idea of prevalent man-in-the-middle attacks against your compliance-focused data and applications sitting with your security team? You got this, right? 🙂

As always, thanks for reading, and we look forward to hearing more about your thoughts on the impacts of SSL tampering on Twitter! 

My Thoughts of Raising Teenagers While Protecting Their Online Privacy

As a parent of teenagers, it can be a somewhat complicated and mortifying world when it comes to allowing a teenager a small measure of personal "freedom" of expression, and room to be curious and discover new things, while also satisfying the need to protect their online privacy from those who may do them harm. In this post we will discuss some of my thoughts on what we as parents can do to aid our children in the ever-evolving world that is the internet.

To start with, I suppose we need to first look at the child's age, and I'm not speaking of their numeric age but rather their level of maturity. When my wife and I decide which applications (apps) our children may download, it depends heavily on the content of the application, but also on the child's maturity level. Who would want a scary game or a very provocative application seen or played by a minor, especially if it is something you fundamentally disagree with as a parent, let alone a game or app with overtones of sexuality played by your teenager for hours on end? Now, I am not saying they don't hear it and see it in the world we live in, I am not naive, but why put it on a silver platter and feed it to them? Those things can wait a bit longer, especially if we are talking about the difference between a thirteen-year-old and a seventeen-year-old. True, it is only four years, but developmentally and cognitively there are vast differences between them, particularly in their ability to make intelligent decisions, as I am sure many of you would agree!

So let's start with the basics: remember that you are the parent, and a good dose of common sense goes a long way. We all need to be able to reach our children, so perhaps you want to be able to track where your child is and, more importantly, confirm they are where they say they are. Have no fear, there are apps for that, and most if not all smartphones have GPS built right in. Apps like Find My iPhone and Find My Friends can be quite helpful. Perhaps you want to limit the amount of time your child spends online, or limit the sites they can access; there are apps for that too. Apps such as Screentime and DinnerTime Parental Control let you not only limit screen time but also limit how much they are texting and playing games, all in an effort to help them refocus on homework, chores, or quality time with the family. Some parents may elect to take it a step further and track who their child is communicating with, read emails, and see all the pictures that are sent, received, and, perhaps more importantly, deleted. They can do so with an app called Teensafe. I know this one sounds a bit like Big Brother, but if your child is being bullied, abused, or dating without your knowledge, some parents want the ability to intervene more quickly, especially if the child isn't as forthcoming as the parent feels they should be.

Next comes the security of the websites and the apps themselves. I think we as parents have a responsibility to protect our children, and that responsibility should include a healthy dose of cynicism. To that end, make sure you go through each setting on any app or website that you or your child loads onto their device(s), turning the security settings on or off as you feel is appropriate for your child. Let's say we allow our child to use a social media website or app; we certainly wouldn't want a thirteen-year-old exposed to the entire world when all they want to do is connect with their friends. That could expose them to threats you may not recognize as threats until it is too late. So go through those settings and turn off some of those features, locking things down to a level you as a parent are comfortable with. It may seem like a simple click of a button, but believe me, it is a very important step in ensuring your child's online safety.

Finally, remember that you may not want to give your child the ability to download apps or change the settings of their devices, so consider keeping a record of all of their passwords, perhaps in a password vault such as 1Password. You would do this for two reasons: first, to make sure they are using strong passwords (and, where possible, to turn on two-step verification); second, to make sure they don't forget the passwords they just created, because a good password should be challenging, otherwise it's pointless. Please remember you are in charge and ultimately responsible for the safety of your child, both at home and online. Secure as much as you can, where you can. So let's be safe out there!

It should be noted that some of the apps mentioned above are free, some are open source, and some come at a cost to the consumer. It is up to you to research these applications and see what best fits your security needs.

In no way do we endorse the applications presented in this article; we are simply stating that they may be options for you to consider for your device. Your particular security needs for your device are up to you to decide. Be safe out there.

This post by Preston Kershner.

Consumers are Changing their Minds about Data Breaches

Per this article in Fast Company, some 72% of consumers said that a breach announcement affected their perception of a retail brand. However, only 12% actually stopped shopping at the breached stores.

This appears to be a rising tide in the mind of consumers, with an increase in both attention and action versus previous polls.

Add to that the feelings of fatigue that we have been following on social media when breaches are announced. TigerTrax often identifies trending terms of frustration around breach announcements, and even some outright hostility toward brands with a breach. Not surprising, given the media hype cycle today.

TigerTrax also found that a high percentage of consumers were concerned to a larger extent about information privacy than in the past. Trending terms often include “opt out”, “delete my data” and various other conversation points concerning the collection and sharing of consumer information by vendors.

Retailers and other service providers should pay careful attention to this rising tide of global concern. Soon, breaches, data theft and illicit data trafficking may drive significant increases in consumer awareness, and brand damage is very likely to follow…

Never Store Anything on the Cloud that You Wouldn’t Want Your Mamma to See

It’s great nowadays, isn’t it?

You carry around devices with you that can do just about anything! You can get on the Internet and check your email, do your banking, find out what is new on Facebook, send a Tweet or a million other things. You can also take a picture, record a conversation, make a movie or store your work papers – and the storage space is virtually unlimited! And all this is just great as long as you understand what kind of risks this freedom poses to your privacy.

Remember that much of this stuff is stored in the cloud, and the only thing that separates your stuff from the general public is a user name, a password and sometimes a security question. Just recently, a number of celebrities complained that their photos (some of them explicit) had been stolen by hackers. These photos were stored in iCloud digital vaults and were actually well defended by Apple security measures. But Apple wasn’t at fault here; it turns out that the celebrities themselves revealed the means to access their private stuff.

It’s called Phishing, and there are a million types of bait being used out there to fool or entice you. By clicking on a link in an innocent-looking email or answering a few simple questions, you can give away the keys to the kingdom. And even if you realize your mistake a couple of hours later, it is probably already too late to do anything about it. That naughty movie you made with your spouse during your romantic visit to Niagara Falls is already available from Peking to Panama!

Apple announced that they will soon start sending people alerts when attempts are made to change passwords, restore iCloud data to new devices or when someone logs in for the first time from new Apple devices. These are valuable controls, but really are only detective in nature and won’t actually prevent many data losses. That is why we recommend giving yourselves some real protection.

First, you should ensure that you educate yourself and your family about the dangers hackers and social engineers pose, and the techniques they use to get at your stuff. Second, it is really a lot better to store important or sensitive data on local devices if possible. But if you must store your private data in the cloud, be sure it is well encrypted. Best of all, use some sort of strong multi-factor authentication technique to protect your stuff from being accessed easily by hackers. By that I mean something like a digital certificate or an RSA hard token – something you have or something you are, not just something you know.

If you do these things, then it’s a good bet your “special moments” won’t end up in your Momma’s inbox!

Thanks to John Davis for this post.

Digital Images and Recordings: How Can We Deal with the Loss of Trust?

For many decades now the human race has benefitted from the evidentiary value of surveillance videos and audio recordings. Human beings cannot be relied on to give accurate accounts of events that they have witnessed. It is a frustrating fact that eye witness testimony is highly inaccurate. More often than not, people are mistaken in their recollections or they simply fail to tell the truth. But, with some reservations, we have learned to trust our surveillance recordings. Sure, analog videos and audio recordings can be tampered with. But almost universally, analysis of such tampered material exposes the fraud. Not so anymore!

Virtually every camera, video recorder and audio recorder on the planet is now digital. And it is theoretically possible to manipulate or totally forge digital recordings perfectly. Every year now, computer generated images and sounds used in movies are becoming more seamless and convincing. I see no reason at all why we couldn’t make totally realistic-appearing movies that contain not a single human actor or location shot. Just think of it: Jimmy Stewart and John Wayne, in their primes, with their own voices, starring in a brand new western of epic proportions! Awesome! And if Hollywood can do it, you can bet that a lot of other less reputable individuals can do it as well.

So what are we going to do about surveillance recordings (everything from ATMs and convenience store videos to recordings made by the FBI)? We won’t be able to trust that they are real or accurate anymore. Are we going to return to the old days of relying on eye witness testimony and the perceptiveness of juries? Are we going to let even more lying, larcenous and violent offenders off scot free than we are today? I don’t think we as a society will be able to tolerate that. After all, many crimes don’t produce any significant forensic evidence such as finger prints and DNA. Often, video and audio recordings are our only means of identifying the bad guys and what they do.

This means that we are going to have to find ways and means to certify that the digital recordings we make remain unaltered. (Do you see a new service industry in the offing)? The only thing I can think of to solve the problem is a service similar in many ways to the certificate authorities and token providers we use today. Trusted third parties that employ cryptographic techniques and other means to ensure that their equipment and recordings remain pristine.
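As a rough illustration of the idea, here is a stdlib-only Python sketch: digest the recording at capture time and sign the digest, so any later alteration is detectable. A real certification service would use asymmetric signatures and trusted timestamps rather than the shared HMAC key assumed here.

```python
import hashlib
import hmac

# Key held only by the certifying authority. A production service would use
# asymmetric signatures and trusted timestamps instead of a shared secret.
SIGNING_KEY = b"authority-secret-key"

def certify(recording: bytes) -> str:
    """Return a certificate: an HMAC over the recording's SHA-256 digest."""
    digest = hashlib.sha256(recording).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify(recording: bytes, certificate: str) -> bool:
    """True only if the recording is byte-for-byte what was certified."""
    return hmac.compare_digest(certify(recording), certificate)

original = b"\x00\x01 camera frames ..."
cert = certify(original)
```

A single flipped byte in the recording changes the digest, so `verify` fails on any tampered copy while the untouched original still checks out.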

But that still leaves the problem of the recordings of events that individuals make with their smart phones and camcorders. Can we in all good faith trust that these recordings are any more real than the surveillance recordings we are making today? These, too, are digital recordings and can theoretically be perfectly manipulated. But I can’t see the average Joe going through the hassle and spending the money necessary to certify their private recordings. I can’t see a way out of this part of the problem. Perhaps you can come up with some ideas that would work?

Thanks to John Davis for writing this post.