Web Proxy Scanning – Attack or Desperate Search for Free Information Flow

I remember when I was coming up in the infosec world, there used to be a rallying cry among “hackers” that “information wants to be free”. Certainly, we know from history and the present that information freedom has a high value to democratic society. The fact that unrestrained communications can be used to cause social, economic and political change is a given.

I often encounter hundreds of web proxy probes against our HoneyPoints every day. As I look through the logs, research the various traffic and analyze any new events, I am in the habit of largely ignoring these simple probes. Today, however, it occurred to me that many of these probes (likely not all, but many) were folks in less open countries trying to find access mechanisms to get unrestricted access to the web. They may well be searching for an SSL-wrapped pipe to retrieve current news, conversations, applications and other data from sources that the “powers that be” in their country would rather not have them see.

Of course, I know that not all proxy scans are for the purpose of escaping political oppression. I know that there are attackers, cyber-stalkers, pr0n fanatics and criminals all looking for proxies too. I also know, first hand, from our HoneyPoints that when they think they find them, many of these probes turn out to be less “CNN” and more attempts to break into the organization offering the proxy. I have seen more than my share of proxied, “internal” probes when attackers believe that their new “proxy” is real and useful.

But even granting that some folks use these tools for illicit purposes, I think others must depend on them for free access to uncensored information. Of course, the big question is: how can we help the folks who would like to use the proxy for legitimate public access to free information while refusing illicit access through our system? This is very, very difficult without resorting to blacklisting, if we want to offer access to the net as a whole.

However, one of my engineer friends chimed in that perhaps access to the entire web is not really needed. What if you created a system that had proper controls in place to prevent most attacks, but proxied traffic only to a whitelist of sites? You would still be acting as a sort of “information moderator,” in that you could control the sources, but what if the default page listed the sites that were allowed, and the whitelist covered the most common news sites and other commonly sought sources of information that had been vetted beforehand? Not a totally optimal situation, I understand, but better than the current scenario for some folks.
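The core of such a whitelist proxy is a simple gatekeeping check before any request is forwarded. As a rough sketch (the specific hosts and function names here are hypothetical, not part of any existing tool), the decision logic might look like this:

```python
"""Sketch of the whitelist check for a restricted web proxy.

The allowed hosts below are hypothetical placeholders; a real
deployment would load a curated, vetted list.
"""
from urllib.parse import urlparse

# Hypothetical vetted destinations (placeholders for vetted news sites).
ALLOWED_HOSTS = {"www.bbc.com", "www.reuters.com"}


def is_allowed(url: str) -> bool:
    """Return True only if the request targets a vetted host over HTTP(S)."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https"):
        return False
    return parts.hostname in ALLOWED_HOSTS
```

A proxy handler would call `is_allowed()` before forwarding each request, and answer anything else with the default page listing the permitted sites. Because the check is by destination host, internal addresses and arbitrary attack targets are refused by default rather than blacklisted one by one.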

The question is, how could such a solution be created? How could it be established and managed? How would sites get vetted and could existing software be used to create these mechanisms or would new tools require development cycles?

If you have thoughts on this idea, please drop us a line. I would be very interested in your feedback!
