Ask The Experts: Favorite HoneyPoint Component

This time around, we got a question from a client where HoneyPoint was being demoed for the experts.

Q: “What is your favorite component of HoneyPoint and why? How have you used it to catch the bad guys?”

Jim Klun started off with:

My favorite component is the simplest: HoneyPoint Agent. 

Its ease of deployment, and the simple fact that every alert from an Agent is of note – someone really did touch an internal service on a box where no such service legitimately exists – make it attractive. 
No one will argue with you about meaning. 

I have recently seen it detect a new MSSQL worm (TCP 1433) within a large enterprise – with the information coming from my own laptop. The Agent I had deployed on the laptop had a 1433 listener, and it captured the payload from an attacking desktop box located in an office in another US state. 

The HoneyPoint Agent info was relayed to a corporate team that managed a global IPS. They confirmed the event and immediately updated their IPS that was – ideally – protecting several hundred thousand internal machines from attack. 

HoneyPoint Agent: it’s simple, and it works.

Adam Hostetler added his view:

I’m a simple, no frills guy, so I just like the regular old TCP listener component built into Agent. We have stood these up on many engagements and onsite visits and picked up unexpected traffic. Sometimes malware, sometimes a misconfiguration, or sometimes something innocuous (inventory management). I also find it useful for research by exposing it to the Internet.
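For readers who want to experiment with the general idea, here is a minimal sketch of a plain TCP listener that records whatever touches an unused port. This is not HoneyPoint’s implementation, just a hypothetical illustration of why any connection to a fake service is worth an alert; the port number and log path are assumptions.

# listener_sketch.py - minimal fake-service listener (illustrative only)
import socket
import datetime

PORT = 1433          # assumed port; pick any port with no real service behind it
LOG_FILE = "touches.log"

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        conn.settimeout(5)
        try:
            payload = conn.recv(4096)      # grab whatever the caller sends first
        except socket.timeout:
            payload = b""
        finally:
            conn.close()
        with open(LOG_FILE, "a") as log:   # every line here is, by definition, suspicious
            log.write("%s %s:%d %r\n" % (datetime.datetime.now().isoformat(),
                                         addr[0], addr[1], payload))

if __name__ == "__main__":
    main()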

John Davis closed with a different view:

My favorite HoneyPoint is Wasp. Watching how skilled attackers actually compromise whole networks by initially compromising one user machine gives me the shivers – especially since most networks we see aren’t properly enclaved and monitored. If I were a CISO, knowing what is on my network at all times would be of primary importance, including what is going on on the client side. Wasp gets you that visibility without all the traditional overhead and complexity of other endpoint monitoring and whitelisting tools.
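As a rough illustration of the kind of client-side visibility John is describing (a sketch of the general idea, not how Wasp itself works), the snippet below inventories running processes on a workstation and flags anything outside a known-good list. The allowlist contents and the psutil dependency are assumptions for the example.

# process_watch_sketch.py - flag processes not on a known-good list (illustrative only)
import psutil   # third-party: pip install psutil

# assumed allowlist; a real deployment would build this from a baseline of the machine
KNOWN_GOOD = {"explorer.exe", "svchost.exe", "chrome.exe", "outlook.exe"}

def unexpected_processes():
    findings = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        name = (proc.info["name"] or "").lower()
        if name and name not in KNOWN_GOOD:
            findings.append((proc.info["pid"], name, proc.info["exe"]))
    return findings

if __name__ == "__main__":
    for pid, name, path in unexpected_processes():
        print("unexpected process: pid=%s name=%s path=%s" % (pid, name, path))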

Have a question about HoneyPoint? Want to talk about your favorite component or use case scenario? Hit us on Twitter (@lbhuston or @microsolved). We can’t wait to hear from you. Feel free to send us your question for the experts. Readers whose questions we pick for the blog get a little surprise for their contribution. As always, thanks for reading and stay safe out there! 

Infosec, The World & YOU Episode 3 is Out!

Our newest episode is out, and this time we are joined by a very special guest, @TSGouge, who discusses social engineering at the corporate and nation-state scale. Victoria reveals her new plans to take over the world, and Brent tries to keep up with these gals, who are straight-up geniuses. We also pontificate on Syria and the potential for cyber fallout from the action going on over there.

Check it out here

Have a global real world/cyber issue you want us to tackle? Observed an odd event that ties to a real world cause in the Internets? Drop us a line ~ we’d love to hear about it or get you on the show! 

You can find Brent on Twitter at @lbhuston and Victoria stars as @gisoboz. Get in touch! 

Ask The Experts: New Device Check Lists

This time around on Ask The Experts, we have a question from a reader and it got some great responses from the team:

 

Q: “I need a quick 10 item or less checklist that I can apply to new devices when my company wants to put them on our network. What kinds of things should I do before they get deployed and are in use around the company?”

 

Bill Hagestad started us off with:

The top 10 checklist items a CISO (or equivalent authority) should effectively manage before installing, configuring and managing new devices on a network include the following:

 

1) Organize your staff and prepare them for the overall task of documenting and diagramming your network infrastructure – give them your commander’s network management intent;

2) Create a physical and logical network map – encourage feedback from your team regarding placement of new hardware and software;

3) Use industry standards for your network, including physical and logical security – take a good look at the NIST Special Publication SP 800-XX series;

4) Make certain that you and your team are aware of the requisite compliance standards for your business and industry. This helps ensure you stay within legal guidelines before installing new devices – and you may discover that the hardware or software you are considering isn’t necessary after all;

5) After you have created the necessary network maps in Step 2 above, conduct a thorough inventory of all infrastructure that is either critical or important to your business, then document this baseline;

6) Create a hardware/software configuration change procedure, or if you already have this in place, have your team review it for accuracy. Make certain everyone on the team knows to document all changes/moves/additions on the network;

7) Focus not only on the correlation of newly implemented devices on the internal networks, but also look at the dependencies and effects on external infrastructure such as voice/data networks – nothing is worse than making an internal change to your network and having your Internet access go down unnecessarily;

8) Ensure that new network devices being considered integrate gracefully into your existing logging and alerting mechanisms; no need to install something new only to have to recreate the proverbial wheel in order to monitor it;

9) Consider the second and third order effects of newly installed devices on the infrastructure and their potential impact on remote workers and mobile devices used on the network;

10) Install HoneyPoint Security Server (HPSS) to agentlessly and seamlessly monitor external and potential internal threats to your newly configured network.

 

Of course, a very authoritative guide is published by the National Security Agency, appropriately called the “Manageable Network Plan,” and available for download at:

 

http://www.nsa.gov/ia/_files/vtechrep/ManageableNetworkPlan.pdf


Jim Klun added:

1. Make sure the device is necessary and not just a whim on the part of management.   Explain that each new device increases risk. 

2. If the device’s function can be performed by an existing internal service, use that service instead. 

3. Inventory new devices by name, IP addresses, function and – most importantly – owners. There should be a device owner and a business owner who can verify continued need for the device. Email those owners regularly, querying them about continued need. Make sure that these folks have an acknowledged role supporting the application running on the device and are accountable for its security. 

4. Research the device and the application(s) it supports. Have no black boxes in your datacenter. Include an abstract of this in the inventory. 

5. Make sure a maintenance program is in place – hold the app and device owner accountable. 

6. Do a security audit of the device when it is fully configured. Hit it with vulnerability scanners and make sure that this happens at least quarterly. 

7. Make sure monitoring is in place and make very sure all support staff are aware of the device and any alerts it may generate. Do not blind-side the operations staff. 

8. If the device can log its activities ( system and application ) to a central log repository, ensure that happens as part of deployment. 

9. Make sure the device is properly placed in your network architecture. Internet-exposed systems should be isolated in an Internet DMZ.  Systems holding sensitive data should similarly be isolated. 

10. Restrict access to the device as narrowly as possible. 

 

Finally, if you can, log the network traffic of every device in your environment and create a summary of what is “normal” for that device.  

Your first indication of a compromise is often a change in the way a system “talks”. 
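As a rough illustration of that baselining idea (a sketch, not a prescribed tool), the snippet below summarizes a connection log into per-device destination ports and flags anything not present in a stored baseline. The log format and file names are assumptions for the example.

# traffic_baseline_sketch.py - compare today's connections to a stored baseline (illustrative only)
import json
from collections import defaultdict

BASELINE_FILE = "baseline.json"     # assumed format: {"10.1.1.5": ["443", "53"], ...}
CONN_LOG = "connections.log"        # assumed format: "<src_ip> <dst_ip> <dst_port>" per line

def summarize(log_path):
    seen = defaultdict(set)
    with open(log_path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 3:
                src, dport = parts[0], parts[2]
                seen[src].add(dport)
    return seen

if __name__ == "__main__":
    with open(BASELINE_FILE) as fh:
        baseline = {ip: set(ports) for ip, ports in json.load(fh).items()}
    for src, ports in summarize(CONN_LOG).items():
        new_ports = ports - baseline.get(src, set())
        if new_ports:   # a change in how a system "talks" is often the first sign of compromise
            print("%s is now talking on unexpected ports: %s" % (src, ", ".join(sorted(new_ports))))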

 

Adam Hostetler chimed in with: 

This will vary a lot depending on the device, but here are some suggestions:

 

1. Ensure any default values are changed. Passwords, SNMP strings, wireless settings etc.

2. Disable any unnecessary services

3. Ensure it’s running the latest firmware/OS/software

4. Add the device to your inventory/map, catalog MAC address, owner/admin, etc.

5. Perform a small risk assessment on the device. What kind of risk does it introduce to your environment? Is it worth it?

6. Test and update the device in a separate dev segment, if you have one.

7. Make sure the device fits in with corporate usage policies

8. Perform a vulnerability assessment against the device. 

9. Search the internet for any known issues, vulnerabilities or exploits that might affect the device.

10. Configure the device to send logs to your logging server or SIEM, if you have one.
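If the device itself can’t forward logs natively, something on or near it often can. As a hedged sketch of the general idea (not any specific vendor’s configuration), here is how a collection script or application could relay events to a central syslog server using Python’s standard library; the server address and message are assumptions.

# central_logging_sketch.py - forward events to a central syslog server (illustrative only)
import logging
import logging.handlers

SYSLOG_SERVER = ("192.0.2.10", 514)   # assumed address of the central log host

logger = logging.getLogger("new_device_events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=SYSLOG_SERVER))

# anything logged here now lands on the central repository as well as locally
logger.info("device=branch-switch-07 event=config_change user=admin")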

 

And John Davis got the last word by adding: 

From a risk management perspective, the most important thing a CISO needs to ensure is in place before new devices are implemented on the network is a formal, documented Systems Development Life Cycle (SDLC) or Change Management program. Having such a program in place means that all changes to the system are planned and documented, that security requirements and risk have been assessed before devices are purchased and installed, that system configuration and maintenance issues have been addressed, that the new devices are included in business continuity planning, that proper testing of devices (before and after implementation on the network) is undertaken, and more. If a good SDLC/Change Management program is not in place, CISOs should give its development and implementation a high priority among the tasks they wish to accomplish.

 

Whew, that was a great question and there is some amazing advice here from the experts! Thanks for reading, and until next time, stay safe out there! 

 

Got a question for the experts? Give us a shout on Twitter (@microsolved or @lbhuston) and we’ll base a column on your questions!

August Touchdown Task: Change Management Audit

This month’s touchdown task is to take a quick audit of your organization’s change management process. Give it a quick walkthrough.

  • Make sure that you are tracking when admins make changes to machine configurations or network device configs
  • Are proper peer review and approval processes being followed?
  • Check to make sure that the proper folks are in the loop for various kinds of communication, error handling and reporting
  • Review risk acceptance for changes and make sure it meets your expected processes
  • Examine a couple of changes and walk them through the entire process to see if things are falling through the cracks
  • Update any change management documentation to reflect new processes or technologies that may be in place now

Give this a quick review this month and you can rest assured for a while that change management is working properly. With the coming fall and holiday rush ahead, you’ll know you have this base covered and can depend on it as a good foundation for the rest of your security initiatives. 

Until next time, as always, thanks for reading and stay safe out there! 

Always Remember the Business, InfoSec Folks

I just got out of yet another meeting with a big company partner for whom we act as an information security and threat advisor. In that meeting, I listened to a keyed-up, hypercaffeinated group of good guy security geeks tell their senior executives about the latest set of DLP controls they were putting in place. They spent 45 minutes describing packet-level checking, data flows, architecture diagrams and the technology of their solution set in painful (even for me) detail. Many of the executives were dozing lightly while the geeks spun their techno-web. That’s when things took a turn for the worse…

The COO asked them one single question, interrupting a slide about email data flows ~ “How will this impact the business of ‘Dan’s’ group and the ‘Singularity’ project we have been working on since 2011? Doesn’t it depend on some of that data?” (**Names changed to protect the innocent and the guilty…)

Then, NOTHING HAPPENED. You could have heard a pin drop. Dead silence for close to two minutes. Finally, the COO repeated the question. Still nothing. He asked the lead geek if he knew who Dan was, and the geek said yes. He asked if Dan had been interviewed by the geeks prior to this. They said, no. The COO erupted in a rage, railing about how Singularity was the largest new line of business launch in the history of the company and how the projected income from the business would change the landscape of the firm. There were a LOT of apologies and some amount of notes taken to immediately consult with Dan. Much geek cred was lost. It will be a while before they get to present to the executives again like that. 

I tell you this story simply to remind all infosec folks about something I see all too often. It’s about the business. We are about the business. We are there to secure the business, nurture it, protect it, empower it to succeed. If that’s not where you or your team are, then you are doing it wrong. Get it right. Talk to the business. Speak their language. Give up on the “beauty of the baud” approach. Your packets and technology stack may be gorgeous to you, but if they don’t align with the business, then they won’t do anyone, including you, any good at all. Keep that in mind at all times. Also, remember to always talk to Dan ~ he’s a nice guy and he appreciates it. He can give you the answers you need, and usually he desperately wants to understand what you can do to make his project a success. Get to know all the Dans in your organization. They drive the world, you support them, and together you build the business – and all of you will succeed!

Three Tough Questions with Aaron Bedra

This time I interviewed Aaron Bedra about his newest creation ~ Repsheet (https://github.com/repsheet).


Aaron’s Bio:

Aaron is the Application Security Lead at Braintree Payments. He is the co-author of Programming Clojure, 2nd Edition as well as a frequent contributor to the Clojure language. He is also the creator of Repsheet, a reputation based intelligence and security tool for web applications.


Question #1:  You created a tool called Repsheet that takes a reputational approach to web application security. How does it work and why is it important to approach the problem differently than traditional web application firewalling?

I built Repsheet after finding lots of gaps in traditional web application security. Simply put, it is a web server module that records data about requests, and either blocks traffic or notifies downstream applications of what is going on. It also has a backend to process information over time and outside the request cycle, and a visualization component that lets you see the current state of the world. If you break down the different critical pieces that are involved in protecting a web application, you will find several parts:

* Solid and secure programming practices

* Identity and access management

* Visibility (what’s happening right now)

* Response (make the bad actors go away)

* HELP!!!! (DDoS and other upstream based ideas)

* A way to manage all of the information in a usable way

This is a pretty big list. There are certainly some things on this list that I haven’t mentioned as well (crypto management, etc), but this covers the high level. Coordinating all of this can be difficult. There are a lot of tools out there that help with pieces of this, but don’t really help solve the problem at large.

The other problem I have is that although I think having a WAF is important, I don’t necessarily believe in using it to block traffic. There are just too many false positives and things that can go wrong. I want to be certain about a situation before I act aggressively towards it. This being the case, I decided to start by simply making a system that records activity and listens to ModSecurity. It stores what has happened and provides an interface that lets the user manually act based on the information. You can think of it as a half baked SIEM.

That alone actually proved to be useful, but there are many more things I wanted to do with it. The issue was doing so in a manner that didn’t add overhead to the request. This is when I created the Repsheet backend. It takes in the recorded information and acts on it based on additional observation. This can be done in any form and it is completely pluggable. If you have other systems that detect bad behavior, you can plug them into Repsheet to help manage bad actors.  
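To make the pattern concrete, here is a heavily hedged sketch of the general idea Aaron describes – record per-actor activity outside the main logic and consult a blacklist before serving a request. It is not Repsheet’s actual module or schema; the Redis key names, WSGI framing and block response are all assumptions for illustration.

# reputation_sketch.py - record request activity per IP and honor a blacklist (illustrative only)
import time
import redis   # third-party: pip install redis

r = redis.Redis()   # assumed local Redis instance

def reputation_middleware(app):
    def wrapped(environ, start_response):
        ip = environ.get("REMOTE_ADDR", "unknown")
        # record the hit so a separate backend process can score it later, off the request path
        r.lpush("activity:%s" % ip, "%s %s" % (time.time(), environ.get("PATH_INFO", "/")))
        if r.get("blacklist:%s" % ip):   # assumed key layout, not Repsheet's real schema
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Denied"]
        return app(environ, start_response)
    return wrapped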

The visualization component gives you the detailed and granular view of offenses in progress, and gives you the power to blacklist with the click of a button. There is also a global view that lets you see patterns of data based on GeoIP information. This has proven to be extremely useful in detecting localized botnet behavior.

So, with all of this, I am now able to manage the bottom part of my list. One of the pieces that was recently added was upstream integration with Cloudflare, where the backend will automatically blacklist via the Cloudflare API, so any actors that trigger blacklisting will be dealt with by upstream resources. This helps shed attack traffic in a meaningful way.

The piece that was left unanswered is the top part of my list. I don’t want to automate good programming practices. That is a culture thing. You can, of course, use automated tools to help make it better, but you need to buy in. The identity and access management piece was still interesting to me, though. Once I realized that I already had data on bad actors, I saw a way to start to integrate this data that I was using in a defensive manner all the way down to the application layer itself. It became obvious that with a little more effort, I could start to create situations where security controls were dynamic based on what I know or don’t know about an actor. This is where the idea of increased security and decreased friction really set in, and I saw Repsheet become more than just a tool for defending web applications.

All of Repsheet is open sourced with a friendly license. You can find it on Github at:

https://github.com/repsheet

There are multiple projects that represent the different layers that Repsheet offers. There is also a brochureware site at http://getrepsheet.com that will soon include tutorial information and additional implementation examples.

Question #2: What is the future of reputational interactions with users? How far do you see reputational interaction going in an enterprise environment?

For me, the future of reputation based tooling is not strictly bound to defending against attacks. I think once the tooling matures and we start to understand how to derive intent from behavior, we can start to create much more dynamic security for our applications. If we compare web security maturity to the state of web application techniques, we would be sitting right around the late 90s. I’m not strictly talking about our approach to preventing breaches (although we haven’t progressed much there either), I’m talking about the static nature of security and the impact it has on the users of our systems. For me the holy grail is an increase in security and a decrease in friction.

A very common example is the captcha. Why do we always show it? Shouldn’t we be able to conditionally show it based on what we know or don’t know about an actor? Going deeper, why do we force users to log in? Why can’t we provide a more seamless experience if we have enough information about devices, IP address history, behavior, etc? There has to be a way to have our security be as dynamic as our applications have become. I don’t think this is an easy problem to solve, but I do think that the companies that do this will be the ones that succeed in the future.
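A tiny sketch of what that dynamic friction might look like in application code (purely hypothetical helper names and fields, not part of Repsheet):

# dynamic_friction_sketch.py - only add friction for actors we know little about (illustrative only)
def needs_captcha(actor):
    # actor is a hypothetical dict built from reputation data: device history, IP history, prior behavior
    if actor.get("blacklisted"):
        return True                      # known bad: maximum friction, or an outright block
    if actor.get("seen_before") and actor.get("good_history"):
        return False                     # known good: keep the experience seamless
    return True                          # unknown actor: ask for proof of humanity

# example: a returning, well-behaved visitor skips the captcha
print(needs_captcha({"seen_before": True, "good_history": True, "blacklisted": False}))  # False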

Tools like Repsheet aim to provide this information so that we can help defend against attacks, but also build up the knowledge needed to move toward this kind of dynamic security. Repsheet is by no means there yet, but I am focusing a lot of attention on trying to derive intent through behavior and make these types of ideas easier to accomplish.

Question #3: What are the challenges of using something like Repsheet? Do you think it’s a fit for all web sites or only specific content?

I would like to say yes, but realistically I would say no. The first group this doesn’t make sense for is sites without a lot of exposure or potential loss. If you have nothing to protect, then there is no reason to go through the trouble of setting up these kinds of systems. They basically become a part of your application infrastructure, and it takes dedicated time to make them work properly. Along those lines, static sites with no users and no real security restrictions don’t necessarily see the full benefit. That being said, there is still a benefit: visibility into what is going on from a security standpoint can help spot events in progress or even pending attacks. I have seen lots of interesting things since I started deploying Repsheet, even botnets sizing up a site before launching an attack. Now that I have seen that, I have started to turn it into an early warning system of sorts to help prepare.

The target audience for Repsheet is companies that have already done the web security basics and want to take the next step forward. A full Repsheet deployment involves WAF and GeoIP based tools as well as changes to the application under the hood. All of this requires time and people to make it work properly, so it is a significant investment. That being said, the benefits of visibility, response to attacks, and dynamic security are a huge advantage. Like every good investment in infrastructure, it can set a company apart from others if done properly.

Thanks to Aaron for his work and for spending time with us! Check him out on Twitter, @abedra, for more great insights!

Quick PHP Malware vs AV Update

It’s been a while since I checked on the status of PHP malware versus anti-virus. So, here is a quick catch up post. (I’ve been talking about this for a while now. Here is an old example.)

I took a randomly selected piece of PHP malware from the HITME and checked it out this afternoon. Much to my surprise, the malware detection via AV has gotten better.

The malware I grabbed for the test turned out to be a multi-stage PHP backdoor. The scanner thought it was exploiting a vulnerable WordPress installation. 

I unpacked the malware parts into plain text and presented both the original packed version from the log and the unpacked version to VirusTotal for detection testing. As you know, in the past, detection of malware PHP was sub single digits in many cases. That, at least to some extent has changed. For those interested, here are the links to see what was tripped.

Decoded to plain text vs Encoded, as received

As you can see, decoded to plain text, the malware scored a detection rate of 44% (19/43), which is significantly improved from a year or so ago. Even more exciting, the undecoded attack in its raw form triggered a detection rate of 30% (13/44)! The undecoded result is HUGE, given that the same test a year or so ago often yielded 0–2% detection rates. So, it’s getting better, just SLOWLY.

Sadly though, even with the improvements, we are still well below a 50% detection rate, and many of the AV solutions that fail to catch the PHP malware are big name vendors with commercial products that organizations running PHP in commercial environments would likely be depending on. Is your AV in the missing zone? If so, you might want to consider other forms of more nuanced detection.

Now, obviously, organizations aren’t just depending on AV alone for detection of web malware. But, many may be. In fact, a quick search for the dropped backdoor file on Google showed 58,800 systems with the dropped page name (a semi-unique indicator of compromise). With that many targets already victim to this single variant of PHP backdoors, it might be worth checking into if you are a corporate PHP user.
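If you run PHP apps and want a quick, low-tech check, something like the sketch below walks the web root looking for a known dropped file name. The web root path and the indicator file name are placeholders – substitute the actual indicator of compromise you are hunting for.

# ioc_file_hunt_sketch.py - look for a known dropped backdoor file name under the web root (illustrative only)
import os

WEB_ROOT = "/var/www"                 # assumed web root; adjust for your environment
INDICATOR = "dropped_backdoor.php"    # placeholder name; use the real IOC file name

hits = []
for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
    for name in filenames:
        if name.lower() == INDICATOR.lower():
            hits.append(os.path.join(dirpath, name))

for path in hits:
    print("possible compromise indicator:", path)
if not hits:
    print("no matches found under", WEB_ROOT)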

Until next time, take a look around for PHP in your organization. It is a commonly missed item in the patch and update cycles. It also has a pretty wide security posture with a long list of known attack tools and common vulnerabilities in the coding patterns used by many popular products. Give any PHP servers you have a deeper inspection and consider adding more detection capability around them. As always, thanks for reading and stay safe out there! 

Ask The Experts: Too Much Data

Q: “I have massive amounts of log files I have to dig through every day. I have tried a full blown SIEM, but can’t get it to work right or get my management to support it with budget. Right now I have Windows logs, firewall logs and AV logs going to a syslog server. That gives me a huge set of text files every day. How can I make sense of all that text? What tools and processes do you suggest? What should I be looking for? HELP!!!!”

 

Adam Hostetler answered with:

 

I would say give OSSEC a try. It’s a free log analyzer/SIEM. It doesn’t have a GUI with 100 different dashboards and graphs; it’s all CLI and e-mail based (though there is a simple web interface for it also). It is easy to write rules for, and it has default rules for many things, except for your AV. You can write simple rules for that, especially if you are just looking for items AV caught. It does take some tuning, as with all analysis tools, but it isn’t difficult after learning how OSSEC works. If you want to step it up a bit, you can feed OSSEC alerts into Splunk, where you can trend alerts or create other rules and reports.

 

Bill Hagestad added:

 

First things first – don’t be or feel overwhelmed. Log files are what they are: much disparate data from a variety of sources that needs reviewing sooner rather than later.

 

Rather than looking at another new set of tools or the latest software gizmo the trade rags might suggest based on the flavor of the month, try a much different and more effective approach to the potential threat surface of your network and enterprise.

 

First, take a look at what resources need to be protected, in order of importance to your business. Once you have prioritized these assets, determine the minimum level of acceptable risk you can assign to each resource you have just prioritized.

 

Next, make two columns on either a piece of paper or a whiteboard. In one column, list your resources in order of protection requirements, i.e. servers with customer data, servers with intellectual property, and so forth. In the column to the right of the asset list, plug in your assigned levels of risk. Soon you will see which areas and assets within your organization/enterprise you should pay the most attention to in terms of threat mitigation.

 

After you have taken the steps to complete your own self-assessment of risk, contact MicroSolved for both a vulnerability assessment and penetration test to provide an additional, objective perspective on threats to your IT infrastructure and commercial enterprise. 

 

Finally, Jim Klun weighed in with: 

 

You are way ahead of the game by just having a central log repository. You can go to one server and look back in time to the point where you suspect a security incident occurred.

 

And what you have – Windows logs, firewall logs, and AV – is fantastic.  Make sure all your apps are logging as well ( logon success, logon failure).

Too often I have seen apps attacked and all I had in syslog was OS events that showed nothing.

 

Adam’s suggestion, OSSEC, is the way to go to keep cost down… but don’t just install and hope for the best.

You will have to tweak the OSSEC rules and come up with what works.

 

Here’s the rub: there is no substitute for knowing your logs – in their raw format, not pre-digested by a commercial SIEM or OSSEC.

 

That can seem overwhelming. For that, some Unix commands and regular expressions are your friends.

 

So:

 

zcat auth.log | grep ssh | egrep -i 'failed|accepted'

 

produces:

 

Jul  4 16:32:16 dmz-server01 sshd[8786]: Failed password for user02 from 192.168.105.51 port 38143 ssh2

Jul  4 16:33:53 dmz-server01 sshd[8786]: Accepted password for user01 from 192.168.105.38 port 38143 ssh2

Jul  4 16:36:05 dmz-server01 sshd[9010]: Accepted password for user01 from 192.168.105.38 port 38315 ssh2

Jul  5 01:04:00 dmz-server01 sshd[9308]: Accepted password for user01 from 192.168.105.38 port 60351 ssh2

Jul  5 08:21:58 dmz-server01 sshd[9802]: Accepted password for user01 from 192.168.105.38 port 51436 ssh2

Jul  6 10:21:52 dmz-server01 sshd[21912]: Accepted password for user01 from 192.168.105.38 port 36486 ssh2

Jul  6 13:43:10 dmz-server01 sshd[31701]: Accepted password for user01 from 192.168.105.30 port 34703 ssh2

Jun 26 11:21:02 dmz-server01 sshd[31950]: Accepted password for user01 from 192.168.105.70 port 37209 ssh2

 

 

Instead of miles of gibberish the log gets reduced to passed/fail authentication attempts.

 

You can spend an hour with each log source (firewall, AV, etc.) and quickly pare them down to what’s interesting.

 

Then make SURE your OSSEC rules cover what you want to see.

If that does not work – cron a script to parse the logs of interest using your regular expression expertise and have an email sent to you when something goes awry.
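As a hedged sketch of what such a cron-driven script might look like (the log path, pattern, and mail settings are all assumptions for illustration):

# log_watch_sketch.py - cron-friendly: grep a log for a pattern and e-mail any hits (illustrative only)
import re
import smtplib
from email.message import EmailMessage

LOG_FILE = "/var/log/auth.log"                  # assumed log of interest
PATTERN = re.compile(r"Failed password", re.I)  # assumed "something went awry" pattern
MAIL_TO = "secops@example.com"                  # placeholder addresses and relay
MAIL_FROM = "logwatch@example.com"
SMTP_HOST = "localhost"

def main():
    with open(LOG_FILE, errors="replace") as fh:
        hits = [line.rstrip() for line in fh if PATTERN.search(line)]
    if not hits:
        return
    msg = EmailMessage()
    msg["Subject"] = "log watch: %d matching lines" % len(hits)
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content("\n".join(hits[:200]))      # cap the body so one bad day doesn't mailbomb you
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    main()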

 

Revisit the logs manually on a periodic basis – they will change. New stuff will happen. Only a human can catch that.

 

Take a look at:

http://www.securitywarriorconsulting.com/logtools/

 

The site lists a number of tools that may be useful.

 

John Davis added:

 

You voice one of the biggest problems we see in information security programs: monitoring! People tell us that they don’t have the proper tools and, especially, that they don’t have the manpower to perform effective logging and monitoring. What they are saying is true, but unfortunately it doesn’t excuse them from having to do it. If you have people’s financial data, health data (HIPAA) or credit card information (PCI), you are bound by regulation or mandate to properly monitor your environment – and that means management processes, equipment, vulnerabilities and software as well as logs and tool outputs. The basic problem here is that most organizations don’t have any dedicated information security personnel at all, or the team they have isn’t adequate for the workload. Money is tight and employees are expensive, so it is very difficult for senior management to justify the expenditure – paying a third party to monitor firewall logs is cheaper. But for real security there is no substitute for actual humans in the security loop – they simply cannot be replaced by technology. Unfortunately, I feel the only answer to your problem is for government and industry to realize this truth and mandate dedicated security personnel in organizations that process protected data.

 

As always, thanks for reading and if you have a question for the experts, either leave it in the comments, email us or drop us a line on Twitter at (@lbhuston). 

YAPT: Yet Another Phishing Template

Earlier this week, we gave you the touchdown task for July, which was to go phishing. In that post, we described a common scam email. I wanted to post an example, since some folks reached out on Twitter and asked about it. Here is a sample of the email I was discussing.

<paste>

Hi My name is Mrs. Hilda Abdul , widow to late Dr. Abdul A. Osman, former owner of Petroleum & Gas Company, here in Kuwait. I am 67 years old, suffering from long time Cancer of the breast.

From all indications my condition is really deteriorating and it’s quite obvious that I won’t live more than 3 months according to my doctors. This is because the cancer stage has gotten to a very bad stage.

I don’t want your pity but I need your trust. My late husband died early last year from Heart attack, and during the period of our marriage we couldn’t produce any child. My late husband was very wealthy and after his death, I inherited all his businesses and wealth .The doctor has advised me that I will not live for more than 3 months ,so I have now decided to spread all my wealth, to contribute mainly to the development of charity in Africa, America,

Asia and Europe .Am sorry if you are embarrassed by my mail. I found your e-mail address in the web directory, and I have decided to contact you, but if for any reason  you find this mail offensive, you can ignore it and please accept my apology. Before my late husband died he was major oil tycoon in Kuwait and (Eighteen Million Dollars)was deposited  in a Bank in cote d ivoire some years ago, that’s  all I have left now,

I need you to collect this funds and distribute it yourself to charity .so that when I die my soul can rest in peace. The funds will be entirely in hands and management. I hope God gives you the wisdom to touch very many lives that is my main concern. 20% of this money will be for your time and effort includin any expensese,while 80% goes to charity. You can get back to me via my private e-mail: (hilda.abdul@yahoo.com) God bless you.
1. Full name :
2. Current Address :
3. Telephone N° :
4. Occupation :
5. Age :
6. Country :

MRS. Hilda Abdul

<end paste>

As you can see, this is a common format of a phishing scam. In this case, you might want to edit the targeting mechanism a bit, so that they have to click through to a web page to answer or maybe even include a URL as supposed proof of the claim. That way you would have two ways to catch them, one by email reply and two by click through to the simple phish application.
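If you want the click-through variation, a minimal sketch like the one below sends each authorized test target a unique URL so replies and clicks can both be tracked. Every address, host name and token scheme here is a placeholder, and remember the permission step from the touchdown task post before sending anything.

# phish_mailer_sketch.py - send test phish with per-target tracking URLs (illustrative only)
import smtplib
import uuid
from email.message import EmailMessage

SMTP_HOST = "localhost"                        # placeholder relay
PHISH_BASE = "http://phish-test.example.com"   # placeholder URL for your catch page
TARGETS = ["user1@example.com", "user2@example.com"]   # your authorized test population

def send_test_phish():
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for target in TARGETS:
            token = uuid.uuid4().hex           # unique per target, so clicks can be attributed
            msg = EmailMessage()
            msg["Subject"] = "Please verify your details"
            msg["From"] = "helpdesk@example.com"
            msg["To"] = target
            msg.set_content("Please confirm your information here: %s/claim?id=%s" % (PHISH_BASE, token))
            smtp.send_message(msg)
            print("sent to %s with token %s" % (target, token))

if __name__ == "__main__":
    send_test_phish()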

As always, your mileage and paranoia may vary, but it is still pretty easy to get people to click or reply ~ even with age-old spam phish attacks like this. What kind of return percentages did you get? What lessons did you learn? Drop us a line on Twitter (@lbhuston) and let us know. 

July’s Touchdown Task: Go Phish Yourself!

This month’s touchdown task is to spend about an hour doing some phishing. Phish your user base, executives and other likely targets. Use the process as a basis for ongoing awareness and security training.

Phishing is a LOT easier and more effective than you might think. We’ve made it easy for you to do, with a free tool called MSI SimplePhish. You can learn exactly how to do it by clicking here.

Pay special attention to this step:

PreCursor: Obtain permission from your security management to perform these activities and to do phishing testing. Make sure your management team supports this testing BEFORE you engage in it.

You might need a couple more ideas for some phishing templates, so here are a couple of the most simple examples from real phishing going on right now:

1. Simply send a nonsensical subject line where the entire body of the message is the phishing URL. You might encode this to make it more fun using something like a URL shortener.

2. Copy one of those spam messages that go around where the target inherits 40 million dollars from an oil company exec in the Congo or somewhere. Check your spam folder for examples. Replace the URLs with your phish site URL and click send.

3.  Send a simple music trivia question, which is common knowledge, and tell them to click on the target URL to answer. Make it appear to be from a local radio station and if they answer correctly, they win a prize (movie tickets, concert tickets, etc.)

As a bonus, simply do what many testing vendors do ~ open your gmail spam folder and pick and choose any of the spam templates collected there. Lots to pick from. 

The exercise should be fun, easy and likely effective. If you need any help, drop us a line or give us a call. Until next month, stay safe out there!