HoneyPoint a Semi-Finalist for Innovation Awards in Columbus


MSI is proud to announce its nomination in the annual Innovation Awards, sponsored by TechColumbus, which recognize outstanding achievements in technology leadership and innovation. HoneyPoint, MicroSolved’s flagship software, has been nominated for Outstanding Product for companies with 50 or fewer employees.

On Thursday, February 4, 2010, the annual TechColumbus Innovation Awards will showcase Central Ohio’s many achievements by honoring its top innovators. It is a night of networking, prestige, and celebration. From a record number of nominees, winners in 13 award categories will be announced to an audience of 1,000+ attendees.

MicroSolved, Inc. is proud to be a Semi-Finalist in the Outstanding Product category. “It is an honor to be a Semi-Finalist for this award and to be recognized for our innovations. We look forward to the event and being surrounded by our peers, colleagues and mentors to learn if we will be named Outstanding Product,” commented Brent Huston, CEO and Security Evangelist.

Huston developed HoneyPoint Security Server three years ago, motivated by a keen desire to break the attacker cycle. Huston concludes, “Attackers like to scan for security holes. HoneyPoint lies in wait and traps the attacker in the act!”

The TechColumbus Innovation Awards celebrate the spirit of innovation by recognizing outstanding technology achievements in Central Ohio. This prestigious evening showcases the region’s advancements and promising future. For more information, visit http://www.techcolumbusinnovationawards.org. For more information on HoneyPoint, please visit http://microsolved.com/2009/HoneyPoint.html.

Don’t Forget Hacktivism as a Threat to Model

I loved this story. The idea that some “hackers” hack for political or social causes is not new. It dates back several years and has evolved from simple web defacements with social and political messages to a “new breed” of information theft, data disclosure and possibly even sabotage to further one’s views.

Today, all of the experts in the security field, myself included, spend a great deal of time teaching people that the primary data theft threat is organized crime rather than teenage vandalism. But, that said, we certainly can’t forget that hacktivism is still alive and well. In fact, given the explosive growth of the Internet, the continually expanding dependence on technology for everyday life and the common availability of so much data and access, hacktivism is likely to grow in popularity, not shrink.

That brings us to a huge issue. How do we know where some of the data that hacktivists would be interested in lives? Given that people are involved today in a myriad of social activities, use of social networks and such, how do we know who might have information that a hacktivist would want and who doesn’t? The answer, of course, is that we have to assume that someone in our organization might have data that is relevant to this threat, so we have to account for it when we create our threat models. If we happen to be a philanthropic organization, a government agency or a federal group, we definitely can’t overlook hacktivism as a threat, because our very existence yields reputational risk for us and a reputational trophy for many hacktivists if they make us a poster child.

While the hacktivism threat model is likely more one of opportunistic nature than dedicated, focused attacks against a given organization, that may not always hold true. One day it may not be all about what data YOU have and hold, but what data the people who WORK FOR YOU have and what roles they play in their personal lives. While this is not necessarily true today, the idea that hacktivists might one day target individuals to achieve social goals is not out of the question.

So, all of that said, how much thought have you given hacktivism? Does your risk assessment cover that as a threat? Have you done any threat models around politically or socially motivated attackers? If not, it might be a good idea to take a look at this threat vector. Their aims and goals may be different than what you had in mind when you last updated your threat models.

If You’re Still Using IE6, Read This!

We still see an alarming number of users visiting our sites with Internet Explorer 6 (IE6), although for the first time, IE8 and IE7 each held a slightly higher share than IE6.

We urge users who continue to use IE6 to update to IE7 or IE8, or switch to an alternative as soon as possible. There are numerous reasons for this. IE6 has been shown many times to be insecure: it lacks privacy options, offers no protection from XSS or phishing attacks, and is not compliant with common web standards. It is also much slower than modern browsers, particularly with JavaScript.
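
As a rough illustration of how a site might spot these visitors, a server-side check of the User-Agent header can flag IE6. This is a minimal sketch, assuming the classic IE6 User-Agent format; real-world detection has more edge cases:

```python
def is_ie6(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like Internet Explorer 6.

    IE6 identifies itself with "MSIE 6." in its User-Agent header;
    Opera historically spoofed MSIE, so we exclude it.
    """
    return "MSIE 6." in user_agent and "Opera" not in user_agent

# Example User-Agent strings (illustrative):
ie6_ua = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
ff_ua = "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/2009 Firefox/3.5"

print(is_ie6(ie6_ua))  # True
print(is_ie6(ff_ua))   # False
```

A site could use a check like this to show an upgrade notice to affected visitors.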

Upgrading your browser can have many benefits. The most important being enhanced security and privacy. Other benefits include a better browsing experience through better compliance and faster rendering. So please, upgrade your browsers!

Beware of ‘Free’ InfoSec

It’s tempting to gravitate toward security vendors who offer assessments on the “we find holes or it’s free” basis. I wanted to take a moment and express my thoughts on this approach.

First off, security testing choices should not be based on price. They should be based on risk. The goal is to reduce the risk that any given operation (application, network, system, process, etc.) presents to the organization to a level that is manageable. Trust me, I have been in the security business for 20 years and all vendor processes are NOT created equal. Many variations exist in depth, skill level, scope, reporting capability, experience, etc. As such, selecting security testing vendors based upon price is a really bad idea. Matching vendors’ specific experience, reporting styles and technical capabilities to your environment and needs is a far better solution, for too many reasons to expound upon here.

Second, the “find vulnerabilities or it’s free” mentality can really backfire for everyone involved. It’s hard enough for developers and technical teams to take their lumps from a security test when holes emerge, but to now also tie that to price makes it doubly difficult for them to take. “Great, I pay now because Tommy made some silly mistake!” is just one possibility. How do you think management may handle that? What about Tommy? Believe me, there can be long-term side effects for Tommy’s career, especially if he is also blamed for breaking the team’s budget in addition to causing them to fail an audit.

Thirdly, it actually encourages the security assessment team to make mountains out of molehills. Since they are rewarded only when they find vulnerabilities, and customer expectations of value are automatically built on severity (it’s human nature), it certainly (even if only unconsciously) behooves the security team to note even small issues as serious security holes. In our experience, this can drastically skew how both technicians and management perceive the risk of identified security issues, and it has even been known to cause knee-jerk reactions and unneeded panic when reports arrive that show things like simple information leakage as “critical vulnerabilities”. Clearly, unless the vendor is extremely careful and mindful of ethical behavior among their teams, you can get seriously skewed views between perceived risk and real-world risk, again primarily motivated by the need to find issues to make the engagement profitable.

In my opinion, let’s stick to plain old value. My organization helps you find and manage your risk. We help you focus on the specific technical vulnerabilities in networks, systems, applications and operations that attackers could exploit to cause you damage. To do this, my company employs security engineers. These deeply skilled experts earn a wage and thus cost money. Our services are based around the idea that the work we do has value. The damages that we prevent from occurring save your company money. Some of that money pays us for our services and thus, we pay our experts. Value. End of story.

Detection, Prevention Best Measure for Risk


For years now, security folks have been shouting to high heaven about the end of the world, cyber-terrorism, cyber-jihad and all of the other creative phrasings for increased levels of risk and attacks.

SANS Institute (SysAdmin, Audit, Network, Security) at least asks for good things, too. It is always, as they point out, so much easier to create a list of threats and attack points than a list of what we have done, and are doing right. It is human nature to focus on the shortcomings.

We have to create rational security. Yes, we have to protect against increases in risk, but we have to realize that we have only so many resources and risk will never approach zero!

We recently worked an incident where a complete network compromise was likely to have occurred. In that event, the advice of another analyst was to completely shut down and destroy the entire network, rebuild each and every device from the ground up and come back online only when a state of security was created. The problem: the business of the organization would have been decimated by such a task. Removing the IT capability of the organization as a whole was simply not tenable.

Additionally, even if all systems were “turned and burned” and the architecture rebuilt from the ground up, security “nirvana” would likely not have been reached anyway. Any misstep, misconfigured system or device or mobile system introduced into the network would immediately raise the level of risk again.

Thus, the decision was made to focus not on eliminating the risk, but on minimizing it. Steps were taken to replace the known compromised systems. Scans and password changes became the order of the day, and entire segments of the network were removed from operation to minimize the risk during a particularly critical 12-hour cycle in which critical data was being processed and services performed.

Has there been some downtime? Sure. Has there been some cost? Sure. How about user and business process pain? Of course! But the impact on their organization, business bottom line and reputation has been far smaller than if they had taken the “turn and burn” approach.

Rational response to risk is what we need, not gloom and doom. Finding the holes in security will always be easy, but understanding what holes need to be prevented, wrapped in detection and protected by response is the key. Only when we can clearly communicate to management and consumers alike that we have rational approaches to solving the security problems will they likely start listening again.

3 Tips to Improve Your Organization’s Application Security

Did you know that 65% of all reported attacks in 2007 were in the application layer, according to the FBI? Applications are the new playground for hackers and with more apps being developed daily, it makes for one very tempting area for the bad guys. Let’s look at three ways you can make a difference in blocking these attacks:

  1. Integrate Application Security into the Software Development Life Cycle (SDLC). Add security to the following phases: requirements, business impact analysis, functional testing, and quality assurance. When you improve your SDLC in this way, you will catch red flags during the design phase and not later. You’ll also ensure that the security team recognizes the impact and interactions necessary for security, and increase consistency in maintaining standards.
  2. Get Proactive – Develop programming standards, embrace development frameworks, create baselines for internal and external applications, create testing procedures, and – make sure to publish this information internally.
  3. Educate Developers – This is the most important strategy. It can eliminate a significant number of vulnerabilities by providing an ongoing general awareness. Deep training for leaders will build a strong foundation for training teams who will be empowered to implement a stronger appsec program. Helping developers evaluate outdated applications, for instance, will go a long way toward preventing any potential vulnerabilities from being exploited.

SQL injection and XSS alone account for 32% of all incidents! More web applications are being developed, which means more targets for the attackers. The threats are data loss, regulatory and legal issues, a loss of customer confidence, a loss of system/network control, an increase in bot infections, phishing expeditions, and malware. By following these tips, you will significantly decrease the number of successful attacks.
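
Since SQL injection figures so heavily in those incident numbers, one concrete point from the developer-education tip is worth sketching: parameterized queries. Here is a minimal illustration using Python’s built-in sqlite3 module; the table and attack string are invented for the example, and the same idea applies to any database driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# UNSAFE: string concatenation lets attacker input rewrite the query.
user_input = "x' OR '1'='1"
unsafe_sql = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe_sql).fetchall())  # returns rows despite the bogus name

# SAFE: a parameterized query treats the input strictly as data.
safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())  # no rows: the injection string matches nothing
```

The difference is exactly the kind of thing ongoing developer education should drive home.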

Evaluating your frameworks can really help with identifying outdated software that would affect your applications, both internal and external. Should you have any questions about the tips or desire additional assistance in the design of your appsec program, please don’t hesitate to contact MSI for help.

How Default Credentials and Remote Administration Panels Can Expose Security

A recent article describes a project, led by a computer science professor at Columbia University, that conducted preliminary scans of some of the largest Internet Service Providers (ISPs) in North America, Europe, and Asia. He and his team uncovered thousands of embedded devices susceptible to attack, thanks to default credentials and remote administration panels being available to the Internet. It is amazing to us that there are still many people (and possibly organizations) who don’t take into account the security implications of not changing credentials on outward-facing devices! This goes beyond patching systems and having strong password policies. It’s highly unlikely you’re developing strong passwords internally if you’re not even changing what attackers know is true externally.

The fact that these devices are reachable is quite scary. It becomes trivial for an attacker to take control of what is likely the only gateway in a residential network. The average user has little need to access these devices on a regular basis, so setting a strong password and recording it on paper, or storing it in an encrypted container such as TrueCrypt, is a good option for reducing the threat level. More importantly, how many home users need outside access to their gateway?
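
For administrators auditing their own gear for exactly this problem, the check is simple enough to script. A minimal sketch follows; the credential list is illustrative, and `try_login` is a placeholder for whatever login attempt (for example, an HTTP Basic auth request to the device’s admin panel) suits the device in question:

```python
# Common factory-default credential pairs (illustrative, not exhaustive).
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("admin", ""),
]

def find_working_defaults(try_login, creds=DEFAULT_CREDS):
    """Return the default (user, password) pairs that try_login accepts.

    try_login is any callable taking (user, password) and returning True
    on a successful login against the device being audited.
    """
    return [(u, p) for (u, p) in creds if try_login(u, p)]

# Stand-in for a real login attempt against a device still on defaults:
fake_device = lambda u, p: (u, p) == ("admin", "password")
print(find_working_defaults(fake_device))  # [('admin', 'password')]
```

Any non-empty result means the device is sitting on credentials every attacker already knows.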

This all goes back to the common theme of being an easy target. If you let attackers see you as the low-hanging fruit, you’re just asking to become a statistic. This is the digital equivalent of walking down a dangerous street at night with your head down, shoulders slumped, avoiding eye contact, and with hundred dollar bills popping out of your pockets! We can’t make it easy for them. It’s important that we make them think twice about attacking us, and simple things like changing default passwords or patching our machines (automatic updates, anyone?) allow us to take advantage of that 80% result with only 20% effort!

Toata Scanning for Zen Shopping Cart with Brain File – Updated

If you’ve been a long time reader of this blog, then you know about our ongoing efforts to help stem the tide of web application infections. Here is another example of this effort in action.

A couple of days ago the HITME began tracking a series of new scans circulating from the Toata bot network. These new scans appear to be aimed at cataloging systems running the Zen Cart shopping application. As is usual with these tools, the cataloging appears to be automated, with exploitation occurring later from either another piece of code or human intervention.

ToataZenBrain102709.txt

Above is a link to a brain file for BrainWebScan, the Web application scanner that we produce. You can use this tool and the brain file above to scan your own servers for implementations of the Zen Cart shopping application. If you identify servers that have it installed, careful review of those systems should be conducted for signs of compromise. Reviewing the logs for the string “Toata” will identify whether the system has already been scanned by this particular attack tool. However, other attack tools are in use that do not leave specific named strings in the log files. The vulnerability these tools are ultimately seeking to exploit is unknown at this time; it may be an old vulnerability or exploit, or could potentially be a new and previously unknown one.
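
That log review is easy to automate. A minimal sketch that flags access-log lines containing the “Toata” marker; the log format and the sample scanner User-Agent shown are illustrative assumptions, so adjust for your own server:

```python
def toata_hits(log_lines):
    """Return the log lines that carry the Toata scanner's telltale string."""
    return [line for line in log_lines if "Toata" in line]

# Sample access-log lines (combined-log style, invented for the example):
sample_log = [
    '10.0.0.1 - - [27/Oct/2009] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '10.0.0.2 - - [27/Oct/2009] "GET /zencart/install.txt HTTP/1.1" 404 0 "-" '
    '"Toata scanner"',
]
for hit in toata_hits(sample_log):
    print(hit)
```

In practice you would feed this the lines of your web server’s access log and investigate any hits.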

Users of the Zen cart application are encouraged to ensure that they are following the best practices for securing the application. The application should be kept up-to-date and the Zen cart vendor website should be carefully monitored for future updates and known issues. Additional monitoring, vigilance and attention to servers running the Zen cart application should be performed at this time. It is probably not a bad idea to have these systems assessed for currently known vulnerabilities in their operating system, content management application and other web components.

If you would like assistance checking your web application, or would like a vulnerability assessment performed on it, please do not hesitate to contact us for immediate assistance.

PS: You can download BrainWebScan for Windows from here: http://dl.getdropbox.com/u/397669/BrainWebScan100Win.zip

Here is an additional set of gathered targets:

//zencart/includes/general.js
//zen/includes/general.js
//ZenCart/includes/general.js
//ZEN/admin/includes/stylesheet.css
//zen/admin/includes/stylesheet.css
//zen-cart/admin/includes/stylesheet.css
//zencart/admin/includes/stylesheet.css
//zc/admin/includes/stylesheet.css
//zshop/admin/includes/stylesheet.css
/zencart/install.txt
/zen-cart/install.txt
/zen/install.txt
/zcart/install.txt
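
Paths like those above can also be checked directly against your own servers. A sketch that builds the candidate URLs for a host; the example host is a placeholder, and the path list here is trimmed from the full set above:

```python
from urllib.parse import urljoin

# A trimmed subset of the target paths listed above (illustrative).
TARGET_PATHS = [
    "/zencart/includes/general.js",
    "/zen/includes/general.js",
    "/zencart/install.txt",
    "/zen-cart/install.txt",
]

def candidate_urls(base, paths=TARGET_PATHS):
    """Build the full URLs to probe for Zen Cart installs on a given host."""
    return [urljoin(base, p) for p in paths]

for url in candidate_urls("http://www.example.com"):
    print(url)
```

Each URL could then be requested (for example with urllib.request) and any 200 response treated as a Zen Cart install worth reviewing for compromise.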

Penetration Testing vs. Vulnerability Assessments

Some think penetration testing and vulnerability assessments are one and the same. However, this isn’t true. A penetration test is a method of evaluating the security of a computer system or network by simulating an attack by a malicious hacker. The process involves an active analysis of the system for any weaknesses, technical flaws or vulnerabilities. This analysis is carried out from the position of a potential attacker, and can involve active exploitation of security vulnerabilities. Any security issues that are found will be presented to the system owner together with an assessment of their impact and often with a proposal for mitigation or a technical solution.

A vulnerability assessment is the process of identifying and quantifying vulnerabilities in a system. In an assessment, the IT department supplies information about the system, rather than an internal or external tester hacking into the network. When a company hires us to do a vulnerability assessment, it gives the team specific parameters for the assessment.

Brent Huston, CEO for MSI says, “A penetration test cannot be expected to identify all possible security vulnerabilities, nor does it offer any guarantee that an organization’s information is secure. But penetration testing can serve as a start for pinpointing a system’s security vulnerabilities.”

So what are some of the areas a penetration tester might explore? An organization’s intranet is an attractive target. So is an internal phone system or database. What is becoming more vital than ever is a consistent schedule of testing. Penetration testing can no longer be done just once a year and still give an accurate assessment of an organization’s vulnerabilities. New exploits are released daily, and adding new services can also create the opportunity for a new breach. Let MSI arrange a subscription testing service for you!

7 Areas of Concern With Cloud Computing

One of President Obama’s major initiatives is to promote the efficient use of information technology. He supports the paperless-office ideal that hasn’t been fully realized since the Paperwork Reduction Act of 1995. Specifically mentioned is Federal use of cloud computing. So good, bad or indifferent, the government is now moving into the world of cloud computing, despite the fact that it is a new way of doing business that still has many unaddressed problems with security and the general form it is going to take.

The CTO of the Federal Cloud, under the Federal CIO Council (the Federal Chief Information Officers Council, codified in law in the E-Government Act of 2002), is Patrick Stingley. At the Cloud Computing Summit on April 29, 2009, it was announced that the government is going to use the cloud for email, portals, remote hosting and other apps that will grow in complexity as they learn about security in the cloud. They are going to use a tiered approach to cloud computing.

Here are seven problematic areas of cloud computing for which solutions need to be found:

  1. Vendor lock-in – Most service providers use proprietary software, so an app built for one cloud cannot be ported to another. Once people are locked into the infrastructure, what is to keep providers from upping the price?
  2. Lack of standards – The National Institute of Standards and Technology (NIST) is getting involved, but standards are still in development. This feeds the vendor lock-in problem, since every provider uses a proprietary set of access protocols and programming interfaces for its cloud services. Think of the effect of this on security!
  3. Security and compliance – Security offerings for data at rest and in motion are limited, and there are no agreed-upon compliance methods for provider certification (e.g., FISMA or Common Criteria). Data must be protected while at rest, while in motion, while being processed and while awaiting or during disposal.
  4. Trust – Cloud providers offer limited visibility of their methods, which limits the opportunity to build trust. Complete transparency is needed, especially for government.
  5. Service Level Agreements – Enterprise-class SLAs will be needed (e.g., 99.99% availability). How is the data encrypted? What level of account access is present and how is access controlled?
  6. Personnel – Many of these companies span the globe – how can we trust sensitive data to those in other countries? There are legal concerns such as a limited ability to audit or prosecute.
  7. Integration – Much work is needed on integrating the cloud provider’s services with enterprise services and making them work together.
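
To make the SLA concern in item 5 concrete, “four nines” leaves surprisingly little room for outages. A quick calculation of allowed downtime per year at a given availability level:

```python
def allowed_downtime_minutes(availability, period_hours=365 * 24):
    """Minutes of permitted downtime per period at a given availability."""
    return period_hours * 60 * (1 - availability)

for a in (0.999, 0.9999):
    print(f"{a:.2%} availability -> {allowed_downtime_minutes(a):.1f} min/year")
```

At 99.99% availability, a provider is allowed only about 53 minutes of downtime per year, which is why enterprise-class SLAs are such a sticking point.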

Opportunities abound for those who desire to guide cloud computing. Those concerned with keeping cloud computing an open system drafted an Open Cloud Manifesto, arguing that a straightforward conversation needs to occur in order to avoid potential pitfalls. Stay alert as the standards develop, and contribute if possible.