Web App Security Best Practices and the People Who Love Them
Creating more secure software is more important than simply buying more security software, says WhiteHat Security's Jeremiah Grossman.
Wed, October 30, 2013
CSO — When a website is attacked, the results can be devastating to an organization -- both financially and from a brand perspective. Given modern society's ever-increasing reliance on the Web, the impact of a breach and the associated costs are going up, and fast. Adding more "robust" firewalls or allocating more budget to anti-virus protection is not the answer. These are still important steps, sure, but such controls provide nearly zero protection against today's web-based attacks.
What many IT and security professionals are now recognizing is that there is a much bigger software security problem at play here. Therefore, what is needed is more secure software, NOT more security software.
In a recent survey of some of these forward-thinking security professionals across a cross section of industries -- spanning banking and financial services, retail and entertainment, healthcare, media and more -- we discovered that striking the right balance between best practices and accountability can make a huge difference in moving the needle at their organizations toward more secure software, rather than strictly focusing on buying more security software.
Let's take a look at best practices first. If the goal is to create more secure software, reason dictates that developer training and testing early in the software development lifecycle would be a natural approach to the problem. Looking at our data, we found:
- 57 percent of organizations said they provide some amount of instructor-led or computer-based software security training for their programmers. These organizations experienced 40 percent fewer vulnerabilities, resolved them 59 percent faster, but exhibited a 12 percent lower remediation rate.
This appears to prove our theory at least partially correct. However, going one level deeper than training, we looked at the measures taken to test applications before they go into production. While many organizations (85 percent of those surveyed, in fact) said they perform some amount of application security testing in pre-production website environments (i.e., staging, QA or other pre-prod systems), only 39 percent test earlier in the development life cycle and perform any Static Code Analysis. Surprisingly, these organizations experienced 15 percent more vulnerabilities, resolved them 26 percent slower, and had a 4 percent lower remediation rate.
So why are those performing SCA early in the process seeing an increase in vulnerabilities? This is certainly counter to our theory. Why is it that for some customers there is a measurable, positive impact, yet for others there is no discernible benefit? Perhaps developers do not understand the issues well enough, and would require additional training to understand what needs fixing and how. Or it could be that developers are often reassigned to other tasks (building product features, etc.) rather than fixing vulnerabilities. There are a number of possible reasons; however, we believe the difference comes down to the fact that there are few, if any, truly universal application security best practices.
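To make the "developers may not understand what needs fixing" point concrete, here is a minimal, hypothetical illustration (ours, not from the survey) of the kind of finding a static analyzer typically flags: SQL built by string concatenation versus the parameterized-query fix. A developer who has never been shown the second form may not know how to remediate the first.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Remediated: a parameterized query keeps the input as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
# The injection payload dumps every row from the unsafe function...
print(len(find_user_unsafe(conn, payload)))  # 2
# ...but matches nothing when the query is parameterized.
print(len(find_user_safe(conn, payload)))    # 0
```

A static analysis tool can point at the first function all day; without training, the developer still has to recognize why the second function is the fix.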