Ron Charette, CISSP
In March 2009, OWASP surveyed 50 companies and found that 61% of those surveyed had an independent third-party security review of software code to find flaws before web applications went live[1]. This was a new kind of survey, one that attempted to derive benchmarks relating software development practices to security spending. Earlier in the month, Jeremiah Grossman of WhiteHat Security Inc. put out a call for metrics on Twitter: "Impossible to know what works without outcome based metrics. Limited data on what happened, less on how, and neither is tied[3]." Grossman is on a tear, and we in the security community (and, ironically, the user community as well) support him.
It is for the above reasons that this methodology and schema are proposed: something that may be built on and used to capture metrics (at this point I would call them estimates, but it is a beginning) for quantifying costs against the effects of the software cycle. Dare I call it "cost and effect"?
*Which Beans are we Counting?*
To capture the data needed to tie results back through the software development and security testing cycles, metrics should be recorded for the following phases:
- Baseline (optional at this time)
- Configuration Changes
- Vulnerabilities Found
- Remediation
- Mitigation
- Incident Response
- Budgeting
Within the above phases, it would then be necessary to capture the following attributes (more to follow):
- Type/Category
- Date Completed
- Severity (phase specific)
- Rationale (phase specific)
- Itemized Cost
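As an illustration of how a single captured record might look, a sketch follows. The element names, attribute values, and the `currency` attribute are hypothetical, derived from the phase and attribute lists above rather than from any final schema:

```xml
<!-- Hypothetical record; names and values are illustrative only -->
<MetricRecord phase="VulnerabilitiesFound">
  <Type>SQL Injection</Type>
  <DateCompleted>2009-03-15</DateCompleted>
  <Severity>High</Severity>
  <Rationale>Unsanitized input on login form</Rationale>
  <ItemizedCost currency="USD">1200.00</ItemizedCost>
</MetricRecord>
```

Severity and Rationale would be present or absent depending on the phase, per the "(phase specific)" notes above.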
*What problems does having this data solve?*
Through the collection of the above, a historical record will emerge that provides the information holder with strongly typed information, which may then be transformed into reports or decision points for future budgetary, software, and security-based concerns.
In its present form, it is hard not to think that capturing a little more data would add much more value. The benefit of this methodology is that it can scale to virtually any organization or data set. The complement would be to design a schema flexible enough to support multiple revisions (interoperability) on a technology able to fully harness the information, such as a web service (accessibility). Having this data readily available and interoperable in a universal form could provide a very powerful platform.
*Factoring Budget Considerations*
The most important function of this exercise is also one of the most complex. In the development of software, acquisition cycles are often specialized and, as a result, intrinsically laden with processes that do not lend themselves to producing a single itemized cost. For this reason, a schema is provided to address these accounting functions and to (hopefully) assist in assigning cost to the metric model.
Worthy of note: the Budgeting schema is not for the costs already itemized within the phases, but for a roll-up of organizational or programmatic costs. The intent is to capture these costs and distribute them over the remaining phases once they are sufficiently known and isolated.
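One way the roll-up and distribution described above might be expressed is sketched below. The `Distribution`/`Allocation` elements and percentage mechanism are an assumption of this sketch, not part of the proposal:

```xml
<!-- Hypothetical sketch: one programmatic cost distributed over phases.
     Allocation percentages across phases are assumed to sum to 100. -->
<Budgeting>
  <Item>
    <Type>StaticAnalysisToolLicense</Type>
    <DateCompleted>2009-03-01</DateCompleted>
    <Rationale>Annual license covering all projects</Rationale>
    <ItemizedCost currency="USD">10000.00</ItemizedCost>
    <Distribution>
      <Allocation phase="VulnerabilitiesFound" percent="60"/>
      <Allocation phase="Remediation" percent="40"/>
    </Distribution>
  </Item>
</Budgeting>
```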
*In Support of an XML Schema*
After considering the above, the author naturally concludes that the typing and scalability of XML lend themselves well to data collected and harnessed in this manner. The exact schema is to follow.
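Pending the exact schema, a minimal XSD sketch of a single record is given here to make the "strongly typed" claim concrete; every element name below is an illustrative placeholder drawn from the attribute list, and the optionality choices are assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Hypothetical sketch only; not the final schema -->
  <xs:element name="MetricRecord">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Type" type="xs:string"/>
        <xs:element name="DateCompleted" type="xs:date"/>
        <!-- Severity and Rationale are phase specific, hence optional here -->
        <xs:element name="Severity" type="xs:string" minOccurs="0"/>
        <xs:element name="Rationale" type="xs:string" minOccurs="0"/>
        <xs:element name="ItemizedCost" type="xs:decimal"/>
      </xs:sequence>
      <xs:attribute name="phase" type="xs:string" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Using `xs:date` and `xs:decimal` rather than plain strings is what would give downstream reporting tools the typed data the previous sections call for.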
*Bibliography*
[1] http://www.owasp.org/images/b/b2/OWASP_SSB_Project_Report_March_2009.pdf
[2] http://searchsecurity.techtarget.com/news/article/0,289142,sid14_gci1351731,00.html?track=sy160
[3] http://twitter.com/jeremiahg/statuses/1263037965
Acknowledgment to Jason Oliver for providing the fire to work this through.
Proposal of Web Application Security Metric Framework to Compliance/Configuration Management Vendors (Altiris, BMC, Rational, et al) by Ron Charette is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License