
Summary Bullets:
- The recent DBIR controversy over a seemingly flawed top 10 list is an opportunity to highlight that data-driven security research is no panacea for breach prevention.
- Data-driven security research shouldn’t be a drive to develop conclusions; it should be an attempt to foster discussion and collaboration.
The annual release of the Verizon Data Breach Investigations Report is usually widely anticipated and well received for its data-driven insights on which attack techniques led to successful data breaches in the previous year, and what preventative actions enterprises might undertake to avoid future attacks.
This year’s report, however, has drawn unusual criticism because the authors’ list of the top 10 most exploited vulnerabilities (in successful breaches) seemed flawed to many vulnerability experts.
One vulnerability in Verizon’s list, called FREAK, was a key point of contention. In reality, it would be virtually impossible to exploit FREAK to the extent Verizon claimed: not only does it require near-supercomputer-level computational power possessed by few outside of the NSA, but because FREAK is by nature a man-in-the-middle attack, it also depends on another exploit to be effective.
The controversy itself has been widely discussed, and while Verizon may have made some questionable decisions in how it selected and interpreted its vulnerability data, the episode shows why data-driven research is no panacea when it comes to figuring out how to stop attackers. The simple truth is that data-driven research is conducted by humans, who make subjective decisions about data gathering, correlation, interpretation, and presentation. Some level of inherent bias is unavoidable, even among the most sincere researchers.
The unfortunate result of this year’s DBIR controversy is likely a reduced level of confidence in what has become an incredibly useful reference for understanding how data breaches actually happen. That would be a shame. Verizon deserves some criticism here because it changes its data-gathering parameters from year to year without sufficient transparency, which leaves readers with unanswered questions and makes independent verification difficult. But many critics who question the validity of the entire DBIR aren’t being fair; alleged issues with Verizon’s vulnerability research don’t invalidate unrelated findings drawn from separate data sets that offer valuable insights.
It’s a great irony that even though we live in the Information Age, truly insightful, elucidating information is harder than ever to discern. The constant bombardment of data, both raw and manipulated, makes it nearly impossible to trust data at face value without knowing how it was collected, how it was analyzed, and why it was interpreted the way it was to produce a finding.
The lesson for researchers and vendors alike is that data-driven security research should not only be a drive to develop conclusions, but should also foster discussion and collaboration. Verizon, to its credit, has done this better than most, but it and all researchers should strive to be as open as possible about their conclusions and the methods used to develop them.
The best answer is more transparency. Let the data sets do the talking: some data may not lead to an obvious conclusion, or may lead to several. That’s OK. Security researchers should show the data and the parameters and let the discussion ensue.
The DBIR will remain critically important to advancing breach prevention, as it should, but more transparency and collaboration in security research will lead to better conclusions and ultimately better strategies for thwarting breaches.