
Rediscovering Vulnerabilities

By Trey Herr, Bruce Schneier
Friday, July 21, 2017, 11:30 AM

Software and computer systems are a standard target of intelligence collection in an age when everything from your phone to your sneakers has been turned into a connected computing device. A modern government intelligence organization must maintain access to some software vulnerabilities in order to target these devices. However, the WannaCry ransomware and NotPetya attacks have called attention to the perennial flip side of this issue: the same vulnerabilities the U.S. government uses to conduct this targeting can also be exploited by malicious actors if they go unpatched.

That makes it important to consider the likelihood that multiple parties will independently discover the same vulnerability, a process we call rediscovery. But until this year, there was little hard evidence on how often it happens. This lack of data has been a problem, because the rate of rediscovery should be a key input to a government's decision whether or not to disclose a vulnerability. That decision process, called the Vulnerabilities Equities Process (VEP), is conceptually simple. The government's offensive interest in a software flaw lies in its value for collecting information or compromising systems in ways that would protect citizens. The companies that make this software have an equal interest in these vulnerabilities: fixing them in order…to protect citizens (and all users).

Together with an incredibly talented rising senior in Harvard’s computer science department, we recently published the final version of a paper that analyzes a dataset of more than 4,300 vulnerabilities and estimates vulnerability rediscovery across different vendors and software types. The paper concludes that rediscovery happens more often than previously reported. For our dataset, 15% to 20% of vulnerabilities are discovered independently at least twice within a year. For the Android data alone, 13.9% of vulnerabilities are rediscovered within 60 days, rising to 19% within 90 days and above 21% within 120 days. Chrome sees 12.87% rediscovery within 60 days, and the aggregate rate for our entire dataset generally rises over the eight-year span, topping out at 19.6% in 2016.

When combined with an estimate of the total count of vulnerabilities in use by the NSA, these rates suggest that rediscovery of vulnerabilities kept secret by the U.S. government may be the source of as many as one-third of all zero-day vulnerabilities detected in use each year. These results suggest an opportunity to sharpen the focus, and resulting value, of bug bounty programs. The paper’s findings also indicate that policymakers should more rigorously evaluate the costs of, and requirements for, non-disclosure of software vulnerabilities.
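The back-of-envelope logic behind that kind of estimate can be sketched in a few lines. Every number below is a placeholder we invented for illustration, not a figure from the paper or any public estimate of the NSA's holdings:

```python
# Hypothetical illustration of the back-of-envelope estimate.
# All three inputs are invented placeholders, not figures from the paper.
stockpile_size = 50        # assumed count of vulnerabilities kept secret
annual_rediscovery = 0.18  # assumed yearly rediscovery rate (within the 15-20% range)
zero_days_detected = 30    # assumed zero-days detected in use per year

# Expected number of secret vulnerabilities independently found by others
expected_rediscoveries = stockpile_size * annual_rediscovery
share = expected_rediscoveries / zero_days_detected
print(f"Expected rediscoveries per year: {expected_rediscoveries:.1f}")
print(f"Share of detected zero-days attributable to rediscovery: {share:.0%}")
```

With these made-up inputs, rediscovery alone would account for roughly a third of detected zero-days; the real estimate depends entirely on how large the stockpile actually is.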

Independently, the RAND Corporation published an excellent study earlier this year that looks at related issues, but the two papers ask slightly different questions. Our study addresses the risk associated with discovery of a vulnerability by a second party; this speaks to gauging the merits of keeping a vulnerability secret and determining the costs of non-disclosure. The RAND paper, on the other hand, looks specifically at managing an intelligence agency-like stockpile of vulnerabilities against discovery of those same flaws in the public domain, and delves into detail on the cost and labor associated with developing a working exploit. For those interested in a deeper discussion, authors from both papers will join several vulnerability wonks on a panel at the Black Hat conference in Las Vegas next week.

This research has value beyond calibrating the government’s disclosure of vulnerabilities. The rate of rediscovery can help researchers estimate the product lifecycle of malicious software. Rediscovery shortens the lifespan of a vulnerability: the likelihood of its being disclosed to or discovered by the vendor grows with every instance of rediscovery. Just as a supermarket must restock bread far more often than salted herring, some vulnerabilities are likely to “go stale,” and thus lose their value, much faster than others. Higher rates of rediscovery will drive greater churn, as exploit kits and other products that depend on vulnerabilities need to be refreshed more rapidly.
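Treating rediscovery as an independent chance in each period, the "going stale" intuition can be sketched as a simple survival curve. The per-period rates below are assumed parameters for illustration, not figures from the paper, and independence across periods is a simplifying assumption:

```python
# Probability that a secret vulnerability remains unrediscovered after t periods,
# modeling rediscovery as an independent per-period event (a simplifying assumption).
def survival_probability(per_period_rate: float, periods: int) -> float:
    return (1.0 - per_period_rate) ** periods

# A flaw facing a 20% per-period rediscovery rate goes stale far faster
# than one facing a 5% rate (illustrative parameters only):
for rate in (0.05, 0.20):
    curve = [round(survival_probability(rate, t), 2) for t in range(1, 5)]
    print(f"rate={rate:.0%}: survival over 4 periods = {curve}")
```

The compounding is the point: even modest per-period rediscovery rates erode a vulnerability's expected secrecy quickly, which is what drives the churn in exploit kits described above.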

Rediscovery can also help shape patching behavior. Bug bounties drive vulnerability disclosure to firms and are a major way companies identify bugs to patch. The volume of disclosed bugs can be overwhelming, leading companies to prioritize the completion of some patches over others. Rediscovery rates can inform that prioritization, pushing bugs with a higher chance of being independently discovered during the patching process toward the top of the list. Rediscovery rates as high as 21% in this paper help set expectations for companies running bounty programs, establishing a potential upper bound, especially for vulnerabilities in open source software.
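A triage queue built on this idea might simply sort open bugs by estimated rediscovery risk. The bug identifiers and rates below are invented for illustration; real triage would weigh severity and exploitability alongside rediscovery:

```python
# Hypothetical patch-triage sketch: sort open bugs by estimated probability
# of independent rediscovery (all identifiers and rates are invented).
bugs = [
    {"id": "BUG-101", "rediscovery_rate": 0.04},
    {"id": "BUG-102", "rediscovery_rate": 0.21},
    {"id": "BUG-103", "rediscovery_rate": 0.13},
]

# A higher rediscovery rate means a greater chance someone else finds the flaw
# while the patch is still in progress, so it moves to the top of the queue.
queue = sorted(bugs, key=lambda b: b["rediscovery_rate"], reverse=True)
print([b["id"] for b in queue])  # BUG-102 first, BUG-101 last
```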

The goal of this piece is not to relitigate the vulnerability equities debate, much of which can be found in the pages of this and similar venues. Our purpose is to call for data and more rigorous analysis of what it costs the intelligence community when it discloses a vulnerability and what it costs citizens and users when it doesn’t. Rather than continuing a semi-circular debate, the community of interest around vulnerabilities and their use or retention by the government would be better served by evidence than by speculation and anecdote.
