The debate around how, where, and when to disclose a vulnerability - and of course to whom - is nearly as old as the industry that spawned the vulnerabilities. This debate will likely continue as long as humans are writing software. Unfortunately, the debate is hampered by poor terminology.
Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details.
This is how Wikipedia defines responsible disclosure, but the person who coined the term has stepped away from it, advocating “coordinated disclosure” instead. To me, this makes perfect sense: if you go by the Wikipedia definition, coordinated disclosure is a better fit, and it closely matches how the term is being used by vendors. The problem is that coordinated disclosure isn’t necessarily responsible, and full disclosure isn’t necessarily irresponsible.
The term is a bit of a misnomer, really: as researchers, our responsibility is to users, though the term is often read as meaning a responsibility to vendors. This is the biggest issue I have with it; it focuses attention on the wrong group in a disclosure. As a security researcher, my responsibility is to make the world a safer place and to protect users, not to protect the name of a vendor.
Based on this, I would say that responsible disclosure is wrong, or more accurately, that how it’s been defined is wrong. As defined, what we get from the term is a one-sided view of the disclosure process and its goals. I like the term, but the definition doesn’t suit the reality of vulnerability disclosure.
Perhaps we need a better term to describe real-world disclosures; call it sensible disclosure. Full disclosure and coordinated disclosure are both potential outcomes of a sensible disclosure process, and every step in the decision process needs to factor in what’s best for users.
Let’s look at what a truly responsible disclosure decision process needs to include: decisions that are ignored by the simplistic definition responsible disclosure carries today. This list is far from complete, of course; these are just high-level questions that need consideration. There are also situation-specific factors that can have a dramatic impact on what the right decision is.
Can users take action to protect themselves? If details are released publicly, are there concrete steps that individual users and companies can take to protect themselves?
Is it being actively exploited? If a vulnerability is being actively exploited, the focus has to shift to minimizing damage; this can drastically change the weight of the other factors.
Is the issue found with minimal effort? Is the vulnerability something difficult to find, or something anyone would notice if they looked in the right place? If it’s the latter, how likely is it that others have already found it and are using it maliciously? Even without evidence of exploitation in the wild, it’s still quite possible, and that needs to be considered.
Is the issue something that can be corrected without major effort? Some issues are simple: a few lines of code and the flaw is gone. Others are difficult, or even impossible, to fix.
If patched today, how are users impacted? With some flaws, you apply a patch to the code and you are done; with others, there is still cleanup to do. For example, errors in cryptography can mean that messages aren’t protected as they should be, so every message encrypted before the fix expands the exposure and adds risk that the patch alone doesn’t address. There is also the related issue of backwards compatibility: fixing the flaw may break systems, or may require substantial cleanup (think re-encrypting large amounts of data).
Is the vendor responsive? Vendors have a responsibility to respond quickly, and to act quickly on reported vulnerabilities. Are they responding at all? Are they trying to address the issue, or just trying to keep it away from the press for as long as possible? If the vendor isn’t acting, is more pressure needed to get the issue resolved? One more thing to remember when evaluating vendor response: they are likely addressing other issues as well, some of which may be more severe, so something you think is critical may deserve less attention than bugs you aren’t aware of.
How severe is the issue? Is this an earth-shattering vulnerability that would have those affected taking drastic action if they knew, or is it something minor: a real issue, but not one deserving of panic?
Sensible Disclosure should include evaluating all of these factors, then proceeding to coordinated disclosure, full disclosure, or some hybrid, based on what’s best for those impacted. This is full of nuance and subtlety; it’s not the black-and-white “tell us, only us, and we’ll tell you when you can say something” policy that vendors like, but it provides a path to acting in the interest of users first.
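To make the shape of that evaluation concrete, here is a minimal sketch in Python. The factor names, the rules, and the recommended outcomes are all my own illustrative assumptions, not a standard; no real disclosure decision should be reduced to a lookup like this.

```python
from dataclasses import dataclass

# Illustrative only: these factors mirror the questions above, but the
# decision rules and outcome strings are assumptions made for this sketch.

@dataclass
class DisclosureFactors:
    users_can_mitigate: bool    # can users protect themselves if details go public?
    actively_exploited: bool    # is the flaw being used maliciously right now?
    trivial_to_find: bool       # would anyone looking in the right place spot it?
    easy_to_fix: bool           # a few lines of code, or a redesign?
    patch_leaves_residue: bool  # e.g. data already encrypted under a broken scheme
    vendor_responsive: bool     # is the vendor engaging and acting?
    severity_critical: bool     # earth-shattering, or minor?

def recommend(f: DisclosureFactors) -> str:
    """Return a rough disclosure path based on what protects users best."""
    # Active exploitation shifts the focus to minimizing damage: if users
    # can act on the information, withholding it mostly helps attackers.
    if f.actively_exploited and f.users_can_mitigate:
        return "full disclosure, with mitigation guidance"
    # An unresponsive vendor plus a severe or easily rediscovered flaw
    # means waiting quietly leaves users exposed.
    if not f.vendor_responsive and (f.severity_critical or f.trivial_to_find):
        return "escalate; full disclosure if the vendor still will not act"
    # A responsive vendor is the happy path, but hard fixes and cleanup
    # (re-encryption, compatibility breaks) argue for a longer timeline.
    if f.vendor_responsive:
        if f.patch_leaves_residue or not f.easy_to_fix:
            return "coordinated disclosure with an extended, negotiated timeline"
        return "coordinated disclosure with a standard deadline"
    return "keep pressuring the vendor; reassess as the factors change"

if __name__ == "__main__":
    print(recommend(DisclosureFactors(
        users_can_mitigate=True, actively_exploited=False,
        trivial_to_find=True, easy_to_fix=True,
        patch_leaves_residue=False, vendor_responsive=True,
        severity_critical=False,
    )))  # -> coordinated disclosure with a standard deadline
```

The point of the sketch is the ordering, not the rules themselves: active exploitation and vendor responsiveness dominate, and every branch asks what best protects users rather than what best protects the vendor.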
Users First, Always
There is nothing wrong with coordinated disclosure; in fact, it should be the goal: quick vendor response, protecting users as quickly as possible with minimal or no malicious use of a flaw. Generally speaking, contacting the vendor should be the first step, and hopefully they act quickly and the rest of the process is easy. Sometimes, though, they don’t, and full disclosure is the only option to get them to act. Sometimes the delay of working with the vendor would itself put people at risk.
For a security researcher, full disclosure should generally be the last resort, pulled out when working with the vendor has failed. There are some cases where getting the word out quickly is more important, though; it depends on the vulnerability and its impact on those affected.
Each and every decision made in a disclosure process should be focused on the users and what protects them best. Some vulnerabilities require so much research, and are so difficult to exploit, that taking a year to quietly fix them is fine; with others, every day that goes by moves users closer to disaster. Most fall somewhere in between.
There is no one-size-fits-all solution for vulnerability disclosure; it’s that simple. “Responsible disclosure” doesn’t address the subtleties that are actually involved. The term Sensible Disclosure may be closer to reality, though I don’t like it as much.
Be responsible, protect users — practice sensible disclosure.