Adam Caudill

Security Leader, Researcher, Developer, Writer, & Photographer

Responsible Disclosure Is Wrong

The debate around how, where, and when to disclose a vulnerability – and of course to whom – is nearly as old as the industry that spawned the vulnerabilities. This debate will likely continue as long as humans are writing software. Unfortunately, the debate is hampered by poor terminology.

Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details.

This is how Wikipedia defines responsible disclosure — but the person who coined the term has stepped away from it, advocating “coordinated disclosure” instead. To me, this makes perfect sense: if you go by the Wikipedia definition, coordinated disclosure is a better fit, and it closely matches how the term is actually used by vendors. The problem is that coordinated disclosure isn’t necessarily responsible — and full disclosure isn’t necessarily irresponsible.

The term is a bit of a misnomer really — as researchers, our responsibility is to users, though the term is often taken to mean a responsibility to vendors. This is the biggest issue I have with it: it focuses attention on the wrong group in a disclosure. As a security researcher, my responsibility is to make the world a safer place and to protect users, not to protect the name of a vendor.

Based on this, I would say that responsible disclosure is wrong – or more accurately, how it’s been defined is wrong. As defined, what we get from the term is a one-sided view on the disclosure process and its goals. I like the term, but the definition doesn’t suit the reality of vulnerability disclosure.

Sensible Disclosure

Perhaps we need a better term to describe real world disclosures – full disclosure and coordinated disclosure are both potential outcomes of the sensible disclosure process. Every step in the decision process needs to factor in what’s best for users.

Let’s look at what a truly responsible disclosure decision process needs to include – decisions that are ignored by the simplistic definition used by responsible disclosure today. This list is far from complete, of course; these are just high-level questions that need consideration, and there are situation-specific factors that can have a dramatic impact on what the right decision is.

Can users take action to protect themselves? If you release details publicly, are there concrete steps that individual users and companies can take to protect themselves?

Is it being actively exploited? If a vulnerability is being actively exploited, the focus has to shift to minimizing damage – this can drastically change the weight of the other factors.

Is the issue found with minimal effort? Is the vulnerability something difficult to find, or something that anyone would notice if they looked in the right place? If it’s something that anyone would notice, how likely is it that others have already found it and are using it maliciously? Even if you don’t have evidence that something is being exploited in the wild, it’s still quite possible, and this needs to be considered.

Is the issue something that can be corrected without major effort? Some issues are simple — a few lines of code and it’s gone; others range from difficult to impossible to fix.

If patched today, how are users impacted? With some flaws, you apply a patch to the code and you are done — with others, there is still cleanup to do. For example, errors in cryptography can mean that messages aren’t protected as they should be; every message encrypted compounds the issue and increases risk — risk that isn’t addressed once the code is patched. There is also the related issue of backwards compatibility — fixing the flaw may break systems, or require substantial cleanup (think re-encrypting large amounts of data).

Is the vendor responsive? Vendors have a responsibility to respond quickly, and to take action quickly to address reported vulnerabilities. Are they responding at all? Are they trying to address the issue, or just trying to keep it away from the press for as long as possible? If the vendor isn’t acting, is more pressure needed to get the issue resolved? Another important consideration when evaluating vendor response: they are likely addressing other issues as well, some of which may be more severe. Something that you think is critical may deserve less attention than other bugs you aren’t aware of.

How severe is the issue? Is this an earth-shattering vulnerability that would prompt drastic action from those affected if they knew about it, or is it something minor — an issue, but not one deserving of panic?

Sensible Disclosure should include evaluating all of these factors, then proceeding to coordinated disclosure, full disclosure, or some hybrid — based on what’s best for those impacted. This is full of nuance and subtlety — it’s not the black-and-white “tell us, only us, and we’ll tell you when you can say something” policy that vendors like, but it provides a path to acting in the interest of users first.

Users First, Always

There is nothing wrong with coordinated disclosure — it should be the goal: quick vendor response, protecting users as fast as possible with minimal or no malicious use of a flaw. Generally speaking, contacting the vendor should be the first step; hopefully they act quickly and the rest of the process is easy. Sometimes, though, they don’t, and full disclosure is the only option to get them to act. Sometimes the delay of working with the vendor would itself put people at risk.

For a security researcher, in general, full disclosure should be the last resort, pulled out when working with the vendor has failed. There are some cases where getting the word out quickly is more important though — it depends on the vulnerability, and the impact to those affected.

Each and every decision made in a disclosure process should be focused on the users, and what protects them best. Some vulnerabilities require so much research, and are so difficult to exploit, that taking a year to quietly fix them is fine. For others, every day that passes moves users closer to disaster; most fall somewhere in between.

There is no one-size-fits-all solution for vulnerability disclosure. The simple “responsible disclosure” doesn’t address the subtleties that are actually involved. The term Sensible Disclosure may be closer to reality, though I don’t like it as much.

Be responsible, protect users — practice sensible disclosure.
