Responsible Disclosure Is Wrong

The debate around how, where, and when to disclose a vulnerability – and of course to whom – is nearly as old as the industry that spawned the vulnerabilities. This debate will likely continue as long as humans are writing software. Unfortunately, the debate is hampered by poor terminology.

Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details.

This is how Wikipedia defines responsible disclosure – but the person who coined the term has stepped away from it, advocating “coordinated disclosure” instead. To me, this makes perfect sense: if you go by the Wikipedia definition, coordinated disclosure is a better fit, as it closely matches how the term is used by vendors. The problem is that coordinated disclosure isn’t necessarily responsible – and full disclosure isn’t necessarily irresponsible.

The term is a bit of a misnomer, really – as researchers, our responsibility is to users, though the term is often read as meaning a responsibility to vendors. This is the biggest issue I have with it: it focuses attention on the wrong group in a disclosure. As a security researcher, my responsibility is to make the world a safer place and to protect users, not to protect the name of a vendor.

Based on this, I would say that responsible disclosure is wrong – or more accurately, how it’s been defined is wrong. As defined, what we get from the term is a one-sided view of the disclosure process and its goals. I like the term, but the definition doesn’t suit the reality of vulnerability disclosure.

Sensible Disclosure

Perhaps we need a better term to describe real-world disclosures – full disclosure and coordinated disclosure are both potential outcomes of a sensible disclosure process. Every step in the decision process needs to factor in what’s best for users.

Let’s look at what a truly responsible disclosure decision process needs to include – decisions that are ignored by the simplistic definition of responsible disclosure used today. This list is far from complete, of course; these are just high-level questions that need consideration. There are also situation-specific factors that can have a dramatic impact on what the right decision is.

Can users take action to protect themselves? If you release details publicly, are there concrete steps that individual users and companies can take to protect themselves?

Is it being actively exploited? If a vulnerability is being actively exploited, the focus has to shift to minimizing damage – this can drastically change the weight of the other factors.

Is the issue found with minimal effort? Is the vulnerability genuinely difficult to find, or something that anyone would notice if they looked in the right place? If it’s the latter, how likely is it that others have already found it and are using it maliciously? Even if you have no evidence that it’s being exploited in the wild, it’s still quite possible, and that needs to be considered.

Is the issue something that can be corrected without major effort? Some issues are simple – a few lines of code and the problem is gone; others are difficult, or even impossible, to fix.

If patched today, how are users impacted? With some flaws, you apply a patch and you’re done; with others, there is still cleanup to do. For example, errors in cryptography can mean that messages aren’t protected as they should be, so every message encrypted before the fix expands the exposure and increases risk – risk that isn’t addressed simply by patching. There is also the related issue of backwards compatibility: fixing the flaw may break existing systems or require substantial cleanup (think re-encrypting large amounts of data).

Is the vendor responsive? Vendors have a responsibility to respond quickly, and to act quickly to address reported vulnerabilities. Are they responding at all? Are they trying to fix the issue, or just trying to keep it away from the press for as long as possible? If the vendor isn’t acting, is more pressure needed to get the issue resolved? One more important point when evaluating vendor response: they are likely addressing other issues as well, some of which may be more severe; something you consider critical may deserve less attention than bugs you aren’t aware of.

How severe is the issue? Is this an earth-shattering vulnerability that would drive those affected to take drastic action if they knew about it, or is it something minor – a real issue, but not one deserving of panic?

Sensible Disclosure should include evaluating all of these factors, then proceeding to coordinated disclosure, full disclosure, or some hybrid, based on what’s best for those impacted. It’s full of nuance and subtlety; it’s not the black-and-white “tell us, only us, and we’ll tell you when you can say something” policy that vendors like, but it provides a path to acting in the interest of users first. A rough sketch of this weighing follows.
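To make that weighing concrete, here’s a minimal sketch in Python of how the questions above might feed into a decision. Everything in it is hypothetical: the Assessment fields, the thresholds, and the suggested paths are illustrative simplifications of my own invention, not a policy to encode literally.

```python
# A deliberately simplified model of the questions above. All names,
# fields, and thresholds are hypothetical illustrations; real disclosure
# decisions involve far more nuance than any flowchart.
from dataclasses import dataclass


@dataclass
class Assessment:
    """A researcher's answers to the high-level questions above."""
    users_can_mitigate: bool    # Can users act on a public advisory?
    actively_exploited: bool    # Is the flaw being exploited in the wild?
    trivial_to_find: bool       # Would anyone looking in the right place find it?
    easy_to_fix: bool           # A few lines of code, or a redesign?
    patch_leaves_residue: bool  # Does patching leave cleanup (e.g. re-encryption)?
    vendor_responsive: bool     # Is the vendor engaging and acting?
    severity: int               # 1 (minor annoyance) .. 10 (earth-shattering)


def suggest_path(a: Assessment) -> str:
    """Suggest a disclosure path based on a rough weighing of the factors."""
    # Active exploitation shifts the focus to minimizing damage; if users
    # can protect themselves, getting the word out quickly matters most.
    if a.actively_exploited and a.users_can_mitigate:
        return "full disclosure: users need the details to protect themselves now"
    if a.vendor_responsive:
        # Fixes that leave cleanup behind (re-encrypting data, breaking
        # backwards compatibility) justify a longer coordination window.
        if a.patch_leaves_residue:
            return "coordinated disclosure with a generous window"
        # An easy fix for an easy-to-find bug deserves a short deadline,
        # since others may already have found it independently.
        if a.easy_to_fix and a.trivial_to_find:
            return "coordinated disclosure with a short deadline"
        return "coordinated disclosure"
    # An unresponsive vendor may need public pressure, weighed against
    # the new risk that disclosure itself creates for users.
    if a.severity >= 7 or a.trivial_to_find:
        return "full disclosure: vendor inaction is leaving users exposed"
    return "keep pressing the vendor, and escalate if nothing changes"


if __name__ == "__main__":
    print(suggest_path(Assessment(
        users_can_mitigate=True, actively_exploited=False,
        trivial_to_find=True, easy_to_fix=True,
        patch_leaves_residue=False, vendor_responsive=False,
        severity=8,
    )))  # -> full disclosure: vendor inaction is leaving users exposed
```

Even this toy version makes the point: the same questions can lead to coordinated disclosure, full disclosure, or something in between, depending on the answers – which is exactly the nuance the standard definition of responsible disclosure ignores.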

Users First, Always

There is nothing wrong with coordinated disclosure – in fact, it should be the goal: a quick vendor response that protects users as rapidly as possible, with minimal or no malicious use of the flaw. Generally speaking, contacting the vendor should be the first step; hopefully they act quickly, and the rest of the process is easy. Sometimes, though, they don’t, and full disclosure is the only option to get them to act. Sometimes the delay of working with the vendor would itself put people at risk.

For a security researcher, full disclosure should generally be the last resort, pulled out when working with the vendor has failed. There are some cases where getting the word out quickly is more important, though – it depends on the vulnerability and its impact on those affected.

Each and every decision made in a disclosure process should be focused on the users and what protects them best. Some vulnerabilities require so much research, and are so difficult to exploit, that taking a year to quietly fix them is fine; with others, every day that goes by moves users closer to disaster. Most fall somewhere in between.

There is no one-size-fits-all solution for vulnerability disclosure. The simplistic “responsible disclosure” doesn’t address the subtleties that are actually involved. The term Sensible Disclosure may be closer to reality, though I don’t like it as much.

Be responsible, protect users — practice sensible disclosure.
