Adam Caudill

Independent Security Researcher & Software Developer

Responsible Disclosure Is Wrong

The debate around how, where, and when to disclose a vulnerability - and of course to whom - is nearly as old as the industry that spawned the vulnerabilities. This debate will likely continue as long as humans are writing software. Unfortunately, the debate is hampered by poor terminology.

Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details.

This is how Wikipedia defines responsible disclosure — but the person who coined the term has stepped away from it, advocating “coordinated disclosure” instead. To me, this makes perfect sense - if you go by the Wikipedia definition, coordinated disclosure is a better fit; it closely matches how the term is being used by vendors. The problem is that coordinated disclosure isn’t necessarily responsible — and full disclosure isn’t necessarily irresponsible.

The term is a bit of a misnomer really — as researchers, our responsibility is to users, though the term is often taken to mean a responsibility to vendors. This is the biggest issue I have with it: it focuses attention on the wrong group in a disclosure. As a security researcher, my responsibility is to make the world a safer place and to protect users, not to protect the name of a vendor.

Based on this, I would say that responsible disclosure is wrong - or more accurately, how it’s been defined is wrong. As defined, what we get from the term is a one-sided view of the disclosure process and its goals. I like the term, but the definition doesn’t suit the reality of vulnerability disclosure.

Sensible Disclosure

Perhaps we need a better term to describe real world disclosures - full disclosure and coordinated disclosure are both potential outcomes of the sensible disclosure process. Every step in the decision process needs to factor in what’s best for users.

Let’s look at what a truly responsible disclosure decision process needs to include - decisions that are ignored by the simplistic definition used for responsible disclosure today. This is far from complete of course; these are just high-level questions that need consideration, and there are situation-specific considerations that can have a dramatic impact on what the right decision is.

Can users take action to protect themselves? If you release details publicly, are there concrete steps that individual users and companies can take to protect themselves?

Is it being actively exploited? If a vulnerability is being actively exploited, the focus has to shift to minimizing damage - this can change the weight of the other factors drastically.

Is the issue found with minimal effort? Is the vulnerability something difficult to find, or something that anyone would notice if they looked in the right place? If it’s something that anyone would notice, how likely is it that others have already found it, and are using it maliciously? Even if you don’t have evidence that something is being exploited in the wild, it’s still quite possible, and this needs to be considered.

Is the issue something that can be corrected without major effort? Some issues are simple — a few lines of code and it’s gone; others are difficult, or even impossible, to fix.

If patched today, how are users impacted? With some flaws, you apply a patch to the code and you are done — with others, there is still cleanup to do. For example, errors in cryptography can mean that messages aren’t protected as they should be; every message encrypted before the fix expands the issue and increases risk, and that exposure isn’t addressed just by patching. There is also the related issue of backwards compatibility — fixing the flaw may break systems, or require substantial cleanup (think re-encrypting large amounts of data).

Is the vendor responsive? Vendors have a responsibility to respond quickly, and to take action quickly to address reported vulnerabilities. Are they responding at all? Are they trying to address the issue, or just keep it away from the press for as long as possible? If the vendor isn’t acting, is more pressure needed to get the issue resolved? Another important point when evaluating vendor response: they are likely addressing other issues as well, which may be more severe; something you think is critical may deserve less attention than other bugs you aren’t aware of.

How severe is the issue? Is this an earth-shattering vulnerability that would lead those affected to take drastic action if they knew about it, or something minor that is a real issue, but not deserving of panic?

Sensible Disclosure should include evaluating all of these, then proceeding to coordinated disclosure, full disclosure, or some hybrid — based on what’s best for those impacted. This is full of nuance, full of subtleties — it’s not the black-and-white “tell us, only us, and we’ll tell you when you can say something” policy that vendors like, but it provides a path to acting in the interest of users first.

Users First, Always

There is nothing wrong with coordinated disclosure — this should be the goal: quick vendor response, protecting users as quickly as possible, with minimal or no malicious use of a flaw. Generally speaking, contacting the vendor should be the first step; hopefully they act quickly and the rest of the process is easy. Sometimes, though, they don’t, and sometimes full disclosure is the only option to get them to act. Sometimes the delay of working with the vendor would put people at risk.

For a security researcher, in general, full disclosure should be the last resort, pulled out when working with the vendor has failed. There are some cases where getting the word out quickly is more important though — it depends on the vulnerability, and the impact to those affected.

Each and every decision made in a disclosure process should be focused on the users, and what protects them best — some vulnerabilities require so much research, and are so difficult to exploit, that taking a year to quietly fix them is fine. With others, every day that goes by moves users closer to disaster; most are somewhere in between.

There is no one-size-fits-all solution for vulnerability disclosure; it’s that simple. “Responsible disclosure” doesn’t address the subtleties that are actually involved. The term Sensible Disclosure may be closer to reality, though I don’t like it as much.

Be responsible, protect users — practice sensible disclosure.

Making BSides Knoxville

Two years of discussions, months of planning, weekly meetings, and thousands of dollars - BSides Knoxville 2015, the first BSides Knoxville that is, is in the books. By any metric I can think of, it was a resounding success - great feedback, awesome talks, good food, and a great atmosphere.

I would like to give a little insight into the event, some of what I learned from it, what went right, what went wrong, and how to pull something like this off without going insane. Hopefully this will be useful for others thinking about running a small conference, or if you just want a behind-the-scenes view of what goes on.


A conference of any size takes time to put together, and a fairly small regional event is no different. The planning actually started in the summer of 2013, with an event planned for the spring of 2014 — that clearly didn’t happen. Venue quotes, preliminary budgets, talks with potential sponsors — altogether, several hundred hours of work went into the 2014 event before a painful decision had to be made.

Putting something like this together is hard — you truly have to juggle a thousand things at once, and if you can’t dedicate the time needed to keep track of them all, things fall through and the event fails. In May of 2014, we let the main BSides organization know that we were putting the event on hold, with no set date to resume. After so much work, this was a hard thing to do — but we wanted to do it right, and the team we had simply couldn’t put in enough time to make it happen. If we couldn’t make it the event Knoxville deserved, we weren’t going to do it.

At DerbyCon 2014, a few of us met and discussed the path forward — we still had a strong desire to make the event happen, but it had to be right. After a number of conversations, it was clear that we needed to rebuild the core team.

The timing couldn’t have been better.

A couple weeks after DerbyCon, a message was relayed through the main BSides organization that someone else was interested in getting a BSides event in Knoxville. Perfect.

In November 2014 the regular meetings started, and they ran until a few days ago. Across the four-person team, the time investment ranged from 10 to 40+ hours a week: running the CFP, finding sponsors, negotiating with venues, designing and manufacturing badges, and of course promoting the event.

The best lesson here was to find good tools and use them religiously — we coordinated everything through Trello, and it worked out beautifully. Making the move to run everything through it might have been the best decision we made.


Running a conference isn’t cheap, and without the generosity of the sponsors, it simply wouldn’t be possible. To give a rough idea, the cost per attendee was roughly $77 — we charged $10; the $67 difference was covered by the sponsors. So when I say it wouldn’t be possible without them, I’m really not kidding.

The cost of food & drinks was, by far, our largest expense — making up over half of the budget. The badges made up the next largest cost — we really wanted to do something special, useful, memorable, and we hope we did it.

Raising the necessary money and determining how to use it to get the most value for everyone involved is anything but simple. Of all the time we spent in the planning meetings, approximately half was spent discussing money.


Without the team, both organizers and volunteers, there’s no way it would have happened. In total, there were four organizers and twelve volunteers. Having a good team is critical for the event’s success — without a good team of people that are willing to work hard, the event will have issues every step of the way.

Volunteering at a conference is a great way to learn how to manage an event, and to get insight into the issues that you may have and how to deal with them. For me, volunteering at BSides Las Vegas was a great experience — extremely educational, and something that everyone interested in running a conference should do.

I was in charge of coordinating the staff, and I feel that I could have done a better job of coordinating the team and promoting communication. This is my top personal item to improve for next year.


We had 20 speakers and 14 talks, plus the keynote — the content, I think, was excellent. The feedback from the speakers was very positive, and the feedback on the quality of the talks couldn’t have been better.

One of the goals was to make sure that the speakers walked away with a positive feeling about the event, and would remember it for years to come. We had custom flasks made, speaker areas for each track stocked with snacks and drinks, walk-out music, and so forth — all with the goal of making sure that all of the speakers would walk away looking forward to next year.

The one area where I think we failed the speakers is that we didn’t accurately judge which talks would need the most space — leading to a couple of cases where the smaller area was completely packed, and the larger area had half the seats empty. This is something we will do better next year.


We had a target of 200 people, including staff and speakers — and we sold out around a month before the event. Had we been able to open up more spots, we likely could have increased this number by quite a bit. There’s a huge interest in the Knoxville area for this kind of event. For our first year, the response was incredible.

Crypto Front Door: Everyone Welcome!

For decades, the US Government has fought — sometimes with itself — to prevent the use of secure cryptography. During the first crypto war, they allowed strong cryptography within the US, but other countries were limited to small keys — making brute force attacks practical. But what about those pesky US citizens? They didn’t really want them to have strong crypto either — enter key escrow.

What is key escrow?

According to Wikipedia:

Key escrow is an arrangement in which the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party may gain access to those keys. These third parties may include businesses, who may want access to employees’ private communications, or governments, who may wish to be able to view the contents of encrypted communications.

Essentially, it’s a system where someone other than the owner of the data gets a copy of the key, which can be used to decrypt the data without the owner’s permission.

If the organization holding the escrow keys can surreptitiously access the data — such as by retrieving it from a service provider, or capturing it from a public network — the data can be decrypted in complete secrecy. The owner of the key need not have any idea that it’s happened, or who did it. This property raises many privacy concerns — it takes a technical control and turns it into a policy matter.
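To make that concrete, here is a minimal sketch of what escrow means in code, written in Python with the cryptography package (the names and structure here are hypothetical, not modeled on any real escrow system): a message is encrypted with a fresh data key, and that key is then wrapped both for the owner and for the escrow agent, so the agent can decrypt without the owner ever being involved.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def oaep():
        # Standard RSA-OAEP padding, used here for key wrapping
        return padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

    owner = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_agent = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Encrypt the message with a fresh symmetric data key...
    data_key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, b"private message", None)

    # ...then wrap that key for the owner AND for the escrow agent.
    wrapped_for_owner = owner.public_key().encrypt(data_key, oaep())
    wrapped_for_escrow = escrow_agent.public_key().encrypt(data_key, oaep())

    # The escrow agent can recover the data key, and so the message,
    # without the owner's participation or knowledge.
    recovered_key = escrow_agent.decrypt(wrapped_for_escrow, oaep())
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"private message"

Nothing in the ciphertext tells the owner whether the escrowed copy of the key has ever been used; the only thing standing between the agent and the plaintext is policy.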

In general, if there’s nothing other than a piece of paper protecting your data, it isn’t protected at all. In the case of government-controlled key escrow, that’s exactly the situation. If an employee violates policy and accesses escrow keys without authorization, they have the potential to secretly access data — and it’s quite possible it would never be known by anyone. Every environment where controls are purely policy-based instead of technical, from the NSA to the Certificate Authority system, has seen those controls bypassed for various purposes.

The most famous, or perhaps infamous, use of key escrow was the fundamentally flawed Clipper chip — an attempt by the US Government to allow encrypted calls, while allowing them easy access. Thankfully, this effort died, mostly taking key escrow with it — at least for a few years.

A rose by any other name…

The Director of the NSA, Michael Rogers, has repeatedly called for the introduction of a “frontdoor” — which is a backdoor with a policy that says they have paperwork to do before they can use the keys they hold. The people of America, and the world, are to trust the secret rubber stamp court and the policies of various agencies with our most sensitive information — hoping they follow their own rules (which are probably also secret).

While Rogers has done his best to convince people that there’s a difference, the fact is that no matter what you call it, a backdoor is still a backdoor. It’s still a way for someone who isn’t the owner of the data to decrypt it, without their approval and quite possibly without their knowledge.

That looks just like the definition of the word backdoor:

a feature or defect of a computer system that allows surreptitious unauthorized access to data.

Massively Increased Complexity

The privacy concerns and potential for abuse aren’t the only cost of key escrow — the complexity of the system is greatly increased, and as with any system, when complexity goes up, so does the risk of vulnerabilities. The risks are so clear that even the US Government can’t agree on the plan — NIST has already spoken out against it.

Rogers said: “I want a front door. And I want the front door to have multiple locks.” — what he’s really talking about is key escrow with key splitting. In other words, taking a key and splitting it into two or more pieces, with the pieces held by separate entities. The data could only be decrypted when the entities combined their pieces (intentionally or otherwise); a minimal sketch of this splitting follows the list below.

This adds complexity in a few ways:

  • Key Splitting — The splitting process has to be done properly to ensure that neither entity would be able to recover the full key without the cooperation of the other.
  • Key Registration — When the encryption key is generated, it has to be split, and registered with the entities that will hold the parts. There are risks of data being lost in a number of ways here, and this registry would be a natural target for attack.
  • Key Part Transport & Storage — Being a natural target for attack, the key parts need to be carefully handled throughout their lifetime.
  • Key Recovery — Certain paperwork would be required to access the key parts, and that paperwork would be validated by a human — should that human ignore policy, or fail to verify the paperwork, keys could be accessed without proper authorization.
  • Key Destruction — Once authorization expires, or the need for access has ended, the combined key and any copies of the key parts outside of the registry should be destroyed. If not properly handled, it would provide another avenue for unauthorized access.
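As a rough illustration of the first item above, here is a minimal two-party XOR split in Python (a simplified sketch; a real scheme would more likely use a proper threshold secret-sharing construction): either share on its own is indistinguishable from random noise, and only combining both recovers the key.

    import os

    def split_key(key: bytes) -> tuple[bytes, bytes]:
        # One share is a random pad; the other is the key XORed with that pad.
        # Individually, each share looks like random bytes.
        share_a = os.urandom(len(key))
        share_b = bytes(k ^ a for k, a in zip(key, share_a))
        return share_a, share_b

    def recover_key(share_a: bytes, share_b: bytes) -> bytes:
        # Only when both entities cooperate (or one is compromised)
        # does the original key come back.
        return bytes(a ^ b for a, b in zip(share_a, share_b))

    key = os.urandom(32)            # a 256-bit escrowed key
    a, b = split_key(key)
    assert recover_key(a, b) == key
    assert a != key and b != key    # neither share alone is the key

The splitting itself is the easy part; every other item on the list (registration, transport, recovery, destruction) exists to shepherd those shares around, and each of those steps relies on policy rather than mathematics.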

Of course this is just a high level view, and there’s more — all of these additions, unnecessary today, add risks of flaws (accidental or intentional) that could expose data.

Opening the floodgates…

One of the most astounding things to me about this proposal is the level of arrogance it demonstrates. The NSA (and similar organizations) has a strong preference for NOBUS (nobody but us) backdoors, and they are making the daring assumption that this will be a NOBUS backdoor.

For example, if the US Government receives a copy of the encryption keys for iPhones, devices made in China, do they really believe that the Chinese Government won’t take advantage of the situation to get a copy of the keys as well? The US Government has set a clear precedent of using secret court orders to retrieve data — what makes them think that China would hesitate to order Foxconn’s facility in Shenzhen to provide it with a copy of all escrow key parts?

For companies with offices outside of the US, or that deal with manufacturers outside of the US, there is a real risk of being targeted so that the keys can be captured before they are ever delivered to the US Government. This is the same technique that GCHQ used against Gemalto to get keys for SIM cards.

To assume that the US Government can open Pandora’s box, and yet maintain control is sheer arrogance. This opens the door not only to vulnerabilities, but to other governments that want to get in on the game. Once the requirement is set, it will be widely abused.

On the Underhanded Crypto Contest

On August 15th of last year I asked if anybody would be interested in a contest for the best, most evil underhanded crypto techniques — the response was clear, and less than a month later I announced the creation of the contest.

Before I go any further, the contest simply wouldn’t have been possible without the huge effort by Taylor Hornby to help organize, coordinate and communicate. I couldn’t have asked for a better co-organizer for this event.

Just over six months after the announcement, we finally announced the winners yesterday (only two months later than planned).

The winners, and really all of those that entered, put an amazing amount of effort into it. The entries were fantastic, and quite honestly a few people found them a bit scary - simple, subtle, effective. This is exactly what we wanted though.

The goal of the contest, and the driving reason that we required the submissions be under an open license, was to provide researchers, developers, and reviewers with better insight into how these flaws can be introduced — and hopefully how to detect them.

Based on the comments we’ve received on the winners, I think this will certainly show how subtle these attacks can be. It’s our hope that this turns into a valuable training resource for the community, and will lead to fewer backdoors — intentional or otherwise.

We are discussing plans for the next Underhanded Crypto Contest now, and we’ll be announcing something soon.

The Evolution of Paranoia

That researchers from Kaspersky Lab uncovered malware that uses hard-drive firmware has now been thoroughly discussed — perhaps too much for some people. It’s not exactly Earth-shattering news either; the idea has been discussed for years, and has been publicly demonstrated. Brandon Wilson and I were even working on a proof of concept for SSD controllers to demonstrate this, based on our BadUSB work.

This isn’t about that story, exactly. This is about paranoia, and how it has changed over the last few years — and even the last few months.

I was talking to Brandon Wilson about the implications of Kaspersky’s discovery, and how, or even if, you could ever trust your platform. When you could have malicious firmware in key system components — hard drive, USB devices (keyboard, mouse, etc.), USB hub — and possibly others, such as the GPU or the webcam that’s in virtually every laptop, how could you ever feel secure? How would you ever even know about it? Every device that has updatable firmware is a possible target, and far too few of them use any form of effective security to prevent malicious changes.

I pointed out that I buy all of my computers and key hardware from stores, I don’t have any of it delivered. Why? Interdiction.

If it has my name associated with it prior to being in my hands, how do I know that it’s not been tampered with? Prior to Edward Snowden, I would have said that was paranoid, that taking such precautions was at best a waste of time and at worst a sign of delusion. Today? If you are working in a field where you could have useful information, it seems quite reasonable. Paranoia has evolved; what was once unreasonable is now prudent.

It was of course known that such things were possible before Snowden, but the scale was unknown — and of course the NSA and the FBI aren’t the only threats; if they are doing something, you can bet they are far from alone. When that Lenovo laptop was shipped from China, do you really think that the Chinese Government wouldn’t take the chance to step in and gather some extra information? Launch-day iPhones have been shipped directly from China, and there are so many other examples. If the NSA can tamper with a shipment, so can any country that has even temporary access to a package; if it’s on their land, they can attack it.

The focus has been on what the NSA does, but the information should be used not as a way to attack the NSA, but to gain insight into the global threats faced by everyone who could be of interest. It’s important to remember that they don’t just target terrorists.

I recently had a person contact me, concerned that a device she had was compromised — while I could tell her that unless she was attracting attention from a major player she was likely safe, I couldn’t tell her that she actually was safe, or give her anything she could do to ensure that nothing was infected. As these techniques spread to more common attackers, the risk that average people will be targeted grows dramatically. Attacks not only get better over time, they become more widespread. From repressive regimes that outsource their attacks, to poorly supervised local law enforcement, to common malware — it’s only a matter of time.

Defending against these attacks, without major changes from device manufacturers, is at best a nightmare; at worst, impossible. I have repeatedly called on USB controller manufacturers to secure their devices, as that’s the only way that BadUSB can be truly fixed. The same needs to be said for so many other device types — it’s up to device manufacturers to secure their products.

The Chain of Trust

For any system to be secure, there must be trust at some point, which can then ensure that later layers are correct and untampered with. Attacking firmware defeats the chain at its first link — attack the hardware at the lowest level, and nothing that comes later, including the operating system, can truly be trusted.
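To show what an intact first link looks like, here is a minimal sketch in Python of the check a device would need to perform before accepting new firmware (hypothetical; real devices do this in ROM or a boot loader, and the use of Ed25519 here is just an illustrative choice): a public key that ships immutably with the device verifies the vendor's signature over the image.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def verify_firmware(rom_public_key: ed25519.Ed25519PublicKey,
                        image: bytes, signature: bytes) -> bool:
        # Accept the image only if the signature checks out against the
        # key that was burned into the device at manufacture.
        try:
            rom_public_key.verify(signature, image)
            return True
        except InvalidSignature:
            return False

    # Demonstration: the vendor signs an image; the device's ROM key verifies it.
    vendor_key = ed25519.Ed25519PrivateKey.generate()
    rom_key = vendor_key.public_key()        # what would ship in the device
    firmware = b"\x7fFIRMWARE..."            # stand-in for a real image
    signature = vendor_key.sign(firmware)

    assert verify_firmware(rom_key, firmware, signature)             # genuine update
    assert not verify_firmware(rom_key, firmware + b"X", signature)  # tampered image

A controller that enforces a check like this refuses modified firmware; devices that accept unsigned updates, as in BadUSB, are exactly where the chain breaks.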

The more important impact though is not technical, but psychological. If a person doesn’t know what they can or can’t trust, they start to fear everything. For NSA, GCHQ, and countless other agencies in the same business, this is good news — if people can’t trust their computers or their phones, they will turn to less secure means of communication. This is also extremely bad for consumers, business, and investors — as these tools can be used not just to go after government selected targets (legitimate or otherwise), but for profit, for blackmail, for revenge, or just for a thrill.


While the public focus of NSA is to combat terrorists, it’s been well documented that their targets go far beyond that — researchers, IT staff, business executives, you name it. Yet, I’m a citizen of the United States, and as such, I shouldn’t be a potential target for them (give or take being caught up because of people I know in other countries). Is that the end of the risk for me? No, not by a long shot.

While the Five Eyes countries share intelligence, they don’t share restrictions on who they can spy on. For GCHQ, whether I’m a citizen of the United States or of Afghanistan makes no difference; I’m a valid target under their laws. Canada, Germany, Russia, China, Taiwan — I’m not protected under their laws either; if they think I could have interesting information, or access to interesting information, on any topic, I could be a target. So could you. What information do you have access to, who do you know, and what information could they have access to?

If you work in security, development, IT, telecom — that could mean that you have access to some information that some country would like to have. Is that paranoia? A few years ago, some would say yes — now that we have a better insight into the scope and scale of intelligence activities, we know it’s simply reality.

Personal Threat Models

I have long encouraged people to have a personal threat model — what are your realistic threats? When talking to others, keep in mind that their threat model may be different than yours, and things you see as being paranoid could be quite prudent for them, due to the different risks they face.

For me, to be honest, I’m not that interesting to a foreign power — if anything, trading emails with Glenn Greenwald and trading tweets with people like the grugq has done more to make me a target than any professional activity. The information I can access because of my job is somewhat interesting, and is certainly of value — but more to the Russian Mob than to a foreign government. I pay attention to who I talk to, to what I make public, to what my accounts have access to, so I know what my risks are.

If you work for a more interesting company, or are engaged in research that could be useful, or even just know people who could be more interesting than you, your threats could be completely different. Of course, you also have to factor in location — if you are outside of the US and work for a company that could have interesting information, then your threats may be far more complex.

Defining the line between reasonable and paranoid is harder than ever, and may vary from person to person.