Utilitarian Nightmare: Offensive Security Tools

Or: Ethical Decision Making for Security Researchers.

There has been much discussion recently on the appropriateness of releasing offensive security tools to the world – while this storm has largely come and gone on Twitter, it’s something I still find myself thinking about. It boils down to a simple question: is it ethical to release tools that make it easy for attackers to exploit vulnerabilities they otherwise couldn’t?

I’ve written about the ethics of releasing this type of information before (On The Ethics of BadUSB, Responsible Disclosure Is Wrong), and I’ve always believed that it should be addressed on a case-by-case basis, based on what is best for end-users. What action protects them most effectively?

The problem with that, though, is that it’s a highly subjective standard; if you ask 10 security researchers where the line between helping and harming users falls, you’re likely to get at least 10 answers. Perhaps, to drive this conversation forward, we need to look beyond our own gut feelings and personal positions to a more accepted framework for determining which releases are ethical, and which aren’t.

Enter Utilitarianism

utilitarianism is generally held to be the view that the morally right action is the action that produces the most good … bringing about the greatest amount of good for the greatest number – The History of Utilitarianism, Julia Driver

Jeremy Bentham by Henry William Pickersgill.

While I personally find utilitarianism to be a most useful philosophy in general, it’s especially useful in questions such as this, where as a security researcher you are making a decision that can impact vast numbers of people. Relying on a well-studied model to determine if your decisions are sound is often a good idea – what seems right to you may be skewed by personal interest, bias, ego, or naivety.

While there’s no need to go into utilitarianism in great detail here, as it’s documented and discussed at length by others, it is useful to understand some of the basics, especially as they apply to security research and to the release of tools that may put people at risk. This is an incredibly complex topic, one that has been debated ad nauseam for many years and, in all honesty, will continue to be debated for many years to come. Hopefully though, a brief discussion of the ethics of this issue from a philosophical perspective will help you in your future decisions.

There are a few things about utilitarianism that are of particular importance for this discussion:

  • Positive impact is impartially considered. What is good for you is no more important than what is good for any other person. This is a critical aspect to understand, and one that mustn’t be ignored. If you place your own interests above those of others, you are discounting the impact of the decision on others – quite likely resulting in a choice that has an outsized negative impact without your realizing it.
  • Happiness is the only thing that has intrinsic value. Security provides happiness in that it makes people feel safe, and reduces the risk of being unhappy should they suffer an attack in the future. In this philosophical view, technical security has no intrinsic value whatsoever. This is sometimes difficult to process for those who have built their career around security – we work very hard to make the world a safer place through technical skills and extensive knowledge, but it’s the effect that effort has on people (making them happy) that has value.
  • It’s the consequences that matter. Utilitarianism is a consequentialist philosophy: all that matters is the impact of a decision. Your motives for releasing something to the world are actually irrelevant – it doesn’t matter if your underlying motivation was to make people safer or to sink the stock of a company; what matters is how much happiness you added to the world versus the amount of unhappiness. Having good intentions doesn’t make a decision ethical if the impact is negative.

With these three key points, we can begin to break down the decision-making process and try to identify whether a decision is ethical.

Utility Calculus

The father of modern utilitarianism, Jeremy Bentham, helpfully developed a formula to determine if an action is ethical; here is a somewhat simplified version:

  • Intensity – How strong is the feeling of happiness?
  • Duration – How long does the happiness last?
  • Certainty – How likely is the happiness to actually occur?
  • Immediacy – How soon will the happiness occur?
  • Productivity – How likely is this happiness to lead to further happiness?
  • Purity – How unlikely is this happiness to be followed by future unhappiness?
  • Extent – How many people will this make happy?

For some things, it’s easy to determine if a decision is ethical – for example, donating money to a food bank. On one hand, you are losing money that could have been used to make yourself happy. On the other hand, it will likely make several other people far happier than you would have been; it will have a quick impact; it may lead to avoiding illness thanks to a better diet; it’s unlikely to lead to future negative effects; and there is little doubt that it will actually have that positive impact. In this case, there’s no real doubt that it is, in fact, an ethical choice to make.
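To make the arithmetic concrete, here’s a minimal sketch of that calculus in Python, applied to the donation example. The seven factors follow Bentham’s list above, but the 0.0–1.0 scale and every score are my own hypothetical illustrations – Bentham gave no units, so any quantification is a judgment call.

```python
# A toy rendering of Bentham's utility calculus. The seven factors follow
# the list above; the 0.0-1.0 scale and all scores are hypothetical
# illustrations, not part of Bentham's framework.

FACTORS = ["intensity", "duration", "certainty", "immediacy",
           "productivity", "purity", "extent"]

def utility(scores):
    """Sum the per-factor scores; a positive total suggests net happiness."""
    return sum(scores[f] for f in FACTORS)

# Rough, illustrative scores for the food-bank donation example above.
donation = {
    "intensity":    0.6,  # recipients are far happier than the donor would have been
    "duration":     0.4,  # the relief lasts days, not years
    "certainty":    0.9,  # little doubt the positive impact will occur
    "immediacy":    0.8,  # the impact is quick
    "productivity": 0.5,  # a better diet may help avoid illness later
    "purity":       0.7,  # unlikely to lead to future unhappiness
    "extent":       0.8,  # several people benefit at one person's expense
}

print(f"net utility: {utility(donation):+.2f}")  # positive, so ethical by this measure
```

Real decisions obviously can’t be scored this neatly – the point is only that each factor gets weighed explicitly rather than left to gut feeling.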

Not everything is this simple though. A couple years ago I wrote about breaking encryption used by ransomware, complete with the full details of the mistakes that the author made. This is a more complex situation, and some were very vocal in stating that they thought it wasn’t ethical to disclose all of the details. In this case, I was informing defenders of an approach that could lead to recovering data that had been encrypted by this malware, informing developers on how to build more secure applications by learning lessons from a public failure, and inspiring those that have to deal with ransomware to look into an approach that they could consider in future cases. I was also making it easy for the developers of the ransomware to fix their mistakes by explaining exactly what they did wrong. My belief when I wrote that article was that it would be a net positive, even in the unlikely case that the developer of the ransomware found my article and corrected their mistakes. Was it actually ethical? I’m still not sure.

The Nightmare

As with my example about the ransomware above, offensive security tools are not nearly so easy to analyze. There are many factors that need to be considered, and then, and only then, can a determination be made as to the true impact.

  • Would an attacker of average skill be able to discover how to exploit a vulnerability if the tool wasn’t released?
  • Is there a patch available to address the issue?
    • If yes:
      • Have users had a reasonable amount of time to install it?
      • Will releasing the tool now lead to more systems being patched?
    • If no:
      • Will making a tool available lead to a patch being available sooner?
      • Is there a work-around available to allow users to protect themselves?
  • If there is no tool available, will users be able to reasonably determine whether they are impacted?
    • Will users be able to detect the issue in an automated fashion if no tool is released?
  • Is there a way to protect users from attackers without giving attackers full information on how to exploit an issue?
    • Will withholding a tool lead to a false sense of security or delay patching?
  • What is the likelihood that attackers will leverage the tool against users?
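As a sketch of how a checklist like this can feed back into the utility calculus, here is a hypothetical decision aid in Python. The questions mirror the list above, but every weight and the release rule (lean toward release only when the expected benefit to defenders outweighs the expected harm) are my own illustrative simplifications, not an established standard.

```python
# A hypothetical decision aid built from the checklist above. All weights
# are illustrative; real decisions need human judgment, not a formula.

from dataclasses import dataclass

@dataclass
class ReleaseContext:
    attacker_could_rediscover: bool   # could an average attacker work it out anyway?
    patch_available: bool
    users_had_time_to_patch: bool
    release_accelerates_patching: bool
    workaround_exists: bool
    users_can_self_assess: bool       # can users detect exposure without the tool?
    attacker_use_likelihood: float    # 0.0 (unlikely) to 1.0 (certain), a guess

def lean_toward_release(ctx: ReleaseContext) -> bool:
    benefit = 0.0
    if ctx.attacker_could_rediscover:
        benefit += 0.4   # withholding mostly denies defenders, not attackers
    if ctx.patch_available and ctx.users_had_time_to_patch:
        benefit += 0.3   # the fix has had time to propagate
    if ctx.release_accelerates_patching:
        benefit += 0.3   # release may pressure a faster fix
    if not ctx.users_can_self_assess:
        benefit += 0.2   # the tool doubles as a detection aid for defenders

    harm = ctx.attacker_use_likelihood
    if not ctx.patch_available and not ctx.workaround_exists:
        harm += 0.4      # users currently have no way to protect themselves

    return benefit > harm

# Example: patched bug, users had months to update, no automated self-check exists.
ctx = ReleaseContext(True, True, True, False, True, False, 0.6)
print("lean toward release:", lean_toward_release(ctx))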

There are many more questions that could be asked of course, but these are some of the basics that should be considered when determining what goes public (and when). As a researcher, determining what is released to the public is a significant decision, and one that places a great deal of responsibility on the researcher to determine the best approach for a given situation. There is no single rule that will ever be right for every situation; many factors must be weighed and reweighed as information changes, and it is this process of asking questions that will allow researchers to make the best decision possible – the best decision for the greatest number of people.

I have often described security research as being in a perpetual gray area, where the lines between right and wrong, ethical and unethical, are unusually blurry. Some things are clear and without question, but many things exist in a more complex state. As a researcher it’s important to find a way to break down your own decision making to ensure that what you are doing is actually in the best interest of users.

Disagree?

That’s fine.

While this presents what I consider to be the best philosophy for evaluating the ethics of a decision as a researcher, it is not the only method, and may not be the best for you. It’s unlikely that any single philosophy will ever be accepted by everyone, but what is important is that everyone who takes on weighty decisions that can have a substantial impact on other people evaluates how they come to those decisions, and finds an approach that ensures they are truly ethical.
