Threat Modeling for Applications


Whether you are running a bug bounty, or just want a useful way to classify the severity of security issues, it’s important to have a threat model for your application. There are many different types of attackers, with different capabilities. If you haven’t defined the attackers you are concerned about, and how you deal with them, you can’t accurately define just how critical an issue is.

There are many different views on threat models; I’m going to talk about a simple form that’s quick and easy to define. There are books, tools, and countless threat modeling techniques – it can easily be overwhelming. The goal here is to present something that can add value, while being easy to document.

The Threat Actors

To define the threat model, you must define the different types of attackers that your application may face; some of these apply more to some applications than others. In some cases, it may be perfectly reasonable to say that you don’t protect against some of these – what’s important is to clearly document that fact.

It’s also important to think about what attackers want to achieve – are they trying to steal sensitive data such as credit card numbers or user credentials, tamper with existing data (from business plans to student grades), disrupt services (which can come at great cost), or attack for some other purpose? There are countless reasons for attackers to target an application. Some are important to understand (such as stealing credit card numbers) in order to prioritize protection mechanisms; in other cases, such as those who attack just because they can, it’s impossible to prioritize defenses, as there’s no way to predict how they’ll attack.

Passive Attackers

Passive attackers are in a position that they are able to see communications, but can’t (or won’t, to avoid detection) modify those communications. The solution to most of the passive attackers is to encrypt everything – this not only eliminates most of the passive attacks, it eliminates or greatly complicates many active attacks.
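For a web application, “encrypt everything” in practice means forcing HTTPS and telling the browser never to downgrade. A minimal sketch of that policy (the max-age and includeSubDomains values are illustrative assumptions, not taken from any application discussed here):

```python
def upgrade_to_https(host: str, path: str) -> tuple:
    """Build the response that moves an HTTP request onto HTTPS.

    Returns a (status, headers) pair: a permanent redirect to the HTTPS
    URL, plus a Strict-Transport-Security header so the browser uses
    HTTPS directly for all future requests.
    """
    headers = {
        "Location": f"https://{host}{path}",
        # One year, covering subdomains -- illustrative policy values.
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    }
    return 301, headers
```

Once the browser has seen that header, even a passive attacker on an open WiFi network never gets a plaintext request to observe.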

Local (User)

One of the most common examples of a local passive attacker is an evil barista – you are working from a coffee shop, on their open WiFi. At that point, they (and many others) can easily see your network traffic. This also applies to corporate networks and many other environments. What information can a passive local attacker gain now to perform attacks later?

Local (Server)

Users aren’t the only ones that have to worry about local passive threats; an attacker monitoring the traffic going to and from a server can be quite effective (via a rogue monitor port on a local switch, perhaps). By watching this traffic, an attacker can gain a great deal of information for future use, and perform some very effective attacks.

One example is a password reset email – if it’s not encrypted in transit, an attacker could capture the reset link, and change a user’s password. This is especially effective if the attacker triggered the password reset themselves, and it’s made worse by the common user training to ignore password reset emails that they didn’t request. As an example, let’s look at the email that Twitter sends when you request a password reset:

[Image: Twitter’s “Reset your password” email]

The all too common verbiage “If you didn’t make this request, please ignore this email” is present – an attacker requests a password reset, and waits for the email to go through. If the connection between the application and the remote email service isn’t encrypted, an attacker could easily take over accounts.

Upstream

Upstream providers, from the server or client end, are in a position where they have easy access to information – they are also a target for those that want that information; and they will provide it, sometimes willingly, sometimes by force. These positions of power allow for transparently monitoring all traffic to and from a device – giving no indication that an attack is ongoing.

As with passive local attackers, any unencrypted communication could be captured, and used in real-time, or stored for later use to enable attacks.

Global

A special, and thankfully rare, type of passive attacker is one that has data feeds coming from many locations, and is able to analyze both ends of a communication. This type of attacker is a challenge that only the most sensitive systems need to be concerned with. Perhaps the most public example of an application that is especially vulnerable to this type of attacker is Tor, thanks to traffic correlation.

Active Attackers

Active attackers run the gamut from normal users looking to get around restrictions to professional hackers – with motivations running from boredom to profit. With passive attackers, there’s generally no way to detect an attack, as nothing is being changed. With active attackers, there are often clear signs of their presence – though too often they are missed due to a flood of false positives.

Malicious User

The malicious user is a legitimate user of your application – they could be an employee, or someone who signed up from the internet – who seeks to gain additional privileges, gain access to restricted data, or otherwise make your application do whatever they want. In some of the simplest cases, it can be an employee looking to gain extra privileges to bypass required work; in others, it can be an attacker looking to steal data from the application’s database.
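A sketch of the kind of server-side check that blunts this attacker – all names and data here are hypothetical. The point is that a record’s owner is compared against the user ID stored server-side in the session, never against anything the client submits:

```python
# Hypothetical session and data store, for illustration only.
SESSION = {"user_id": 42}          # populated at login, held server-side

RECORDS = {
    1001: {"owner_id": 42, "data": "call notes"},
    1002: {"owner_id": 99, "data": "someone else's notes"},
}

def fetch_record(record_id: int, session: dict) -> dict:
    record = RECORDS.get(record_id)
    # Deny access unless the session's user owns the record; because the
    # user ID comes from the session rather than the request, a malicious
    # user can't simply change an ID to read another user's data
    # (horizontal privilege escalation).
    if record is None or record["owner_id"] != session["user_id"]:
        raise PermissionError("access denied")
    return record
```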

Most often, this type of attacker will look for easy wins: they may try to understand the application by manually studying it, or may just use one of the many tools available to see if there’s an easy way in. Most of the so-called “script kiddies” fit into this category.

Malicious Device

The risk isn’t only from a malicious user; a legitimate, well-intended user may have malware, or have had their device otherwise compromised. Attackers can leverage a user’s device to perform attacks as them, or to capture information. When a user’s device is compromised, a whole range of attacks are enabled – against many types of applications. It has been well documented that software that allows simple man-in-the-middle attacks can be pre-installed on new PCs – and it can be easily installed via other malware. By performing a local man-in-the-middle attack, an attacker could not only capture information, but inject malicious traffic.

In many cases, there is little to nothing that can be done in these situations, as there’s no reliable way to ensure that a device isn’t compromised.

Advanced / Persistent Attacker

From script kiddies with a lot of time on their hands, to professional hackers, this class of attacker will use similar tactics to the Malicious User, but will move on to more advanced attacks and better tools to achieve their goal. From using effective Cross-Site Scripting (beyond proof-of-concepts like alert('xss')), to SQL Injection, and so many more attack types, these attackers are the ones that are most likely to find real issues. When building a threat model, this class of attacker is likely the most important to consider, as they are the greatest and most likely threat.

IT / SysAdmin

One threat that is too often overlooked is the people running the servers – they have access to the logs, can see the server setup, and in some cases can see the application source code and configuration files. This type of attacker has far more insight than most attackers do – in some cases, it’s assumed that those that run the servers simply must be trusted, as there’s no way to fully protect against them.

This is a case where detailed logging, change approval, and separation of duties – mostly process controls – can help, though are unlikely to prevent someone with this degree of knowledge from taking what they want. When defining a threat model, it’s important to address this issue – identify the technical and procedural controls that are in place, and determine how, or if, you will address the situations where such an attacker could be effective.

Developers / DBA

Perhaps the hardest to address are those that build and maintain the application itself. How do you prevent them from inserting a backdoor? Code reviews are normally the answer, but there are real flaws to relying on them. While there may be policies that prevent developers from accessing the production database or credentials, a simple “mistake” that displays a detailed error message, or allows SQL or code to be injected, can quickly render those policies worthless.

A malicious developer is an extremely difficult attacker to address – because they have so much control, and such intimate knowledge of how their applications work. When defining a threat model, you have to account for a member of the development or DBA staff going rogue, ignoring all policies that get in their way, and inserting backdoors or otherwise opening a door that completely defeats the security of the application.

Host / Service Provider

When a host or service provider becomes malicious, it can be difficult to impossible to maintain security. As fewer and fewer applications are hosted within corporate walls, it’s important to understand that there is now an additional team of system administrators, networking and IT support that are suddenly involved. They could modify network traffic, add, remove, or replace hardware, clone drives, etc. – this realization has held many organizations back from moving to the cloud, as the fear of breaches and compliance nightmares is too great.

Another issue often overlooked is that of other users of the same host – especially when using virtual servers; from side channels that allow encryption keys to be stolen, to breaking out to the hypervisor to attack other servers on the same physical hardware.

For high security applications, building an application that can withstand an assault by its hosting provider is a sizeable challenge, though a challenge that must at least be acknowledged.

Local & Upstream Network Providers

If the local or upstream network providers are used to attack, in one way or another, maintaining security can come with a surprising number of challenges. The key to defeating QUANTUM INSERT is as simple as using HTTPS – but there are others that are more challenging.

I recently reviewed an application that verified that the packages it installed were authentic by checking PGP signatures – unfortunately the PGP public key wasn’t securely delivered. Installing that software when under an active network attack would have allowed the attacker to run their own code as root.
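The fix is to pin the key (or its fingerprint) in something that is itself delivered securely, rather than fetching the key over the same untrusted channel as the packages. A sketch of the idea – the key material and flow here are hypothetical, not the reviewed application’s:

```python
import hashlib

# Pinned fingerprint of the expected signing key, shipped with the
# installer itself rather than fetched over the network.
# (Hypothetical value -- derived here from a placeholder key blob.)
PINNED_SHA256 = hashlib.sha256(
    b"-----BEGIN PGP PUBLIC KEY BLOCK-----..."
).hexdigest()

def key_is_trusted(fetched_key: bytes) -> bool:
    """Compare a fetched key against the pinned fingerprint.

    If the key arrives over plain HTTP, an on-path attacker can swap
    both the key and the package signatures; pinning the fingerprint
    breaks that substitution.
    """
    return hashlib.sha256(fetched_key).hexdigest() == PINNED_SHA256
```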

Building A Threat Model

A threat model doesn’t have to be some compliance laden document that means more to auditors than developers and security engineers, it can be a simple listing of threat actors and what the defense against them is – or if a given actor isn’t deemed to be a threat, document that. Having a very simple list-based document can provide guidance to the development team, to those classifying security reports, and to those submitting those reports.

As is true with most things in security, simplicity is key.

A Sample

To get you started, I’ve defined a sample, so you can see how easy this can be to set up for your application (this will also save you a bit of time when setting up a bug bounty or pentest). This is an extremely simple, but useful model – it provides critical information and insight.

Application Name: Summus
Application Type: Web
Public Facing: Yes
Language / Platform: C# / ASP.NET MVC
URL: https://www.example.com/summus
TLS In Use: Yes

Application Description:

Web-based data capture application with four user roles (user, manager, QA, admin); users are able to read scripting and enter data from phone calls and back office transactions. Managers are able to review captured data, flag transactions for QA review, edit data, and approve transactions. QA is able to review captured data, approve transactions, mark transactions as reviewed, flag transactions for escalation, and assign a quality score. Administrators are able to set up new campaigns (scripting, data capture, etc.), update scripting, update data capture, and change settings (approx. 200 options – some global, some campaign-specific).

Application is written in C#, and uses the ASP.NET MVC framework; data is stored in SQL Server, and data access is performed via LINQ to SQL. Application is hosted on a cluster of eight IIS 8.5 servers. Application is hosted behind a CDN with a Web Application Firewall. Data capture fields may be flagged as sensitive, and encrypted using AES-256-GCM when stored in the database.

Attackers:

Note: Violations of any of the assumptions listed below, or defects in the processes used are considered to be a vulnerability.

Passive – Local – User: TLS is required; both via redirect from HTTP to HTTPS, and via the Strict-Transport-Security header. Due to these mitigations, it is believed that an attacker monitoring a user’s traffic is not able to gain useful information.

Passive – Local – Server: The server is hosted in a secure, PCI-compliant facility. Firewall rules and switch configurations are audited on a quarterly basis; traffic flows are monitored in real-time by Network Administration; switch ports are set to connect to a single MAC only; unused switch ports are disabled; network access & event logs are monitored in real-time by Network Administration.

Passive – Upstream: Due to the use of TLS, it is believed that an upstream provider is unable to access useful information.

Passive – Global: Due to the use of TLS, it is believed that a global passive attacker is unable to access useful information.

Active – Malicious User: The application takes advantage of Cross-Site Scripting and Cross-Site Request Forgery protections provided by the ASP.NET MVC framework; data access is performed via an ORM to eliminate SQL Injection attacks; each page includes permission checks to prevent vertical privilege escalation; a Campaign ID and User ID are stored server-side as part of the session to prevent horizontal privilege escalation.

Active – Malicious Device: As there is no ability to remotely detect if a user’s device has been compromised, this is considered out of scope.

Active – Advanced: In addition to the protections mentioned in the Active – Malicious User section, the application is scanned on a regular basis by commercial scanning tools, and a quarterly pentest is conducted to identify vulnerabilities.

Active – IT / SysAdmin: This scenario is considered to be out of scope for this document. IT controls are documented in a separate document.

Active – Developer / DBA: All code changes are required to pass through a peer code review prior to being merged into the master branch of source control; developers do not have access to production servers; static code analysis and vulnerability scan is performed prior to deployments; deployments are performed by System Administrators after approval from the Change Control group; database server access & event logs are monitored in real-time by System Administrators.

Active – Host: Due to other controls, this scenario is considered to be out of scope.

Active – Network Providers: Due to the use of TLS, it is believed that an upstream provider is unable to access useful information, or perform meaningful alteration to traffic.

In closing…

I hope that this will help you define not only a useful threat model for your application, but also come to a better understanding of what threats your application faces and how to address them.

When Hashing isn’t Hashing


Anyone working in application security has found themselves saying something like this a thousand times: “always hash passwords with a secure password hashing function.” I’ve said this phrase at nearly all of the developer events I’ve spoken at, it’s become a mantra of sorts for many of us that try to improve the security of applications. We tell developers to hash passwords, then we have to qualify it to explain that it isn’t normal hashing.

In practically every use-case, hashing should be as fast as possible – and cryptographic hashing functions are designed with this in mind. The primary exception to this is password hashing: instead of using one of the standard hashing functions such as SHA2 or SHA3, a key derivation function (KDF) is recommended, as they are slower, and the performance (or cost) can be tuned based on the required security level and acceptable system impact. The recommended list of KDFs looks something like this:

  • Argon2
  • scrypt
  • bcrypt
  • PBKDF2

When the security community is telling developers to hash passwords, what we really mean is to apply a key derivation function – with appropriate cost values. What this means is, when we use the term ‘hash’ with developers it could mean two very different things depending on the context, and they may well not be aware of that. With too many developers not understanding what hashing even is, relying on them to understand that the meaning changes depending on context is just setting them up for failure.
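What “apply a key derivation function with an appropriate cost” looks like in practice – a sketch using PBKDF2 from the Python standard library. The iteration count is an illustrative starting point, not a recommendation; tune it to your hardware and latency budget, and prefer Argon2 or scrypt where available:

```python
import hashlib
import hmac
import os

def pash(password: str, *, iterations: int = 600_000) -> tuple:
    """Derive a password hash ('pash') with PBKDF2-HMAC-SHA256.

    The iteration count is the tunable cost factor that separates this
    from a plain hash; returns the (salt, digest) pair to store.
    """
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password: str, salt: bytes, expected: bytes,
           *, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison, to avoid leaking where digests differ.
    return hmac.compare_digest(candidate, expected)
```

The same shape applies to the other KDFs in the list above; only the cost parameters (memory, parallelism, work factor) change.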

Encrypting Passwords

All too often, we hear discussions of encrypting passwords, and this often comes in one of two forms:

  • In a breach notification, it’s quite common to see a vendor say that passwords were encrypted, but when pressed for details, they reveal that they were actually hashed. This has often led to a great deal of criticism within the infosec echo chamber, though I’ve long felt that the term ‘encrypt’ was used intentionally, even though it’s incorrect. This is because the general public understands (generally) that ‘encrypt’ means that the data is protected – they have no idea what hashed means. I see this as a situation where public relations takes priority over technical accuracy – and to be honest, I can’t entirely disagree with that decision.
  • Those that know that cryptographic protection is / should be applied to passwords, but aren’t familiar with the techniques or terminology of cryptography. In these cases, it’s a lack of education – for those of us that work with cryptography on a daily basis, it’s easy to forget that we operate in a very complex arena that few others understand to any degree. Educating developers is critical, and there are many people putting a heroic level of effort into teaching anyone that will listen.

I point this out because anytime encrypting passwords is mentioned, the reaction is too frequently vitriolic instead of trying to understand why the term was used, or offering to educate those that are trying to do what’s right, but don’t know better.

Is hashing passwords wrong?

Obviously passwords should be processed through a key derivation function, but is it wrong to tell developers to hash passwords? By using a term that is context dependent, are we adding unnecessary confusion? Is there a better way to describe what we mean?

In 2013, Marsh Ray suggested the term ‘PASH’ – and that is one of many suggestions that have come up over the years to better differentiate hashing and password hashing. The movement to define a more meaningful term has been restarted by Scott Arciszewski, quite possibly the hardest working person in application security now; he has been leading a war on insecure advice on Stack Overflow, giving him a great insight into the cost of poor terminology.

If the security community switched to a term such as ‘pash’ to describe applying a KDF with an appropriate cost to a password, it would greatly simplify communication, and set clear expectations. As password hashing is a completely different operation from what hashing means in almost every other instance, it makes sense to call it something different.

To advance the state of application security, it’s important to ensure that developers understand what we mean – clear communication is critical. This is a topic that isn’t clear to developers, and thus requires somebody to explain what hashing means in the context of passwords. Countless hours are invested in explaining how passwords should be handled, and that normal hashing and password hashing are different – this could be simplified with a single descriptive term.

Path forward

The challenge with this is coming to a consensus that a change is needed, and what term should be used. Obviously, there is no governing body for the community – a term is used, or not used. Personally, I feel a change is indeed needed, and I would back the use of ‘pash’ as suggested by Marsh Ray. I believe it’s a reasonably descriptive term, and is distinctive enough to clarify that it is different from normal hashing.

I would like to see broad discussion in the community on this topic, and hopefully a broad enough consensus is reached that the term can be well defined and used broadly. We need to do a better job of instructing developers, and clear terminology is a critical part of that.

Seamless Phishing

Phishing attacks are a fact of life, especially for users of the largest sites – Facebook being the most common I’m seeing today. Pretty much everybody, from the SEC to antivirus companies, has published guides on what users should do to avoid phishing – so I picked one at random and pulled out the key points:

  • 1). Always check the link, which you are going to open. If it has some spelling issues, take a double-take to be sure — fraudsters can try to push on a fake page to you.
  • 2). Enter your username and password only when connection is secured. If you see the “https” prefix before the site URL, it means that everything is OK. If there is no “s” (secure) — beware.
  • 5). Sometimes emails and websites look just the same as real ones. It depends on how decently fraudsters did their “homework.” But the hyperlinks, most likely, will be incorrect — with spelling mistakes, or they can address you to a different place. You can look for these tokens to tell a reliable site from a fraud.

This is from Kaspersky – unfortunately the advice is far from great, but it follows pretty closely to what is generally advised. It’s quite common for people to be told to check the URL to make sure it’s the site they think it is, check for a padlock, and if everything looks right, it should be safe. Except of course, for when this advice isn’t nearly enough.

Seamless Integration

Facebook allows third-party applications to integrate seamlessly; this has been a key to achieving such a high level of user engagement. When accessing an application via Facebook, you end up at a URL like this:

[Image: the application’s URL, shown in the browser address bar]

As you can see, the URL is *.facebook.com – as people would expect. It uses HTTPS, and not just HTTPS, but HTTPS with an Extended Validation certificate. It passes those first critical tests that users rely on to keep them safe. Let’s take a look at the page that URL points to:

[Image: the Facebook page, framing the third-party content]

The header is from Facebook, as is the side-bar – but the rest of the page is actually an iframe from a malicious third-party. What appears at first glance to be a legitimate Facebook page, is actually a Facebook page that includes a login form that is being used for phishing.

[Image: the third-party phishing form, loaded directly]

Everything looks right, the style makes sense, there are no obvious errors, the URL is right, and there’s the padlock that everyone is taught to look for. This is a fantastic phishing attack – not at all hard to implement, and it passes all of the basic checks; this is the kind of attack that even those that are careful can fall for.

Because of just how seamless Facebook has made their integration, they have opened the door for extremely effective phishing attacks that few normal users would notice. Anytime an application allows third-parties to embed content blindly, they are doing so at the cost of security. This shows the need for increased vigilance on the part of Facebook – doing a better job of monitoring applications, and that users need to be taught that going through a simple checklist is far from adequate to prevent attacks – as is often the case, checklists don’t solve security problems.
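A user can’t see any of this from the address bar, but an application operator (or a monitoring tool) can flag pages that frame third-party content. A simplified, hypothetical sketch – a real check would need to compare registrable domains properly rather than doing naive suffix matching:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class FrameAuditor(HTMLParser):
    """Collect the hostnames of embedded iframes (simplified sketch)."""
    def __init__(self):
        super().__init__()
        self.frame_hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "iframe":
            src = dict(attrs).get("src", "")
            self.frame_hosts.append(urlparse(src).hostname or "")

def foreign_frames(page_host: str, html: str) -> list:
    """Return frame hosts that don't belong to the hosting site.

    Note: endswith() is a naive stand-in -- production logic must match
    registrable domains (e.g. 'notfacebook.com' would wrongly pass here).
    """
    auditor = FrameAuditor()
    auditor.feed(html)
    return [h for h in auditor.frame_hosts if h and not h.endswith(page_host)]
```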

Thanks to @Techhelplistcom for pointing this out.

PL/SQL Developer: HTTP to Command Execution

While looking into PL/SQL Developer – a very popular tool for working with Oracle databases – to see how it encrypts passwords, I noticed something interesting. When testing Windows applications, I make it a habit to have Fiddler running, to see if there is any interesting traffic – and in this case, there certainly was.

PL/SQL Developer has an update mechanism which retrieves a file containing information about available updates to PL/SQL Developer and other components; this file is retrieved via HTTP, meaning that an attacker in a privileged network position could modify this file.

This file is retrieved each time the application starts, and if a version listed in the file is greater than the version installed, the user will be prompted to upgrade (this is the default behavior; otherwise the user is not prompted until they select Help | Check Online Updates). They have the following options:

  • Update: If a URL is provided, the application will download a file (also over HTTP), and apply the update. If no URL is provided, the option is not presented to the user.
  • Download: Executes the URL provided, so that the user’s browser will open, and immediately download the file. This is typically an executable (*.exe); as is the case elsewhere, the file is retrieved over HTTP, and no validation is being performed.
  • Info: If a URL, it’s executed so that the user’s browser opens to the specified URL; otherwise content is displayed in a message box.

There are (at least) two issues here:

  • Redirect to malicious download; as the user is likely unaware that they shouldn’t trust the file downloaded as a result of using the Download option, an attacker could replace the URL and point to a malicious file, or simply leverage their privileged position to provide a malicious file at the legitimate URL.
  • Command Execution; when the user selects the Download option, the value in the file is effectively ShellExecute’d, without any validation – there is no requirement that it be a URL. If a command is inserted, it will be executed in the context of the user.

This means that a user who believes they are downloading an update can actually be handing full control over to an attacker – this is a case where not bothering to use HTTPS to secure traffic provides multiple methods for an attacker to gain control of the user’s PC. It’s a great example of the importance of using HTTPS for all traffic – it’s not just about privacy, it’s also critical for integrity.
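Even setting HTTPS aside, the Download value should never reach ShellExecute without validation. A sketch of the missing check – hypothetical, not the vendor’s actual fix – which rejects anything that isn’t a well-formed http(s) URL:

```python
from urllib.parse import urlparse

def is_safe_download_url(value: str) -> bool:
    """Accept only well-formed http(s) URLs with a host.

    This stops the command-execution case ("calc.exe" has no scheme or
    host, so it's rejected before reaching ShellExecute). It does not
    address tampering with the file itself -- only HTTPS plus signature
    verification does that.
    """
    parsed = urlparse(value)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```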

The tested version of PL/SQL Developer was 11.0.4, though the issue likely well predates that version. The vendor reports that this issue has been addressed by enforcing HTTPS on their website, and application changes made in version 11.0.6. It is recommended that all users update to the latest version.

Vulnerability Note: VU#229047
CVE: CVE-2016-2346

Technical Details

The update file is retrieved from http://www.allroundautomations.com/update/pls.updates via a simple HTTP GET issued by the application.

The response is an INI-like file; the Download value is the item we care about most here.

By changing the returned file, replacing this line:

Download=http://files.allroundautomations.com/plsqldev1104.exe

With this:

Download=calc.exe

When the user selects the Download option, calc.exe will be executed.

An example pls.updates file that demonstrates this flaw makes three key changes: increasing the Version, so that the user will see it as an update; clearing the Update value, so the only option is Download; and setting Download to the command that you wish to be executed.
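A hypothetical reconstruction of such a file – the Version, Update, and Download keys follow the behavior described above, but the real file’s exact layout isn’t reproduced here:

```ini
; Hypothetical pls.updates served by an on-path attacker.
Version=99.0
; Update left empty so the only option presented is Download.
Update=
; Executed via ShellExecute when the user clicks Download.
Download=calc.exe
```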

Special Thanks

Thanks to Garret Wassermann of CERT/CC for his assistance and Allround Automations for addressing the issue.

Crypto Crisis: Fear over Freedom

Yesterday, President Obama spoke at SXSW on topics including the oft-discussed fight between Apple and the FBI – what he called for, while more thoughtful than some of the other comments that we have been hearing from Washington, was still tragically misinformed. He repeated the call for a compromise, and by compromise, he meant backdoors.

Here, I feel I must paraphrase one of my favorite authors to properly express the magnitude of what’s being discussed here:

Tell me, ‘friend’, when did the United States abandon reason for madness?!

Cryptography is critical in every aspect of modern life – from shopping to protecting national secrets, from medical devices to the phones that diplomats use, from your home router to the infrastructure that powers global communication. Cryptography is ubiquitous and essential to keep everyone from foreign powers to bored teenagers from wreaking unimaginable havoc. And world leaders are proposing that we replace real security with a TSA-style show that looks secure, but isn’t actually effective (beyond providing a false sense of security).

Mr. President

In one simple statement, he made his position perfectly clear:

[T]here has to be some concession to the need to be able get into that information somehow.

This is, quite honestly, a binary issue: a backdoor is present, or it isn’t. There’s no partial backdoor, there is no technology that only allows access to the backdoor if there’s a court order, and there’s no technology to ensure that the backdoor isn’t abused. You have a backdoor, or you don’t. That simple.

He did acknowledge some of the issues here:

So we’re concerned about privacy. We don’t want government to be looking through everybody’s phones willy-nilly, without any kind of oversight or probable cause or a clear sense that it’s targeted who might be a wrongdoer.

What makes it even more complicated is that we also want really strong encryption. Because part of us preventing terrorism or preventing people from disrupting the financial system or our air traffic control system or a whole other set of systems that are increasingly digitalized is that hackers, state or non-state, can just get in there and mess them up.

It’s good that he understands that strong cryptography is critical, but that doesn’t stop him from saying that backdoors must be added. Like so many that aren’t familiar with how these technologies actually work, he is hoping that some new value between True and False will be found – that you can somehow have a backdoor, but control it. Unfortunately for him, or perhaps for everyone if he gets his way, there is no ItDepends value sitting between those two.

There is some sign that he has heard the reality of the situation, and states it fairly clearly:

Now, what folks who are on the encryption side will argue, is that any key, whatsoever, even if it starts off as just being directed at one device, could end up being used on every device. That’s just the nature of these systems. That is a technical question. I am not a software engineer. It is, I think, technically true […]

This should have been the end of the discussion, if you add a backdoor, it can be abused. But it wasn’t. He acknowledges that the kind of magical backdoor that the government wants isn’t possible, and then goes on to repeat that there has to be compromise, there has to be a way for the government to access data, there has to be backdoors:

My conclusions so far is that you cannot take an absolutist view on this. So if your argument is “strong encryption no matter what, and we can and should in fact create black boxes,” that, I think, does not strike the kind of balance that we have lived with for 200, 300 years. And it’s fetishizing our phones above every other value. And that can’t be the right answer.

Looking forward…

Let us assume for a moment that the US Government gets what it wants, what does that mean, how does that impact the US and the rest of the world?

We are being watched.

From the beginning of the case, officials from other governments have chimed in to support the FBI – it’s clear that governments around the globe are waiting to see what happens here. Apple has offices in several countries; it is not only possible, but likely, that those governments would serve Apple with sealed orders to provide them with access to the backdoor, for their own use.

Based on the same decision, Microsoft could be forced to add a backdoor to BitLocker, to allow government access to encrypted desktops and laptops. If you want to actually encrypt your device, there’s always VeraCrypt (they are based in France, so maybe not). This also raises serious questions around things like LUKS – could US-based developers even be allowed to contribute to it?

Economic impact.

If backdoors are mandated, it would become impossible to recommend any product made by a company with offices in the US – to do so would be unethical, as the security would be known to be compromised. For any organization that is interested in the security of their systems, the logical option would be to look for solutions in other parts of the world, avoiding anything coming from the US. This leads to a very unfortunate outcome – to remain competitive globally, it would be in the best interest of US-based technology companies to move their offices out of the country.

Unknown threats.

There aren’t many people who are able to build effective backdoors; the crypto community is fairly small, and only a small percentage of that group is capable of building a backdoor that wouldn’t be an immediate disaster (though still likely a disaster in the long-term). This leads to two possible outcomes:

  • Backdoors are built by people who don’t know what they are doing, and open systems immediately to attackers.
  • Backdoors are contracted out to a very small number of consulting firms, making them a huge target for attacks.

Either way, what you have is a situation where you, as a consumer, or a corporate buyer, a consultant, etc. have no idea about any of these:

  • How well was the backdoor designed? Is it only obscurity that protects it? Will it be broken once reviewed by the crypto community?
  • How is access to the backdoor restricted?
  • How many people have access? The developers could have maintained copies, an employee could have walked out with a copy before being fired, an attacker could have targeted the developers to steal a copy – this goes on and on.
  • How many organizations have access? If a consultant was brought in to develop the backdoor, did they keep a copy?
  • How many governments have access? The reasonable assumption would have to be that every country that the company has offices in, has requested a copy.

I suspect that the answer is going to come down to how do we create a system where the encryption is as strong as possible. The key is as secure as possible. It is accessible by the smallest number of people possible for a subset of issues that we agree are important.

Secure as possible, except against the unknown list of people and various governments that have access to the backdoor. That isn’t security, and isn’t in the long-term interest of anyone.