In the world of security, everything comes down to trust; sooner or later you have to trust something, and often that something is a human. We may be busy building advanced cryptosystems that will survive the heat death of the universe, but dig down layer by layer and eventually you reach a human and their limited memory. We can build software, hardware, and other systems to protect this chain of trust, yet it almost always ends with a human.
I was part of a standards body meeting discussing the security controls of a system, and someone complained that without a certain control, the system would be little more than a shell game, where the ultimate source of trust is covered in layers and moved around but never truly secured. They were, of course, correct, though not because of the missing control, but because the chain of trust ended with a human’s memory.
Any security architecture that involves a human can ultimately be reduced to a shell game, with its strength limited to what a human can readily remember.
It’s not that difficult to design systems that are robust, provable, and provide substantial protection. It’s extremely difficult to design such a system if a human is involved at any point. Human memory is limited, fallible, and degrades over time; thus, if a system needs to be accessed by a human, its security will be reduced to what a human can readily remember. Be it a password, a PIN, or any other secret committed to memory, this will always be the weakest link. A password that one can remember will, without doubt, be of lower entropy and more easily guessed than any other secret used in the system.
When designing a new system, it’s common for there to be many layers, each with its own security properties and trust requirements, and each assuming that the layer before it is just as secure. Yet these robust defences, strong proofs, large keys, and detailed trust relationships all end up at the same thing: a secret a human needs to remember. For example, you can build an authentication system that provides strong proof of identity, great phishing resistance, and robust cryptography, and in the end it’s protected by a password like Winter2024!. Everything is great, until a human enters the picture.
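To make that entropy gap concrete, here’s a minimal sketch (Python, with purely illustrative numbers I’ve assumed) comparing a memorable word-plus-year-plus-symbol password against a randomly generated 128-bit key, at an assumed offline guessing rate:

```python
import math

def entropy_bits(guess_space: int) -> float:
    """Bits of entropy for a secret drawn uniformly from `guess_space` possibilities."""
    return math.log2(guess_space)

# Assumed model of a memorable password like "Winter2024!":
# a common word, a plausible year, and a trailing symbol.
memorable_space = 20_000 * 100 * 32   # ~20k words x ~100 years x ~32 symbols
machine_space = 2 ** 128              # a randomly generated 128-bit key

guesses_per_second = 1e10             # assumed offline guessing rate

for label, space in [("memorable password", memorable_space),
                     ("random 128-bit key", machine_space)]:
    seconds = space / guesses_per_second
    print(f"{label}: ~{entropy_bits(space):.0f} bits, "
          f"search space exhausted in ~{seconds:.1e} s")
```

Roughly 26 bits against 128: the memorable password falls in a fraction of a second, while the random key won’t be exhausted before the aforementioned heat death. The exact figures don’t matter; the gap does.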
Now, you may ask yourself, why not eliminate the human with hardware? This is why we have hardware security keys, right? In theory, this is the perfect solution, as it eliminates a human’s memory from the chain and allows every secret in a system to be truly random. There’s a problem, though: people don’t have perfect memory and tend to lose things, including hardware.
Once you introduce hardware[^1] as the final link in the chain of trust, you face a challenge: what happens when that hardware is lost or broken? There are generally two options here:
- Say ‘too bad’ and accept that whatever was behind this system you’ve designed is lost.
- Implement a recovery mechanism, which adds new links to the chain. Links that end with a human[^2].
Option 1 doesn’t sit well with users (and can have devastating effects), and option 2 doesn’t avoid the problem of relying on a human’s poor memory; it just moves the problem to another part of the system. This is often the case with security: the problem just gets moved, because there isn’t a good solution.
This is the point where many otherwise incredibly secure systems show their weaknesses. You can use a secure authentication mechanism like passkeys, with strong phishing resistance and keys that are randomly generated so they’re impossible to guess. Yet the security is reduced to the password of the service you use to sync them (such as an Apple account or a password manager), and thus to the security of the mechanism for resetting that password.
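The same reduction can be written as a toy model: the effective strength of the whole chain is its weakest link. The figures below are assumptions chosen for illustration, not measurements of any real service:

```python
# Toy model: the effective strength of a chain of trust is its weakest link.
# All figures are assumptions for illustration, not measurements.
chain = {
    "passkey (random key pair)": 128,          # bits; effectively unguessable
    "sync-account password": 30,               # a human-memorable password
    "password-reset flow (email access)": 25,  # guessability of the recovery path
}

weakest = min(chain, key=chain.get)
print(f"Effective strength: ~{chain[weakest]} bits, set by the {weakest}")
```

However strong the passkey itself is, the effective strength is set by whichever reset or recovery path an attacker can reach.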
Many years ago, when I was becoming more involved in the crypto[^3] community, one of my favourite questions to ask was how key management should be implemented. What’s the right way to protect that final secret? The answer was always the same: “that’s a good question”, followed by quickly changing the topic. The reason is that while it’s easy to hide the ultimate root of trust, add layers, move it around, and build complex systems that would protect against even the most advanced attackers, handling that ultimate key is a very hard problem.
A lot of progress has been made to limit the impact of relying on human memory. Passkeys and password managers allow far stronger secrets to be used, and bring real improvements against common attacks such as phishing. However, the same underlying issue still exists.
The fact is that this is an effectively unsolved problem; the solutions that could address it add far too much complexity for end users to rely on, and no amount of additional layers or moving the key around will fix it.
Key management has always been the greatest challenge for protecting secrets, and it will continue to be for the foreseeable future. As long as the chain ends with a human, the security of the system will be reduced to what that human can remember.
This isn’t to say that we should stop finding ways to better secure users, data, and the secrets that are critical to protecting them. Incredible progress has been made, and is still being made. Data is better protected today than it’s ever been. That said, that protection is often weaker than we would all like it to be.
[^1]: This applies equally to hardware surrogates, such as password managers that play the same role a hardware key would, just with a longer chain of trust involved.
[^2]: This is effectively true, though not strictly true. It’s entirely possible to build recovery systems that rely on additional hardware tokens, but due to the complexity of implementing and executing recovery, such mechanisms are far from common. For consumers and the vast majority of enterprise systems, they effectively don’t exist; only systems that need the highest security and can justify the additional cost and complication consider designs like this. As this post is focused on the common systems most people are exposed to, any design that’s impractical for broad use won’t be considered.
[^3]: Crypto means cryptography. Yes, that’s a hill I’ll die on.