The Manifesto

As a child, all of my time was spent reading – at the age of 8 or 9 I was staying up all night reading the likes of Dickens and Verne; at 11 or 12, I was tearing through encyclopedias, medical texts, and anything else I could get my hands on. I had a love for learning, for understanding, a desire to know everything, and an insatiable curiosity that often led me in interesting directions (in that ancient curse “may you have an interesting life” kind of way). Then one day my father came home with a large box – and my world was changed forever.

I don’t remember the year – but I remember well the feeling of awe every time I heard my 2400 baud modem negotiating a connection, linking me to another world, a better world. I soon started finding other people like me, who thought the same way I did – we were all alike, too smart for our own good. One day I stumbled upon a short essay – and though I had read many of the greatest books ever written, none of them resonated with me in the same way this short essay did:

The work goes by many names:

  • The Hacker’s Manifesto
  • The Conscience of a Hacker
  • The Mentor’s Manifesto
  • The Mentor’s Last Words

By whatever name you want to call it, it speaks to a generation that found something amazing – a world of peers, a world where information was shared freely and not hoarded for power or money, a world where your only limitation was your own mind. It was a brotherhood [2] – and yet I had never actually met any of them (and years later, I still have met very few).

Perhaps it’s a side effect of being raised in a Pentecostal church, but I can’t help but read that text with a certain religious fervor – like a preacher at a digital pulpit calling to his brothers and sisters to stand. Instead of fire and brimstone, he spoke of acceptance and understanding; instead of pleading for forgiveness, he taught to seek knowledge and truth.

I learned many things from this brotherhood – I learned about people and cultures, I learned that your mind is the only asset you have that really matters. I learned some less philosophical things as well – I learned to explore computer systems, to break their security, and to find what really makes them tick. I learned to code so that I could explore faster, and I learned to secure systems to protect them from the rogues that didn’t accept the ethics of the time.

There were clear lines, and strong ethics – while there were rogues, anarchists who loved to destroy, and crackers (as they were called) just out for a buck, most of us followed the rules. You didn’t do anything malicious – never damage or destroy anything, and you left things more secure than you found them. It was all about learning, all about exploring – it was a quest for knowledge.

I learned much in that quest; if not for that brotherhood, it’s unlikely that I would have learned so much about software development or systems security, not to mention made a career out of it.

Today, things are different. That generation has moved on, mostly to the corporate world – driven by another need: food. Groups like Anonymous claim to stand for the common good – but no good comes from destructive techniques (especially when executed by those that don’t understand what they are doing). The brotherhood that accepted everybody without question is no more – now we have governments playing along, cracking groups drawn along religious and racial lines, and anarchists that destroy much in the name of doing a little good (the legacy of groups like LulzSec, for example).

It’s a different world.

[2] – This is a good time to point out that hackers and crackers are very different – please look at the word “hacker” in its original context.

This post was originally written in 2012, but never published. The reference to LulzSec, for example, does show its age, but the point is as valid as ever.

Responsible Disclosure Is Wrong

The debate around how, where, and when to disclose a vulnerability – and of course to whom – is nearly as old as the industry that spawned the vulnerabilities. This debate will likely continue as long as humans are writing software. Unfortunately, the debate is hampered by poor terminology.

Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details.

This is how Wikipedia defines responsible disclosure — but the person who coined the term has stepped away from it, advocating “coordinated disclosure” instead. To me, this makes perfect sense – if you go by the Wikipedia definition, coordinated disclosure is a better fit; it closely matches how the term is being used by vendors. The problem is that coordinated disclosure isn’t necessarily responsible — and full disclosure isn’t necessarily irresponsible.

The term is a bit of a misnomer really — as researchers our responsibility is to users, though often the term is taken to mean a responsibility to vendors. This is the biggest issue I have with the term: it focuses attention on the wrong group in a disclosure. As a security researcher my responsibility is to make the world a safer place, to protect the users, not to protect the name of a vendor.

Based on this, I would say that responsible disclosure is wrong – or more accurately, how it’s been defined is wrong. As defined, what we get from the term is a one-sided view on the disclosure process and its goals. I like the term, but the definition doesn’t suit the reality of vulnerability disclosure.

Sensible Disclosure

Perhaps we need a better term to describe real world disclosures – full disclosure and coordinated disclosure are both potential outcomes of the sensible disclosure process. Every step in the decision process needs to factor in what’s best for users.

Let’s look at what a truly responsible disclosure decision process needs to include – decisions that are ignored by the simplistic definition used by responsible disclosure today. This is far from complete of course; these are just high-level questions that need consideration, and there are situation-specific considerations that can have a dramatic impact on what the right decision is.

Can users take action to protect themselves? If you release details publicly, are there concrete steps that individual users and companies can take to protect themselves?

Is it being actively exploited? If a vulnerability is being actively exploited, the focus has to shift to minimizing damage – this can change the value of other factors drastically.

Is the issue found with minimal effort? Is the vulnerability something difficult, or something that anyone would notice if they looked in the right place? If it’s something that anyone would notice, how likely is it that others have already found it, and are using it maliciously? Even if you don’t have evidence that something is being exploited in the wild, it’s still quite possible, and this needs to be considered.

Is the issue something that can be corrected without major effort? Some issues are simple — a few lines of code and it’s gone, others are difficult to impossible to fix.

If patched today, how are users impacted? With some flaws, apply a patch to code and you are done — with others, there is still clean up to do. For example, errors in cryptography can mean that messages aren’t protected as they should be; this means that every message encrypted expands the issue and increases risk that a patch alone doesn’t address. There is also the related issue of backwards compatibility — breaking systems by fixing the flaw, or requiring substantial cleanup (think re-encrypting large amounts of data).

Is the vendor responsive? Vendors have a responsibility to respond quickly, and to take action quickly to address reported vulnerabilities. Are they responding at all? Are they trying to address the issue, or just keep the issue away from the press for as long as possible? If the vendor isn’t acting, is more pressure needed to get the issue resolved? Another important consideration when evaluating vendor response — remember, they are likely addressing other issues as well, which may be more severe; as such, something that you think is critical may deserve less attention than other bugs that you aren’t aware of.

How severe is the issue? Is this an earth-shattering vulnerability that those affected would take drastic actions if they were aware, or is this something minor that is an issue, but not quite deserving of panic?

Sensible Disclosure should include evaluating all of these, then proceeding to coordinated disclosure or full disclosure, or some hybrid — based on what’s best for those impacted. This is full of nuance, full of subtleties — it’s not the black and white “tell us, only us, and we’ll tell you when you can say something” policy that vendors like, but it provides a path to acting in the interest of users first.
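
Purely as an illustration (and not an attempt to reduce that nuance to a formula), the factors above might be captured roughly like this in Python; every field name and decision rule here is hypothetical, and in practice each factor carries judgment that no checklist captures:

    # Hypothetical sketch of the factors discussed above. The fields and the
    # decision rules are illustrative only -- real disclosure decisions are
    # subjective and depend heavily on context.
    from dataclasses import dataclass

    @dataclass
    class DisclosureFactors:
        users_can_mitigate: bool   # can users protect themselves if details go public?
        actively_exploited: bool   # is the flaw already being used in the wild?
        trivial_to_find: bool      # would anyone looking in the right place spot it?
        hard_to_fix: bool          # a few lines of code, or a redesign?
        vendor_responsive: bool    # is the vendor engaging and acting?
        high_severity: bool        # would users take drastic action if they knew?

    def suggested_path(f: DisclosureFactors) -> str:
        """Lean toward whichever path best protects users."""
        if f.actively_exploited and f.users_can_mitigate:
            return "full disclosure"         # users need the information now
        if not f.vendor_responsive and f.high_severity:
            return "full disclosure"         # public pressure may be the only lever left
        return "coordinated disclosure"      # default: give the fix time to land

    example = DisclosureFactors(
        users_can_mitigate=True, actively_exploited=True, trivial_to_find=True,
        hard_to_fix=False, vendor_responsive=False, high_severity=True,
    )
    print(suggested_path(example))           # -> "full disclosure"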

Users First, Always

There is nothing wrong with coordinated disclosure — this should be the goal: quick vendor response, protecting users as quickly as possible with minimal or no malicious use of a flaw. Generally speaking, contacting the vendor should be the first step, and hopefully they act quickly and the rest of the process is then easy; sometimes though they don’t, and sometimes full disclosure is the only option to get them to act. Sometimes the delay of working with the vendor would put people at risk.

For a security researcher, in general, full disclosure should be the last resort, pulled out when working with the vendor has failed. There are some cases where getting the word out quickly is more important though — it depends on the vulnerability, and the impact to those affected.

Each and every decision made in a disclosure process should be focused on the users, and what protects them best — some vulnerabilities require so much research, and are so difficult to exploit, that taking a year to secretly fix them is fine. For others, every day that goes by moves users closer to disaster; most are somewhere in between.

There is no one-size-fits-all solution for vulnerability disclosure. The simplistic “responsible disclosure” doesn’t address the subtleties that are actually involved. The term Sensible Disclosure may be closer to reality, though I don’t like it as much.

Be responsible, protect users — practice sensible disclosure.

Crypto Front Door: Everyone Welcome!

For decades, the US Government has fought — sometimes with itself — to prevent the use of secure cryptography. During the first crypto war, they allowed strong cryptography within the US, but other countries were limited to small keys — making brute force attacks practical. But what about those pesky US citizens? They didn’t really want them to have strong crypto either — enter key escrow.
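
For a rough sense of why those export-grade keys were so weak (40 bits being the usual export ceiling of the era; the attacker speed below is only an assumed round number), the arithmetic is simple:

    # Back-of-the-envelope math on export-grade (40-bit) keys vs. modern keys.
    # The guess rate is an assumed round number, purely for illustration.
    keyspace_40bit  = 2 ** 40      # ~1.1 trillion possible keys
    keyspace_128bit = 2 ** 128     # modern symmetric key size
    guesses_per_second = 10 ** 9   # assumed attacker speed

    worst_case_40 = keyspace_40bit / guesses_per_second
    print(f"40-bit key, worst case: about {worst_case_40 / 60:.0f} minutes")
    print(f"128-bit key, worst case: about {keyspace_128bit / guesses_per_second:.1e} seconds")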

What is key escrow?

According to Wikipedia:

Key escrow is an arrangement in which the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party may gain access to those keys. These third parties may include businesses, who may want access to employees’ private communications, or governments, who may wish to be able to view the contents of encrypted communications.

Essentially, it’s a system where someone other than the owner of the data gets a copy of the key, one that can be used to decrypt the data without the owner’s permission.

If the organization holding the escrow keys can surreptitiously access the data — such as by retrieving it from a service provider, or capturing it from a public network — the data can be decrypted in complete secrecy. The owner of the key need not have any idea that it’s happened, or who did it. This property raises many privacy concerns — it takes a technical control and turns it into a policy matter.

In general, if there’s nothing other than a piece of paper protecting your data, it isn’t protected at all. In the case of government controlled key escrow, that’s exactly the situation. If an employee violates policy and accesses escrow keys without authorization, they have the potential to secretly access data — and it’s quite possible it would never be known by anyone. Every environment where controls are purely policy based instead of technical — from the NSA to the Certificate Authority system — has seen those controls bypassed for various purposes.

The most famous, or perhaps infamous, use of key escrow was the fundamentally flawed Clipper chip — an attempt by the US Government to allow encrypted calls, while allowing them easy access. Thankfully, this effort died, mostly taking key escrow with it — at least for a few years.

A rose by any other name…

The Director of the NSA, Michael Rogers, has repeatedly called for the introduction of a “front door” — which is a backdoor with a policy that says they have paperwork to do before they can use the keys they hold. The people of America, and the world, are to trust the secret rubber stamp court and the policies of various agencies with our most sensitive information — hoping they follow their own rules (which are probably also secret).

While Rogers has done his best to convince people that there’s a difference, the fact is no matter what you call a backdoor — it’s still a backdoor. It’s still a way for someone who isn’t the owner of the data to decrypt it, without their approval and quite possibly without their knowledge.

That looks just like the definition of the word backdoor:

a feature or defect of a computer system that allows surreptitious unauthorized access to data.

Massively Increased Complexity

The privacy concerns and potential for abuse aren’t the only cost of key escrow — the complexity of the system is greatly increased, and as with any system, when complexity goes up, so does the risk of vulnerabilities. The risks are so clear that even the US Government can’t agree on the plan — NIST has already spoken out against it.

Rogers said: “I want a front door. And I want the front door to have multiple locks.” — what he’s really talking about is key escrow with key splitting. In other words, taking a key and splitting it into two or more pieces, with the pieces held by separate entities. The data could only be decrypted when one entity shared its part of the key with the other (intentionally or otherwise).
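
As a minimal sketch of what splitting a key into pieces can look like, here is XOR-based two-party splitting in Python — one common construction, shown purely for illustration rather than as the specific scheme Rogers had in mind:

    # Minimal sketch of two-party key splitting using XOR secret sharing.
    # Neither share alone reveals anything about the key; combining both
    # recovers it exactly. Illustrative only, not the actual proposal.
    import secrets

    def split_key(key: bytes) -> tuple[bytes, bytes]:
        share_a = secrets.token_bytes(len(key))               # random pad
        share_b = bytes(k ^ a for k, a in zip(key, share_a))  # key XOR pad
        return share_a, share_b

    def recover_key(share_a: bytes, share_b: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(share_a, share_b))

    key = secrets.token_bytes(32)        # e.g. an AES-256 key
    a, b = split_key(key)
    assert recover_key(a, b) == key      # both parts are required to recover it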

This adds complexity in a few ways:

  • Key Splitting — The splitting process has to be done properly to ensure that neither entity would be able to recover the full key without the cooperation of the other.
  • Key Registration — When the encryption key is generated, it has to be split, and registered with the entities that will hold the parts. There are risks of data being lost in a number of ways here, and this registry would be a natural target for attack.
  • Key Part Transport & Storage — Being a natural target for attack, the key parts need to be carefully handled throughout their lifetime.
  • Key Recovery — Certain paperwork would be required to access the key parts; this would be validated by a human. Should that human ignore policy, or not verify the paperwork, keys could be accessed without the proper authorization.
  • Key Destruction — Once authorization expires, or the need for access has ended, the combined key and any copies of the key parts outside of the registry should be destroyed. If not properly handled, it would provide another avenue for unauthorized access.

Of course this is just a high level view, and there’s more — all of these additions, unnecessary today, add risks of flaws (accidental or intentional) that could expose data.

Opening the floodgates…

One of the most astounding things to me about this proposal is the level of arrogance it demonstrates. The NSA (and similar organizations) has a strong preference for NOBUS (nobody but us) backdoors, and they are making the daring assumption that this will be a NOBUS backdoor.

For example, if the US Government receives a copy of the encryption keys for iPhones (devices made in China), do they really believe that the Chinese Government won’t take advantage of the situation to get a copy of the keys as well? The US Government has set a clear precedent of using secret court orders to retrieve data — what makes them think that China would hesitate to order Foxconn’s facility in Shenzhen to provide it with a copy of all escrow key parts?

For companies with offices outside of the US, or that deal with manufacturers outside of the US, there is real risk of them being targeted to get the keys before being delivered to the government. This is the same technique that GCHQ used against Gemalto to get keys for SIM cards.

To assume that the US Government can open Pandora’s box, and yet maintain control is sheer arrogance. This opens the door not only to vulnerabilities, but to other governments that want to get in on the game. Once the requirement is set, it will be widely abused.

The Evolution of Paranoia

That researchers from Kaspersky Lab uncovered malware that uses hard-drive firmware has now been thoroughly discussed — perhaps too much for some people. It’s not exactly Earth-shattering news either; the idea has been discussed for years, and has been publicly demonstrated. Brandon Wilson and I were even working on a proof of concept for SSD controllers to demonstrate this, based on our BadUSB work.

This isn’t about that story, exactly. This is about paranoia, and how it has changed over the last few years — and even the last few months.

I was talking to Brandon Wilson about the implications of Kaspersky’s discovery, and how, or even if, you could ever trust your platform. When you could have malicious firmware in key system components — hard drive, USB devices (keyboard, mouse, etc.), USB hub — and possibly others, such as the GPU or the webcam that’s in virtually every laptop, how could you ever feel secure? How would you ever even know about it? Every device that has updatable firmware is a possible target, and far too few of them use any form of effective security to prevent malicious changes.

I pointed out that I buy all of my computers and key hardware from stores, I don’t have any of it delivered. Why? Interdiction.

If it has my name associated with it prior to being in my hands, how do I know that it’s not been tampered with? Prior to Edward Snowden, I would have said that it was paranoid, that taking such precautions was at best a waste of time and at worst a sign of delusion. Today? If you are working in a field where you could have useful information, it seems quite reasonable. Paranoia has evolved, it has changed, what was once unreasonable is now prudent.

It was of course known that such things were possible before Snowden, but the scale was unknown — and the NSA and the FBI aren’t the only threats; if they are doing something, you can bet they are far from alone. When that Lenovo laptop was shipped from China, do you really think that the Chinese Government wouldn’t take that chance to step in to gather some extra information? Launch-day iPhones have shipped directly from China, and there are so many other examples. If the NSA can tamper with a shipment, so can any country that has even temporary access to a package; if it’s on their land, they can attack it.

The focus has been on what the NSA does, but the information should be used not as a way to attack the NSA, but to gain insight into the global threats faced by everybody who could be of interest. It’s important to remember that they don’t just target terrorists.

I recently had a person contact me, concerned that a device she had was compromised — while I could tell her that unless she was attracting attention from a major player she was likely safe, I couldn’t tell her that she actually was safe, or give her anything she could do to ensure that nothing was infected. As these techniques spread to more common attackers, the risk that average people will be targeted grows dramatically. Attacks not only get better over time, they become more widespread. From repressive regimes that outsource their attacks, to poorly supervised local law enforcement, to common malware — it’s only a matter of time.

Defending against these attacks, without major changes from device manufacturers, is at best a nightmare; at worst, impossible. I have repeatedly called on USB controller manufacturers to secure their devices, as that’s the only way that BadUSB can be truly fixed. The same needs to be said for so many other device types — it’s up to device manufacturers to secure their products.
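
What “securing their products” means is well understood: devices should refuse any firmware image that doesn’t carry a valid signature from the manufacturer. Here is a minimal sketch using Ed25519 via the Python cryptography package — the key handling and update flow are simplified assumptions for illustration, not any vendor’s actual implementation:

    # Minimal sketch of signed firmware updates: the device holds the vendor's
    # public key and refuses any image whose signature doesn't verify.
    # Simplified -- real devices do this in a boot ROM or controller, not Python.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Vendor side (done once, at build/release time).
    vendor_key = Ed25519PrivateKey.generate()
    firmware_image = b"\x7fFIRMWARE..."            # stand-in for a real image
    signature = vendor_key.sign(firmware_image)

    # Device side: the public key is baked into immutable storage.
    device_trusted_key = vendor_key.public_key()

    def accept_update(image: bytes, sig: bytes) -> bool:
        try:
            device_trusted_key.verify(sig, image)  # raises if tampered with
            return True
        except InvalidSignature:
            return False

    assert accept_update(firmware_image, signature)
    assert not accept_update(firmware_image + b"malicious patch", signature)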

The Chain of Trust

For any system to be secure, there must be trust at some point, which can then ensure that later layers are correct and untampered. By attacking firmware, the chain is defeated at its first link — attack the hardware at the lowest level, and nothing that comes later, including the operating system, can truly be trusted.
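
Conceptually, the chain works something like the sketch below: each stage measures the next before handing over control, so trust only flows forward from a root that is assumed to be immutable. If that first link is malicious, every later check is meaningless. The stage names and hash-based checks here are hypothetical; real systems use signatures and hardware roots of trust.

    # Conceptual sketch of a boot-time chain of trust: each stage verifies a
    # hash of the next stage before running it. If the first stage (firmware)
    # lies about its measurements, nothing downstream can be trusted.
    import hashlib

    boot_chain = ["firmware", "bootloader", "kernel", "operating system"]
    images = {name: f"{name}-image-contents".encode() for name in boot_chain}

    # Expected measurements, anchored in the (assumed trustworthy) first stage.
    expected = {name: hashlib.sha256(img).hexdigest() for name, img in images.items()}

    def boot(images: dict) -> None:
        for name in boot_chain:
            actual = hashlib.sha256(images[name]).hexdigest()
            if actual != expected[name]:
                raise RuntimeError(f"{name} failed verification -- halting boot")
            print(f"{name}: verified, handing off")

    boot(images)  # succeeds only if no stage has been tampered with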

The more important impact though is not technical, but psychological. If a person doesn’t know what they can or can’t trust, they start to fear everything. For the NSA, GCHQ, and countless other agencies in the same business, this is good news — if people can’t trust their computers or their phones, they will turn to less secure means of communication. This is also extremely bad for consumers, businesses, and investors — as these tools can be used not just to go after government-selected targets (legitimate or otherwise), but for profit, for blackmail, for revenge, or just for a thrill.

Targets

While the public focus of the NSA is combating terrorists, it’s been well documented that their targets go far beyond that — researchers, IT staff, business executives, you name it. Yet, I’m a citizen of the United States, and as such, I shouldn’t be a potential target for them (give or take being caught up because of people I know in other countries). Is that the end of the risk for me? No, not by a long shot.

While the Five Eyes countries share intelligence, they don’t share restrictions on who they can spy on. For GCHQ, whether I’m a citizen of the United States or of Afghanistan makes no difference; I’m a valid target under their laws. Canada, Germany, Russia, China, Taiwan — I’m not protected under their laws either; if they think I could have interesting information, or access to interesting information, on any topic, I could be a target. So could you. What information do you have access to, who do you know, and what information could they have access to?

If you work in security, development, IT, telecom — that could mean that you have access to some information that some country would like to have. Is that paranoia? A few years ago, some would say yes — now that we have a better insight into the scope and scale of intelligence activities, we know it’s simply reality.

Personal Threat Models

I have long encouraged people to have a personal threat model — what are your realistic threats? When talking to others, keep in mind that their threat model may be different from yours, and things you see as being paranoid could be quite prudent for them, due to the different risks they face.

For me, to be honest, I’m not that interesting to a foreign power — if anything, trading emails with Glenn Greenwald and trading tweets with people like the grugq has done more to make me a target than any professional activity. The information I can access because of my job is somewhat interesting, and is certainly of value — but more to the Russian Mob than to a foreign government. I pay attention to who I talk to, to what I make public, to what my accounts have access to, so I know what my risks are.

If you work for a more interesting company, or are engaged in research that could be useful, or even just know people who could be more interesting than you, your threats could be completely different. Of course, you also have to factor in location — if you are outside of the US and work for a company that could have interesting information, then your threats may be far more complex.

Defining the line between reasonable and paranoid is harder than ever, and may vary from person to person.

Religion, Free Speech & Freedom from Offense

When I was a teenager I worked as a photojournalist, and through that experience I learned just how important it is that the public, and the press in particular, be able to speak openly, freely, and without restriction.

I also learned how important discretion is — I routinely worked events where people died, those people had families and they would see the photographs that documented the end of a life. Photos chosen for publishing had to be carefully picked, making the wrong choice could offend some, and truly hurt others. I saw people break down and cry when seeing photos I took — I saw the results of brash carelessness on families that were already hurting, already devastated.

I once was tasked with documenting a hate crime — a black effigy hung from a tree, followed shortly thereafter by a body found in a river — hands bound, and clearly related. People were scared, the mock hanging was a warning, and the body found proved that the threat was real. What gets shown and what doesn’t in cases like this is a very difficult choice. On one hand you risk offending and inciting fear — maybe even panic; on the other, you withhold useful information, stifle discussion, and risk leaving the truth sitting in a box, hidden from the world.

For all of the bad, there was also good — lives changed, hard questions asked, reforms enacted, true change made. This wasn’t done without stepping on toes though, hard decisions had to be made to find the right balance.

Making people comfortable is easy — give them what they want and no more. To make people think though, requires making them uncomfortable, requires pushing them outside of their comfort zone — and occasionally, offending them.

The attack on Charlie Hebdo

I firmly believe that journalism, legitimate journalism, is among the most critical tasks in a free society. Shining a light on the good and the bad — the eyes and ears of the people, too often the last chance for justice. When questions can’t be asked, when public figures are put beyond satire and debate, when some topics are unquestionably untouchable, then freedom dies. Slowly at first, then the line inches ever forward until the press is nothing but a mouthpiece for their puppet masters, feeding the public little more than entertainment – no challenges, no discomfort, no thought required.

Charlie Hebdo made a habit of making people uncomfortable — they attacked everyone and everything in power, they left nothing untouched. In doing so they offended almost everyone — some got mad and stomped away; others took it as a chance to reflect, not only on the statement, but their own reactions, feelings, and beliefs; a few though decided that they needed to die for it.

Those at Charlie Hebdo worked despite threats and attacks, they continued in the face of danger. Every issue published was an act of bravery — sometimes tasteless, sometimes wantonly offensive, but still an act of bravery.

In an effort to silence the criticism of their preferred historical figure, a small group following an extreme and radical interpretation of a religion, took it upon themselves to silence journalists and artists by force. The goal though, went far beyond Charlie Hebdo — the attack was meant to send a wave of fear and terror throughout the world and leave journalists too afraid to say anything or risk a similar fate.

In the hours after the attack, there were clear indications that the extremists that sought to censor the world, may have actually achieved that goal. Publications around the world censored the cartoons of Charlie Hebdo, an act I consider to be cowardice, and willing, knowing capitulation. In the face of danger, some will choose to be brave and stand for what they believe — others will abandon what they believe readily when faced with the threat, or even the idea of danger.

Nothing is beyond ridicule, no person above satire — not political leaders, not Muhammad, not Jesus, not Zoroaster, not Zeus, not Ra, not Utu.

#JeSuisCharlie

Robert Graham posted an image on Twitter that immediately gave me mixed feelings; I agreed and disagreed all at the same time. On one hand, the image is the very definition of satire — it makes a strong point about the perception these religious extremists are leaving many with. On the other hand, it could further inflame the situation, insulting some and adding more energy to those that have shown they will not be subject to rational thought. It was also an act of defiance, a statement that he would not be censored, and a recommendation that others should follow his example and show that the world will not allow a group of extremists to define what’s acceptable.

For those that are offended by this image, I’m sorry that you feel as you do — though I will offer no apology for posting it. Offending for the sake of offense should be avoided — and is an act I disagree with; offending for the sake of making a point, though, is sometimes necessary.

The point here is clear — irrational extremists are acting in unimaginable violence against those that they disagree with, and in doing so, branding the religion as one of violence and hate. This is a fact that everyone needs to understand.

Religious Violence

Violence and religion have gone hand in hand throughout recorded history. Christianity has mostly moved away from violence and many of its ancient prejudices (though certainly not all) — something Islam is still struggling with, based on the extreme views and actions of not only terrorist organizations, but governments.

While extremists have done much to harm Islam, there seems to be a pervasive penchant for violence among the more ‘conservative’ Islamic countries — this acceptance of violence and frequent perversions of justice have also done much to make the world question the Islamic commitment to peace.

For me, as an atheist, knowing that there are thirteen Islamic countries where I could be put to death for my lack of faith certainly makes me question just how much peace factors into Islamic views.

While most Muslims are peaceful, the large numbers that espouse peace through forced conformity and violence taint the view of the entire religion.

Freedom from Offense

One of the most bizarre and damaging perversions of the innate right to free speech is the notion that there is an implied inverse right to not be exposed to anything offensive. Yet this fictitious right to not be offended is antithetical to freedom of speech – you can have only one of the two.

A right to not be offended is a personal right that would trump the rights of all others — freedom of speech does not imply that anyone must listen, only that you have the right to speak. A right to not be offended would require others to not speak if you didn’t like what they had to say. Such a right is a logical impossibility — if we accept that there is truly an innate right to free speech, then there is no overriding inverse.

So the reality is simple — to preserve the critical and innate right to free speech, there will be times when some are offended.

One challenge for a journalist is to effectively get a message across that challenges without offending more than absolutely necessary. I can’t say if Charlie Hebdo crossed that line, but even if they did, they were within their rights.

All Speech Has Value

One final note, inspired by a friend, is that all speech — from the inane and ignorant to true hate speech — can have some value. It provides insight, understanding, and perspective that would be missed otherwise.

For those that don’t share the speaker’s perspective, even speech that many consider worthless can be a learning experience. You may never agree with them, but at least you can better understand them, and that may lead to clarity.

Embrace the speech that you disagree with and better understand the people behind a perspective that is new to you — it’s a chance to expand your mind, and maybe even bridge a gap and create new understanding.