When Hashing isn’t Hashing


Anyone working in application security has found themselves saying something like this a thousand times: “always hash passwords with a secure password hashing function.” I’ve said this phrase at nearly all of the developer events I’ve spoken at; it’s become a mantra of sorts for many of us who try to improve the security of applications. We tell developers to hash passwords, then we have to qualify it to explain that it isn’t normal hashing.

In practically every use case, hashing should be as fast as possible – and cryptographic hash functions are designed with this in mind. The primary exception is password hashing: instead of using one of the standard hash functions such as SHA-2 or SHA-3, a key derivation function (KDF) is recommended, as KDFs are slower, and their performance (or cost) can be tuned based on the required security level and acceptable system impact. The recommended list of KDFs looks something like this:

  • Argon2
  • scrypt
  • bcrypt
  • PBKDF2
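
To make that concrete, here’s a minimal sketch using scrypt via Python’s standard library (hashlib.scrypt, Python 3.6+). The cost parameters (n, r, p) and the 16-byte salt length are illustrative values, not a recommendation – tune them to your hardware and required security level:

    import hashlib
    import hmac
    import os

    # Illustrative cost parameters - tune for your hardware and threat model
    N, R, P, DKLEN = 2**14, 8, 1, 32

    def pash(password: str) -> bytes:
        """Derive a slow, salted hash; returns salt + digest for storage."""
        salt = os.urandom(16)  # unique random salt per password
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=N, r=R, p=P, dklen=DKLEN)
        return salt + digest

    def verify(password: str, stored: bytes) -> bool:
        """Re-derive with the stored salt and compare in constant time."""
        salt, digest = stored[:16], stored[16:]
        candidate = hashlib.scrypt(password.encode(), salt=salt,
                                   n=N, r=R, p=P, dklen=DKLEN)
        return hmac.compare_digest(candidate, digest)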

When the security community tells developers to hash passwords, what we really mean is to apply a key derivation function – with appropriate cost values. This means that when we use the term ‘hash’ with developers, it can mean two very different things depending on the context, and they may well not be aware of that. With too many developers not understanding what hashing even is, relying on them to understand that the meaning changes depending on context is just setting them up for failure.

Encrypting Passwords

All too often we hear discussions of encrypting passwords, which generally comes in one of two forms:

  • In a breach notification, it’s quite common to see a vendor say that passwords were encrypted, but when pressed for details, reveal that they were actually hashed. This has often led to a great deal of criticism within the infosec echo chamber, though I’ve long felt that the term ‘encrypt’ is used intentionally, despite being incorrect. The general public understands (generally) that ‘encrypt’ means the data is protected – they have no idea what ‘hashed’ means. I see this as a situation where public relations takes priority over technical accuracy – and to be honest, I can’t entirely disagree with that decision.
  • Those who know that cryptographic protection is (or should be) applied to passwords, but aren’t familiar with the techniques or terminology of cryptography. In these cases, it’s a lack of education – for those of us who work with cryptography on a daily basis, it’s easy to forget that we operate in a very complex arena that few others understand to any degree. Educating developers is critical, and many people are putting a heroic level of effort into teaching anyone who will listen.

I point this out because anytime encrypting passwords is mentioned, the reaction is too frequently vitriol instead of an attempt to understand why the term was used, or an offer to educate those who are trying to do what’s right but don’t know better.

Is hashing passwords wrong?

Obviously passwords should be processed through a key derivation function, but is it wrong to tell developers to hash passwords? By using a term that is context-dependent, are we adding unnecessary confusion? Is there a better way to describe what we mean?

In 2013, Marsh Ray suggested the term ‘PASH’ – one of many suggestions that have come up over the years to better differentiate hashing and password hashing. The movement to define a more meaningful term has been restarted by Scott Arciszewski, quite possibly the hardest-working person in application security today; he has been leading a war on insecure advice on Stack Overflow, which has given him great insight into the cost of poor terminology.

If the security community switched to a term such as ‘pash’ to describe applying a KDF with an appropriate cost to a password, it would greatly simplify communication, and set clear expectations. As password hashing is a completely different operation from what hashing means in almost every other instance, it makes sense to call it something different.

To advance the state of application security, it’s important to ensure that developers understand what we mean – clear communication is critical. This is a topic that isn’t clear to developers, and thus requires somebody to explain what hashing means in the context of passwords. Countless hours are invested in explaining how passwords should be handled, and that normal hashing and password hashing are different – this could be simplified with a single descriptive term.

Path forward

The challenge with this is coming to a consensus that a change is needed, and on what term should be used. Obviously, there is no governing body for the community – a term is used, or it isn’t. Personally, I feel a change is indeed needed, and I would back the use of ‘pash’ as suggested by Marsh Ray. I believe it’s a reasonably descriptive term, and distinctive enough to clarify that it is different from normal hashing.

I would like to see broad discussion of this topic in the community, and hopefully a consensus will be reached so that the term can be well defined and widely used. We need to do a better job of instructing developers, and clear terminology is a critical part of that.

Seamless Phishing

Phishing attacks are a fact of life, especially for users of the largest sites – Facebook being the most common I’m seeing today. Pretty much everybody, from the SEC to antivirus companies, has published guides on what users should do to avoid phishing – so I picked one at random and pulled out the key points:

  • 1). Always check the link, which you are going to open. If it has some spelling issues, take a double-take to be sure — fraudsters can try to push on a fake page to you.
  • 2). Enter your username and password only when connection is secured. If you see the “https” prefix before the site URL, it means that everything is OK. If there is no “s” (secure) — beware.
  • 5). Sometimes emails and websites look just the same as real ones. It depends on how decently fraudsters did their “homework.” But the hyperlinks, most likely, will be incorrect — with spelling mistakes, or they can address you to a different place. You can look for these tokens to tell a reliable site from a fraud.

This is from Kaspersky – unfortunately the advice is far from great, but it follows closely what is generally advised. It’s quite common for people to be told to check the URL to make sure it’s the site they think it is, check for a padlock, and assume that if everything looks right, it should be safe. Except, of course, when this advice isn’t nearly enough.

Seamless Integration

Facebook allows third-party applications to integrate seamlessly; this has been key to achieving such a high level of user engagement. When accessing an application via Facebook, you end up at a URL like this:

[Screenshot: the address bar showing a *.facebook.com URL over HTTPS with an EV certificate]

As you can see, the URL is *.facebook.com – as people would expect. It uses HTTPS, and not just HTTPS, but HTTPS with an Extended Validation certificate. It passes those first critical tests that users rely on to keep them safe. Let’s take a look at the page that URL points to:

[Screenshot: the page at that URL – Facebook header and side-bar, with third-party content filling the rest of the page]

The header is from Facebook, as is the side-bar – but the rest of the page is actually an iframe from a malicious third party. What appears at first glance to be a legitimate Facebook page is actually a Facebook page that includes a login form being used for phishing.

[Screenshot: the embedded phishing login form]

Everything looks right: the style makes sense, there are no obvious errors, the URL is right, and there’s the padlock that everyone is taught to look for. This is a fantastic phishing attack – not at all hard to implement, and it passes all of the basic checks; this is the kind of attack that even those who are careful can fall for.

Because of just how seamless Facebook has made their integration, they have opened the door to extremely effective phishing attacks that few normal users would notice. Anytime an application allows third parties to embed content blindly, it does so at the cost of security. This shows the need for increased vigilance on the part of Facebook – doing a better job of monitoring applications – and the need to teach users that going through a simple checklist is far from adequate to prevent attacks. As is often the case, checklists don’t solve security problems.

Thanks to @Techhelplistcom for pointing this out.

PL/SQL Developer: HTTP to Command Execution

While looking into PL/SQL Developer – a very popular tool for working with Oracle databases – to see how it encrypts passwords, I noticed something interesting. When testing Windows applications, I make it a habit to have Fiddler running to see if there is any interesting traffic – and in this case, there certainly was.

PL/SQL Developer has an update mechanism which retrieves a file containing information about available updates to PL/SQL Developer and other components; this file is retrieved via HTTP, meaning that an attacker in a privileged network position could modify this file.

This file is retrieved each time the application starts; if a version listed in the file is greater than the version installed, the user will be prompted to upgrade (this is the default behavior – otherwise, the user isn’t prompted until they select Help | Check Online Updates). They have the following options:

  • Update: If a URL is provided, the application will download a file (also over HTTP), and apply the update. If no URL is provided, the option is not presented to the user.
  • Download: Executes the URL provided, so that the user’s browser will open, and immediately download the file. This is typically an executable (*.exe); as is the case elsewhere, the file is retrieved over HTTP, and no validation is being performed.
  • Info: If the value is a URL, it’s executed so that the user’s browser opens to the specified URL; otherwise, the content is displayed in a message box.

There are (at least) two issues here:

  • Redirect to malicious download: as the user is likely unaware that they shouldn’t trust the file downloaded via the Download option, an attacker could replace the URL to point to a malicious file, or simply leverage their privileged position to serve a malicious file at the legitimate URL.
  • Command execution: when the user selects the Download option, the value in the file is effectively ShellExecute’d without any validation – there is no requirement that it be a URL. If a command is inserted, it will be executed in the context of the user.

This means that a user believing they are downloading an update can actually be handing full control over to an attacker – a case where not bothering to use HTTPS to secure traffic provides multiple methods for an attacker to gain control of the user’s PC. This is a great example of the importance of using HTTPS for all traffic: it’s not just about privacy, it’s also critical for integrity.

The tested version of PL/SQL Developer was 11.0.4, though the issue likely well predates that version. The vendor reports that this issue has been addressed by enforcing HTTPS on their website, and application changes made in version 11.0.6. It is recommended that all users update to the latest version.

Vulnerability Note: VU#229047
CVE: CVE-2016-2346

Technical Details

The update file is retrieved from http://www.allroundautomations.com/update/pls.updates.
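
The request issued by the application is nothing special – a plain, unauthenticated HTTP GET along these lines (a sketch; the actual headers will vary by client version):

    GET /update/pls.updates HTTP/1.1
    Host: www.allroundautomations.com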

Here’s what a response looks like – it’s an INI-like file, and the Download value is the item we care about most here:
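
A representative response, reconstructed from the fields described in this post (Version, Update, Download, Info) – only the Download line is taken from a real response; the other values are placeholders:

    Version=11.0.4
    Update=
    Download=http://files.allroundautomations.com/plsqldev1104.exe
    Info=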

By changing the returned file, replacing this line:

Download=http://files.allroundautomations.com/plsqldev1104.exe

With this:

Download=calc.exe

When the user selects the Download option, calc.exe will be executed.

Here is an example of a pls.updates file that demonstrates this flaw – the key changes are increasing the Version so that the user will see it as an update, clearing the Update value so that the only option is Download, and setting Download to the command to be executed:
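
A reconstruction along those lines, using the field names described above (the exact field set in a real pls.updates file may differ):

    Version=99.0
    Update=
    Download=calc.exe
    Info=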

Special Thanks

Thanks to Garret Wassermann of CERT/CC for his assistance and Allround Automations for addressing the issue.

Crypto Crisis: Fear over Freedom

Yesterday, President Obama spoke at SXSW on topics including the oft-discussed fight between Apple and the FBI – what he called for, while more thoughtful than some of the other comments we have been hearing from Washington, was still tragically misinformed. He repeated the call for compromise, and by compromise, he meant backdoors.

Here, I feel I must paraphrase one of my favorite authors to properly express the magnitude of what’s being discussed here:

Tell me, ‘friend’, when did the United States abandon reason for madness?!

Cryptography is critical in every aspect of modern life – from shopping to protecting national secrets, from medical devices to the phones that diplomats use, from your home router to the infrastructure that powers global communication. Cryptography is ubiquitous and essential to keep everyone from foreign powers to bored teenagers from wreaking unimaginable havoc. And world leaders are proposing that we replace real security with a TSA-style show that looks secure, but isn’t actually effective (beyond providing a false sense of security).

Mr. President

In one simple statement, he made his position perfectly clear:

[T]here has to be some concession to the need to be able get into that information somehow.

This is, quite honestly, a binary issue: a backdoor is present, or it isn’t – there’s no partial backdoor, no technology that only allows access to the backdoor when there’s a court order, no technology to ensure that the backdoor isn’t abused. You have a backdoor, or you don’t. It’s that simple.

He did acknowledge some of the issues here:

So we’re concerned about privacy. We don’t want government to be looking through everybody’s phones willy-nilly, without any kind of oversight or probable cause or a clear sense that it’s targeted who might be a wrongdoer.

What makes it even more complicated is that we also want really strong encryption. Because part of us preventing terrorism or preventing people from disrupting the financial system or our air traffic control system or a whole other set of systems that are increasingly digitalized is that hackers, state or non-state, can just get in there and mess them up.

It’s good that he understands that strong cryptography is critical, but that doesn’t stop him from saying that backdoors must be added. Like so many that aren’t familiar with how these technologies actually work, he is hoping that some new value between True and False will be found – that you can somehow have a backdoor, but control it. Unfortunately for him, or perhaps for everyone if he gets his way, there is no ItDepends value sitting between those two.

There is some sign that he has heard the reality of the situation, and states it fairly clearly:

Now, what folks who are on the encryption side will argue, is that any key, whatsoever, even if it starts off as just being directed at one device, could end up being used on every device. That’s just the nature of these systems. That is a technical question. I am not a software engineer. It is, I think, technically true […]

This should have been the end of the discussion: if you add a backdoor, it can be abused. But it wasn’t. He acknowledges that the kind of magical backdoor the government wants isn’t possible, and then goes on to repeat that there has to be a compromise, that there has to be a way for the government to access data – that there have to be backdoors:

My conclusions so far is that you cannot take an absolutist view on this. So if your argument is “strong encryption no matter what, and we can and should in fact create black boxes,” that, I think, does not strike the kind of balance that we have lived with for 200, 300 years. And it’s fetishizing our phones above every other value. And that can’t be the right answer.

Looking forward…

Let us assume for a moment that the US Government gets what it wants: what does that mean, and how does it impact the US and the rest of the world?

We are being watched.

From the beginning of the case, officials from other governments have chimed in to support the FBI – it’s clear that governments around the globe are waiting to see what happens here. Apple has offices in several countries; it is not only possible but likely that those governments would serve Apple with sealed orders to provide access to the backdoor for their own use.

Based on the same decision, Microsoft could be forced to add a backdoor to BitLocker, allowing government access to encrypted desktops and laptops. If you want to actually encrypt your device, there’s always VeraCrypt (though they are based in France, so maybe not). This also raises serious questions around projects like LUKS – would US-based developers even be allowed to contribute to it?

Economic impact.

If backdoors are mandated, it would become impossible to recommend any product made by a company with offices in the US – to do so would be unethical, as the security would be known to be compromised. For any organization that is interested in the security of their systems, the logical option would be to look for solutions in other parts of the world, avoiding anything coming from the US. This leads to a very unfortunate outcome – to remain competitive globally, it would be in the best interest of US-based technology companies to move their offices out of the country.

Unknown threats.

There aren’t many people who are able to build effective backdoors; the crypto community is fairly small, and only a small percentage of that group is capable of building a backdoor that wouldn’t be an immediate disaster (though it would still likely be a disaster in the long term). This leads to two possible outcomes:

  • Backdoors are built by people who don’t know what they are doing, and open systems immediately to attackers.
  • Backdoors are contracted out to a very small number of consulting firms, making them a huge target for attacks.

Either way, what you have is a situation where you – as a consumer, a corporate buyer, a consultant, etc. – have no idea about any of these:

  • How well was the backdoor designed? Is it only obscurity that protects it? Will it be broken once reviewed by the crypto community?
  • How is access to the backdoor restricted?
  • How many people have access? The developers could have maintained copies, an employee could have walked out with a copy before being fired, an attacker could have targeted the developers to steal a copy – this goes on and on.
  • How many organizations have access? If a consultant was brought in to develop the backdoor, did they keep a copy?
  • How many governments have access? The reasonable assumption would have to be that every country that the company has offices in, has requested a copy.

Returning to the President’s remarks, this is how he framed the answer:

I suspect that the answer is going to come down to how do we create a system where the encryption is as strong as possible. The key is as secure as possible. It is accessible by the smallest number of people possible for a subset of issues that we agree are important.

‘As secure as possible’ – except against the unknown list of people and the various governments that have access to the backdoor. That isn’t security, and it isn’t in the long-term interest of anyone.

PL/SQL Developer: Nonexistent Encryption

(See here for another issue discovered during this research; Updates over HTTP & Command Execution.)

PL/SQL Developer by Allround Automations has an option to store the user’s logon history with passwords – the passwords are encrypted with a proprietary algorithm. At this point, you should know how this is going to go.

For those that don’t know, PL/SQL Developer is a tool for developers and database administrators to access Oracle – an essential tool in many enterprise environments. Instead of using something that provides actual security, like DPAPI (which is itself far from perfect, as we saw with the UPEK fiasco), they opted for a proprietary “encryption” algorithm to protect these passwords – making it trivial for any attacker that can access the preference file(s) to recover the passwords.

Some time ago I asked the vendor about the security of the password storage – they are aware of the lack of security, but don’t make it clear to their customers.

The fact that they are aware that it isn’t secure, yet have allowed the issue to exist for years without making it clear to users what they are risking by activating the option, is extremely disappointing. Vendors have a responsibility to protect customer information, and broken features like this completely ignore that responsibility.

The Algorithm

The encryption algorithm is quite simple, primarily consisting of a bit shift and xor – let’s take a closer look at how it works. The ciphertext produced looks like this:

273645624572423045763066456443024120413041724566408044424900...

The first group of four digits (2736) is the key – it’s generated based on the system uptime, producing an integer between 0 and 999, to which 2,000 is added. This means that the key has 1,000 possible values, or just under 10 bits. Of course, when you store the key with the encrypted data, key size really doesn’t matter.

After the key at the beginning, each group of four digits represents one byte – this simple code is all that’s needed to encrypt:
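
Here is a reconstruction of that code in Python – derived from the worked example below rather than from the vendor’s source, so the exact constants (the 1,000 offset, the 10 × position term) are inferred: each byte is shifted left four bits, XORed with the key plus ten times its 1-based position, then offset by 1,000.

    import random

    def encrypt(plaintext: str) -> str:
        # Key: 2,000 plus a value between 0 and 999 (the real application
        # derives this from system uptime); it becomes the first four digits.
        key = 2000 + random.randrange(1000)
        groups = [f"{key:04d}"]
        for i, ch in enumerate(plaintext, start=1):
            # Shift the byte left 4 bits, XOR with key + 10 * position,
            # then add 1,000 to form the four-digit group.
            groups.append(f"{((ord(ch) << 4) ^ (key + 10 * i)) + 1000:04d}")
        return "".join(groups)

With the key fixed at 2736, encrypt("user/password@server") produces exactly the ciphertext shown above.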

When you encrypt the string user/password@server, here’s what the encrypted data breaks down to:

  • 2736 = Key
  • 4562 = u
  • 4572 = s
  • 4230 = e
  • 4576 = r
  • 3066 = /
  • 4564 = p
  • 4302 = a
  • 4120 = s
  • 4130 = s
  • 4172 = w
  • 4566 = o
  • 4080 = r
  • 4442 = d
  • 4900 = @
  • 4190 = s
  • 4328 = e
  • 4194 = r
  • 4076 = v
  • 4390 = e
  • 4160 = r

The Data

The login information is stored in an INI-like file called user.prefs, under the headings [LogonHistory] and [CurrentConnections]; storing passwords is an option that is turned off by default, though storing history is turned on by default. All data stored in these sections is encrypted using this method, so the presence of data in these sections does not necessarily mean that passwords are present.

These files can be stored in a number of locations (the latter two are more common with older versions of the application):

  • C:\Users\<username>\AppData\Roaming\PLSQL Developer\Preferences\<username>\
  • C:\Program Files\PLSQL Developer\Preferences\<username>\
  • C:\Program Files (x86)\PLSQL Developer\Preferences\<username>\

The data format for the two sections is somewhat different. In [LogonHistory], the data is in the following format:

<username>/<password>@<server>

In [CurrentConnections], the format is <username>,<password>,<server>,,,; the login can also be stored in C:\Users\<username>\AppData\Roaming\PLSQL Developer\PLS-Recovery\*.cfg, in this same format.

This encryption method is also used in other files, though in less predictable locations.

The Proof of Concept

We have released a proof of concept tool to decrypt these logins, and as is typical, it’s open source. Simply run the executable from the command line, and it will search for the preference files and print any information it’s able to retrieve.

You can also pass in the name of a remote machine, and it will attempt to use the administrative (c$) share.
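
The decryption at the heart of the tool is just the inverse of the algorithm above. A minimal Python sketch (omitting the preference-file discovery and remote-share handling that the released tool performs), shown decrypting the full ciphertext from the worked example:

    def decrypt(ciphertext: str) -> str:
        # The first four digits are the key; each following four-digit
        # group encodes one byte of the plaintext.
        key = int(ciphertext[:4])
        body = ciphertext[4:]
        groups = [body[j:j + 4] for j in range(0, len(body), 4)]
        return "".join(
            chr(((int(g) - 1000) ^ (key + 10 * i)) >> 4)
            for i, g in enumerate(groups, start=1)
        )

    ct = ("2736456245724230457630664564430241204130"
          "4172456640804442490041904328419440764390"
          "4160")
    print(decrypt(ct))  # -> user/password@server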

Credit

Special thanks to my frequent research partner, Brandon Wilson, for his help with this project.