Adam Caudill

Security Leader, Researcher, Developer, Writer, & Photographer

Developers: Placing Trust in Strangers

Much has been said, especially recently, about the mess of dependencies that modern applications have – and for those of us working in application security, there is good reason to be concerned about how these dependencies are handled. While adding a new feature to YAWAST, I needed a new dependency – ssllabs.rb.

While most Ruby dependencies are delivered via Gems, ssllabs.rb is a little different – it's pulled directly from GitHub:

gem 'ssllabs', github: 'Shopify/ssllabs.rb'

This got me thinking about a pattern that I'm seeing more often, across multiple languages, of developers pulling dependencies directly from the GitHub accounts of other people. There are hundreds of thousands of examples in Ruby alone – and that doesn't include the various build scripts and other places this is done, or all the other languages. To make matters worse, these are generally pulls from HEAD, instead of a specific revision.
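One partial mitigation, if you must pull from GitHub at all, is to pin the dependency to a specific commit rather than tracking HEAD – Bundler supports a `ref:` option on git sources for exactly this. A minimal sketch (the SHA below is a placeholder, not a real ssllabs.rb revision):

```ruby
# Gemfile – pin the git-sourced gem to a specific, reviewed commit,
# rather than whatever HEAD happens to be at install time.
# NOTE: the ref below is illustrative, not an actual revision.
gem 'ssllabs', github: 'Shopify/ssllabs.rb',
               ref: '2c1f5e7a9b3d4c6e8f0a1b2c3d4e5f6a7b8c9d0e'
```

This doesn't remove the need to review the pinned code, but it does mean a later malicious commit won't silently flow into your next build.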

Absolute Trust #

Pulling in a dependency from another developer, especially from HEAD, places a remarkable amount of trust in them – there's no release process, no opportunity for code signing (which is missing from most dependency systems anyway), nothing but faith in that developer.

But it's not just faith in the developer – it's faith in the strength of their password, faith in the security of their computer, faith in the security of their email account, faith that they have two-factor authentication enabled – I could keep going. If the developer they pull from is compromised in any of these many ways, it's a trivial matter to commit a change as them. Once committed, that change will be picked up the next time those that use it build (or update) – giving an attacker code running on their machines, code that will most likely end up in production.

I've written before about the difficulty of spotting malicious code – an apparently simple refactoring commit can easily hide the insertion of malicious code, and it's likely that it wouldn't be quickly noticed.

Once a developer's GitHub account or computer is compromised, it's a simple matter to commit something that appears innocent but adds a backdoor – something that could be picked up by hundreds or thousands of those that depend on the code before the developer is able to recover access, detect the change, and address the malicious code. Depending on the attack vector, it could take days for the developer to regain full access – that's a large window for deploying malicious code. And that assumes, of course, that the developer even notices; many popular dependencies have been inactive for months or even years.
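Even a pinned revision only helps if someone notices when the code changes; for dependencies vendored into the tree, one lightweight safeguard is to record a known-good digest at review time and verify it on every build. A minimal sketch using Ruby's standard `Digest` library (the path and digest you'd pass in are whatever your review process produced):

```ruby
require 'digest'

# Verify a vendored dependency file against a SHA-256 digest recorded
# when the code was last reviewed. Returns true only if the file on
# disk still matches the reviewed version.
def dependency_unchanged?(path, expected_sha256)
  Digest::SHA256.file(path).hexdigest == expected_sha256
end
```

A build script can call this and fail loudly on a mismatch – turning a silent malicious update into a broken build that someone has to investigate.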

A larger problem… #

This is, of course, just one front in a larger problem – external dependencies are rarely reviewed, almost never signed, and the dependencies they bring in are almost always ignored. Far too many development shops have no idea what they are actually pushing to production – or what issues that code is bringing with it.
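Visibility is the first step toward fixing that: even a crude inventory of what Gemfile.lock actually pins makes review possible. A minimal sketch that scrapes top-level specs with a regex – a rough approximation only, since `Bundler::LockfileParser` is the authoritative way to read a lockfile:

```ruby
# Extract gem names and pinned versions from a Gemfile.lock's specs
# section. Top-level specs are indented four spaces; transitive
# requirements (six spaces) are deliberately skipped here.
def pinned_gems(lockfile_text)
  lockfile_text.scan(/^    (\S+) \(([^)]+)\)$/).to_h
end
```

Feeding the result into whatever review or audit process you have is left as an exercise – the point is simply that you can't review what you haven't even enumerated.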

There are so many opportunities to attack dependency management – thankfully few attacks are seen in the wild, but I don’t take too much comfort in that.


