Adam Caudill

Security Leader, Researcher, Developer, Writer, & Photographer

Developers, Developers, Developers

Note: This was written in 2012, but not published at the time. The point is still valid, perhaps more so than ever, and deserves to be made publicly. The content has been updated as appropriate, though the core of this article remains intact from the 2012 draft. I would like to note that this doesn’t apply to every environment; there are some where developers are very knowledgeable about security and write code with minimal issues – my current employer happens to be one of those rare & exciting places. I hope that some of these issues have improved over the last 8 years, though in many places, these issues are alive and well.

During a server migration, an ASP.NET application was discovered[1] that no members of the team were aware of[2]; this mystery application was old, broken, undocumented, and a surprise to all involved. It was partially deployed, in that some key files had been removed at some point during its history, which prevented it from executing properly. While it wasn’t functional, there was no way to know when it had last worked or what it was used for. As this was on a production server, I researched the application, both to understand what it was and whether it had presented any risk to the company.

I quickly found that the code for the application wasn’t present in the company’s source control system – I had personally overseen the migration from an older system to the current one, and knew for a fact that everything had been imported. Its absence meant that it had never been added to source control in the first place. Thankfully, most of the source code files were available on the server (along with older source files for a Classic ASP version of the application); not ideal, but enough to get a decent idea of what was going on.

Within the first five minutes of the review, I spotted a SQL command being built from strings that were pulled from user input – no validation, no sanitization. SQL injection in its most classic form. This was the first of many issues found in the hours that followed. As I worked through the code, it was clear that security wasn’t a consideration when this application was built.
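For anyone who hasn’t seen the pattern, here’s a minimal sketch of what that classic mistake looks like in ADO.NET, next to the parameterized version that avoids it. The table, column, and method names here are hypothetical, for illustration only – not taken from the application in question.

```csharp
using System.Data.SqlClient;

class UserLookup
{
    // The vulnerable pattern: user input concatenated directly into the
    // SQL text. An input like "' OR '1'='1" becomes part of the command
    // itself, changing the query's meaning entirely.
    static SqlCommand BuildVulnerableQuery(SqlConnection conn, string userName)
    {
        string sql = "SELECT * FROM Users WHERE UserName = '" + userName + "'";
        return new SqlCommand(sql, conn);
    }

    // The fix: a parameterized query. The input is bound as a typed value,
    // kept separate from the command text, so it can never alter the
    // structure of the query.
    static SqlCommand BuildSafeQuery(SqlConnection conn, string userName)
    {
        var cmd = new SqlCommand(
            "SELECT * FROM Users WHERE UserName = @userName", conn);
        cmd.Parameters.Add("@userName", System.Data.SqlDbType.NVarChar, 50).Value = userName;
        return cmd;
    }
}
```

The difference is small in code but fundamental in effect: the parameterized version treats the input as data rather than splicing it into the command, which is exactly what was missing here.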

This application was public facing and had been in production for a number of years, yet nobody had noticed the glaring vulnerabilities. They even survived the rewrite from Classic ASP to ASP.NET, persisting through the rewrite and at least a couple of years of maintenance. Seemingly unnoticed by attackers, thankfully.

While this specific application had an unusually high number of vulnerabilities for its size, seeing this sort of thing is nothing new. It’s not really even enough to get excited about anymore, and I wouldn’t have, if it weren’t for something else I’ve been thinking about recently.

Developers #

My heart sits in two worlds, torn between my love of creating things, and a passion for breaking things. Because of this, I tend to have a somewhat different perspective on these issues than many others in either the security or development realm. I’ve built a career on combining these passions to create better software, regardless of the type of team (red or blue) I’m working with.

Developers are extremely good at finding solutions to problems, and at figuring out how to make things work, but too often are profoundly poor at understanding how their work will be attacked in the wild. Thinking like an attacker is something that doesn’t come naturally to most – it’s a skill that takes time to develop, and unfortunately many developers are never given the chance to do that. Security education for developers has always been an issue (as I’ve noted before): too few developers are given even a basic introduction to secure development practices in college, and public example code is often riddled with errors. Because of this, vulnerabilities are common and work in the various security fields is plentiful.

There are many methodologies and techniques for addressing security issues during the development process, yet they don’t seem to work in many environments – they are too expensive, too complex, too slow, etc. Many of the issues won’t be discovered until the security professionals get involved, including the simple issues that never should have been created in the first place. Much time has been spent, and much hot air expelled, in the quest for a solution to this problem – debating who’s at fault, and who’s responsible for fixing it. Yet here in 2020, just as in 2012, SQL injection is still alive and well.

There was a tweet that really made me think more about this problem.

As both a security professional and developer, and as someone that truly loves development and the development community, I had to admit something painful: we suck.

Why does this keep happening?!? #

Some of this may be cultural; in some environments, the view is that security belongs to the security people, not the developers. Of course, that couldn’t be more wrong. Too often though, that’s the mentality: developers don’t take real ownership of their security bugs. There’s a testing phase in the SDLC, and that’s where too many developers believe security fits in. In reality, of course, security belongs in every phase of the process.

Developers also tend to be ignorant of how attackers work, and of the devastating impact of attacks that are so simple to perform. It’s this ignorance that I believe is the real issue; it’s not laziness or the inability to do the job right, it’s that they really don’t know better. Education continues to fail developers around the world by neglecting to provide useful information on security issues, and on the methods and mentality of attackers. I’ve spoken to many developers on security issues, speaking at conferences and leading secure development training classes, and what I found was consistent: most developers believe strongly in taking the time and effort to build secure systems, though they lack the knowledge and insight to do it effectively. Without an education in the darker arts of computer science, most are simply unequipped to perform their job properly.

As security professionals, this is where we have to admit something painful: we’ve failed them.

The security community has done a fantastic job of sharing information (with some caveats), though our efforts to present this information outside of our realm have seen more limited success. Too many developers are still unaware of the implications of their decisions, and the repercussions of seemingly minor changes. The answer to these issues isn’t to be found in expensive products, over-reliance on consultants, or magic boxes with lots of blinking lights; there’s one answer: education. While tools and frameworks are getting better, and making some mistakes harder to make, that only solves part of the problem. Without understanding more about common vulnerabilities, the techniques and methods used by attackers, the attacker thought process, and the tools that attackers use, these bugs will continue to be introduced, and continue to be missed during changes and peer reviews.

If there was hope, it must lie in the developers.


  1. The code discussed here was discovered in the fall of 2012, when this piece was originally drafted. It is safe to say that this event has long been forgotten by all involved, except as preserved here.

  2. Based on the file metadata, it appears that the application had last been updated almost a decade earlier (2002–2003). This meant that not a single member of the current development team was aware of the existence of the application, who wrote it, or what it did.
