Adam Caudill

Security Engineer, Researcher, & Developer

Developers, Developers, Developers

Note: This was written in 2012 but not published at the time. The point is still valid, perhaps more so than ever, and deserves to be made publicly. The content has been updated as appropriate, though the core of this article remains intact from the 2012 draft. I would like to note that this doesn’t apply to every environment; there are some where developers are very knowledgeable about security and write code with minimal issues – my current employer happens to be one of those rare & exciting places. I hope that some of these issues have improved over the last eight years, though in many places, these issues are alive and well.

During a server migration, an ASP.NET application was discovered[1] that no members of the team were aware of[2]; this mystery application was old, broken, undocumented, and a surprise to all involved. It was partially deployed, in that some key files had been removed at some point during its history, which prevented it from executing properly. While it wasn’t functional, there was no way to know when it had last worked or what it was used for. As this was on a production server, I researched the application, both to understand what it was and whether it had presented any risk to the company.

I quickly found that the code for the application wasn’t present in the company’s source control system – I had personally overseen the migration from an older system to the current one, and knew for a fact that everything had been imported. Its absence meant that it had never been added to source control in the first place. Thankfully, most of the source code files were available on the server (along with older source files for a Classic ASP version of the application); not ideal, but enough to get a decent idea of what was going on.

Within the first five minutes of the review, I spotted a SQL command being built from strings pulled from user input – no validation, no sanitization. SQL injection in its most classic form. This was the first of many issues found in the hours that followed. As I worked through the code, it was clear that security wasn’t a consideration when this application was built.
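The original ASP.NET code isn’t reproduced here, but the pattern is easy to sketch. Here’s a minimal illustration in Python with sqlite3 (the table, column names, and payload are all hypothetical) showing why string-built SQL is dangerous and how a parameterized query treats the same input as plain data:

```python
import sqlite3

# A throwaway in-memory database standing in for the real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the query is assembled by concatenating untrusted input,
# so the payload rewrites the WHERE clause and matches every row.
query = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns [('alice',)]

# Safe: a parameterized query binds the input strictly as a value,
# so the payload is just a string that matches nothing.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())  # returns []
```

The fix costs nothing at runtime; the only difference is letting the database driver handle quoting instead of building the statement by hand.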

This application was public facing and had been in production for a number of years, yet nobody had noticed the glaring vulnerabilities. They even survived a rewrite from Classic ASP to ASP.NET, persisting through the rewrite and at least a couple of years of maintenance. Seemingly unnoticed by attackers, thankfully.

While this specific application had an unusually high number of vulnerabilities for such a small application, seeing this sort of thing is nothing new. It’s not really even enough to get excited about anymore, and I wouldn’t have, if it weren’t for something else I’ve been thinking about recently.


My heart sits in two worlds, torn between my love of creating things and a passion for breaking things. Because of this, I tend to have a somewhat different perspective on these issues than some others in either the security or development realm. I’ve built a career on combining these passions to create better software, regardless of the type of team (red or blue) I’m working with.

Developers are extremely good at finding solutions to problems, and at figuring out how to make things work, but too often are profoundly poor at understanding how their work will be attacked in the wild. Thinking like an attacker is something that doesn’t come naturally to most – it’s a skill that takes time to develop, and unfortunately many developers are never given the chance to do that. Security education for developers has always been an issue (as I’ve noted before): too few developers are given even a basic introduction to secure development practices in college, and public example code is often riddled with errors. Because of this, vulnerabilities are common and work in the various security fields is plentiful.

There are many methodologies and techniques for addressing security issues during the development process, yet they don’t seem to work in many environments – they are too expensive, too complex, too slow, etc. Many of the issues won’t be discovered until the security professionals get involved, including the simple issues that never should have been created. Much time has been spent, and much hot air expelled, in the quest for a solution to this problem – debating who’s at fault, and who’s responsible for fixing it. Yet, here in 2020 – as in 2012, when this was drafted – SQL injection is still alive and well.

There was a tweet that really made me think more about this problem:

As both a security professional and developer, and as someone that truly loves development and the development community, I had to admit something painful: we suck.

Why does this keep happening?!?

Some of this may be cultural: in some environments the view is that security belongs to the security people, not the developers. Of course, that couldn’t be more wrong. Too often though, that’s the mentality – developers who don’t take real ownership of their security bugs. There’s a testing phase in the SDLC, and that’s where too many developers believe security fits in. In reality, of course, security belongs in every phase of the process.

Developers also tend to be ignorant of how attackers work, and of the devastating impact of attacks that are so simple to perform. It’s this ignorance that I believe is the real issue; it’s not laziness or the inability to do the job right, it’s that they really don’t know better. Education continues to fail developers around the world by neglecting to provide useful information on security issues, and on the methods and mentality of attackers. I’ve spoken to many developers about security issues, speaking at conferences and leading secure development training classes, and what I found was consistent: most developers believe strongly in taking the time and effort to build secure systems, though they lack the knowledge and insight to do it effectively. Without an education in the darker arts of computer science, most are simply unequipped to perform their job properly.

As a security professional, this is where we have to admit something painful: we’ve failed them.

The security community has done a fantastic job of sharing information (with some caveats), though our efforts to present this information outside of our realm have seen more limited success. Too many developers are still unaware of the implications of their decisions, and the repercussions of seemingly minor changes. The answer to these issues isn’t to be found in expensive products, over-reliance on consultants, or magic boxes with lots of blinking lights; there’s one answer: education. While tools and frameworks are getting better, and making some mistakes harder to make, that only solves part of the problem. Without understanding more about common vulnerabilities, the techniques and methods used by attackers, the attacker thought process, and the tools that attackers use, these bugs will continue to be introduced, and continue to be missed during changes and peer reviews.

If there was hope, it must lie in the developers.

1. The code discussed here was discovered in the fall of 2012, when this piece was originally drafted. It is safe to say that this event has long been forgotten by all involved, except as preserved here.
2. Based on the file metadata, it appears that the application had last been updated almost a decade earlier (2002–2003). This meant that not a single member of the current development team was aware of the existence of the application, who wrote it, or what it did.