Looking for value in EV Certificates

When you are looking for TLS (SSL) certificates, there are three different types available, which vary widely in price and in the level of effort required to acquire them. Which one you choose impacts how your certificate is treated by browsers; the question for today is, are EV certificates worth the money? To answer this, we need to understand what the differences are and just what you are getting for your money.

The Three Options

For many, the choice of certificate type has more to do with price than anything else – and for that matter, not that many people even understand that there are real differences in the types of certificates that a certificate authority (CA) can issue. The key difference between the options is how the requester is verified by the CA prior to issuing the certificate. Here is a brief overview of the options available.

DV (Domain Validation)

Domain Validation is the most common type in use today; for these certificates the CA verifies that the person requesting the certificate has control of the domain, and nothing else. This is done with a number of techniques, such as telling the requester to place a random value in a certain file on the web server. These are also the least expensive, often available for less than the cost of a fast-food meal, or even free from CAs like Let’s Encrypt.

DV certificates are issued with little to no human interaction from the CA, and can often be automated from the requester’s side as well. Protocols such as ACME allow a fully automated request & issuance process, allowing you to easily request and update certificates – the process can be scheduled and handled without a human involved at all.
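To make this concrete, the HTTP-01 challenge in ACME (RFC 8555) boils down to publishing a value derived from a CA-supplied token and the requester's account key. A minimal sketch of that computation, assuming a JWK-form account key (the token and key values here are illustrative, not real credentials):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def key_authorization(token: str, account_jwk: dict) -> str:
    """Build the HTTP-01 key authorization: token.thumbprint(accountKey).

    The thumbprint is the base64url SHA-256 of the account key's JWK,
    serialized with sorted keys and no whitespace (RFC 7638).
    """
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode("ascii")).digest())
    return f"{token}.{thumbprint}"

# The CA fetches http://example.com/.well-known/acme-challenge/<token>
# and expects this exact string in the response body.
jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7..."}  # hypothetical account key
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", jwk))
```

A real client such as Certbot handles this automatically, along with the signed API requests that surround it – the point is simply that nothing in the exchange requires a human.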

In the past, HTTPS was viewed as a sign of website trustworthiness; getting a valid HTTPS certificate was too difficult for typical phishing websites. Dhamija et al. challenged 22 people to identify phishing websites, and 17 of them failed to check the connection security indicator during the study. This demonstrated that connection security indicators were ineffective at preventing phishing attacks. Subsequently, HTTPS has ceased to be a useful signal for identifying phishing websites because it is no longer unusual to find malicious websites that support HTTPS.

Source: Rethinking Connection Security Indicators, Felt et al.

The purpose of DV certificates is to enable encrypted connections; they don’t validate who is running the domain, whether they are honest, or even whether what they do with the domain is legal – the sole purpose is to enable secure connections between the browser and the web server.

OV (Organizational Validation)

An Organizational Validation (also known as High Assurance) certificate is quite a bit more expensive at roughly $200 (though it may be as much as $500) per year, and is more complex to request due to the additional paperwork involved. The increase in price compared to DV is largely due to the extra work required as part of the verification process; in addition to validating control of the domain, the CA will also verify documents proving that the requester is a legally formed entity (via licenses or incorporation documents).

EV (Extended Validation)

Finally, we have EV, the most expensive at roughly $300 (though it may be as much as $1,000) per year. EV certificates require the most detailed verification process, extending the requirements of OV certificates. Documents such as proof of having a bank account, proof of address, more detailed proof of incorporation, and proof that the person requesting the certificate is an employee properly authorized to make the request may all be required.

Acquiring an EV certificate is a complex process, which may require not only time and effort from technical employees, but also effort on the part of company executives to produce all of the required documentation.

Of drama and marketing

Thanks to the widely varying prices, CAs have an interest in directing customers to more expensive options – OV and EV certificates generate far more profit than DV certificates. With a few CAs offering free DV certificates, and the introduction of Let’s Encrypt, which operates at a massive scale (its market share has gone from 5% to 26% in the last year), there has been a race to the bottom on pricing for these certificates, killing the profit in them. This has led to increased attacks by CAs against DV offerings, in an effort to boost OV and EV sales. These attacks focus primarily on the use of DV certificates in phishing attacks (much has been written about why this isn’t really a problem, so I won’t repeat that fight here).

The value of OV is questionable at best; as it’s used today, it really isn’t any better than DV, despite the marketing hype. Much to the chagrin of CAs, OV certificates are given the same treatment that DV certificates receive in browsers – there’s no visible difference between them, so users are completely unaware that you’ve spent the extra money on an OV certificate. CAs have pushed browsers to change this, so that these certificates would have additional value to justify the expense, though they have had no success in doing so.

EV certificates on the other hand do receive special treatment by browsers:

The green bar with the name and location of the organization (in some browsers) is an exclusive feature of EV certificates, and provides users a way to know who the certificate was issued to. It is this special bar that sets EV apart from the other certificate types, and drives the marketing that makes them sell.

Security: DV vs. OV vs. EV

With a substantial difference in price and marketing, do OV and EV certificates provide better security? No.

No matter how you look at it, no matter how it’s marketed, the fact is that all three certificate types provide the exact same level of security. The only real difference between them is that OV and EV certificates contain an extra identifier that tells the browser which type of certificate it is. The encryption is the same, there’s no change in the security of the connection between the browser and server.

Perhaps the best explanation of why EV certificates don’t actually add any security at a technical level is this 2008 paper; if you haven’t read it, you should.

EV Certificates & Pinning

There is one mechanism available where an EV certificate can increase security1, though not without risk. For CAs that issue their EV certificates from a dedicated intermediate, which isn’t uncommon, sites can use Public Key Pinning (HPKP) to pin the CA’s dedicated EV intermediate, ensuring that only EV certificates can be used – and preventing a DV certificate, even from the same CA, from being accepted. While the death of HPKP has been predicted, it is quite usable when great care is taken. HPKP allows a site to pin the keys used in specific certificates; once a browser has seen the direction to only trust certain keys, it will reject any that don’t match. It’s a very powerful tool, though one I rarely recommend: like many things that offer such power, it is easy to get wrong, and can take a site down for an extended period with little ability to recover.
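For reference, an HPKP pin is simply the base64-encoded SHA-256 hash of a certificate's SubjectPublicKeyInfo, delivered in the Public-Key-Pins response header (RFC 7469). A rough sketch of building that header – the DER bytes below are placeholders; in practice you would extract the SPKI from the EV intermediate's actual certificate:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP pin: base64 of the SHA-256 hash of the SubjectPublicKeyInfo DER."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def hpkp_header(primary_pin: str, backup_pin: str, max_age: int = 5184000) -> str:
    # RFC 7469 requires at least one backup pin not in the current chain,
    # so a lost key doesn't lock users out for the whole max-age window.
    return (f'pin-sha256="{primary_pin}"; pin-sha256="{backup_pin}"; '
            f"max-age={max_age}")

# Hypothetical DER bytes standing in for the EV intermediate's public key.
ev_intermediate_spki = b"\x30\x82\x01\x22..."
backup_key_spki = b"\x30\x82\x01\x22backup"
header = hpkp_header(spki_pin(ev_intermediate_spki), spki_pin(backup_key_spki))
print("Public-Key-Pins:", header)
```

The backup pin is exactly where the danger lies: lose control of both keys and the site is unreachable for every returning visitor until max-age expires.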

This provides a means to ensure that only certificates issued by a specific, selected, EV intermediate can be used – if an attacker tries to impersonate a site (say, via DNS hijacking), this pin will prevent the browser from accepting just any certificate.

The Identity Argument

The key selling point for OV and EV certificates – both in marketing to customers and in the politics of the industry – is that issuance of these certificates involves identity verification. The argument put forward is that you can trust sites that use these certificates more, because someone at a CA verified that they are who they claim to be.

This argument relies on one key point: that users know the difference.

The problem is that users generally don’t know what that green bar is, or what it means. If it disappeared, would they even notice? Thanks to the special treatment by browsers, EV certificates do give users the opportunity to see that they are different, but to provide any protection against phishing or similar attacks, users must be aware of the bar’s presence, and notice when it isn’t there.

This is the key question: if users are aware, EV adds value against phishing attacks; if they aren’t, it doesn’t.

“EV is an anti-phishing defense, although its use is limited by lack of support from popular websites and some major mobile browsers. All major desktop browsers display EV information, but some mobile browsers (including Chrome and Opera for Android) do not display EV information. Older literature suggests that EV indicators may need improvement. Jackson et al. asked study participants to identify phishing attacks and found that “extended validation did not help users defend against either attack”. When testing new security indicators, Sobey et al. concluded that Firefox 3’s EV indicators did not influence decision making for online purchases.”

Source: Rethinking Connection Security Indicators, Felt et al.

Some fraction of users will understand this, and be aware of changes – for this group, it adds value, because it’s another piece of information that allows them to evaluate how much they trust the site. Research indicates, though, that few understand the difference, and thus the impact is minimal.

At this point it should be clear: the value proposition for EV certificates isn’t technical security, it’s a potential boost to user awareness – the opportunity they give users to make a more informed decision before providing sensitive information is an edge over OV and DV certificates.


I was debating this topic with coworkers recently. The value of EV certificates is limited: they do help inform some users, but the percentage is low; they can be used with HPKP to make it harder for an attacker to hijack DNS and perform a successful man-in-the-middle or redirection attack, but that comes with the inherent issues of HPKP and its ability to easily take a site down completely.

With this limited value, it’s difficult to determine if it’s worth the expense – if you are protecting a highly sensitive system, preventing even a single phishing attack could justify the cost; for other systems, it may in fact be a waste of money. As such, it is up to site operators to determine if the small impact it provides justifies the expense and work required.

  1. Even this comes with its own set of limitations: there is still the issue of third-party content, such as JavaScript, which provides another route of attack that isn’t mitigated by this technique. When using content from a third party, you accept their weaknesses and the risk they add to your own systems. If you are relying on EV + HPKP, but are using a JavaScript library from a CDN that uses a DV certificate, that still provides an attack vector that bypasses the value of EV + HPKP. This is the reason that Jackson & Barth suggested an httpev:// URL scheme to provide isolation from https:// URLs, ensuring that only resources with EV certificates are loaded.

On the need for an open Security Journal

The information security industry, and even more so the hacking community, are prolific producers of incredibly valuable research; yet much of it is lost to most of those that need to see it. Unlike academic research, which is typically published in journals (with varying degrees of openness), most research conducted within the community is presented at a conference – occasionally with an accompanying blog post. There is no journal, no central source that this knowledge goes to; if you aren’t at the right conference, or don’t follow the right people on Twitter, there’s a great chance you’ll never know it happened.

This creates many issues; here I will cover a few:

Citing Prior Research

In most conference presentations, seeing prior research cited is the exception, not the rule; this is not because the presenter is trying to take all of the credit for themselves, but is a symptom of a larger issue: they are likely unaware of what’s been done before them. Unlike other fields, where it’s clear that research builds on the work of those that have come before, in information security the basis of research is unclear at best and totally lost at worst.

This leads to work being recreated, as happened recently in the case of TIME / HEIST – where the same discovery was made and presented by two groups nearly four years apart. In this case, for one reason or another, the HEIST researchers were unaware of the TIME research, and presented it as a new discovery. This clearly demonstrates two major problems:

  • Researchers aren’t seeing prior research.
  • Research is not being published in a way that makes it easy to find.

When Brandon Wilson and I were working on verifying and extending the BadUSB research, we were clearly aware of the work done by SR Labs and clearly disclosed the role their work played in what we released – what we should have cited though was a number of others that had performed research on similar devices, such as the work on SD Cards, though we weren’t aware of it at the time we began our work. In this case, there’s a blog post and a recorded talk (which is far better than most others), though it’s still not something we had seen.

By not citing prior work, we not only deny credit to those that moved the state of the art forward, we continually reinvent the wheel – instead of building on the full knowledge of those that came before us, we recreate basic results again. Instead of iterating and improving, we miss the insights and advantages of learning from others, we repeat mistakes that were solved in the past, and we waste time rediscovering what is already known.

There is also the issue of finding the latest research on a topic – when sources are properly cited, it’s possible to follow the chain back to the founding research, and forward to the latest. As this is so rarely done in work produced by the community, it’s impossible to find the latest research, to see the impact of the research you do, or to see what’s been done to validate it. Without these connections, a great deal of insight is completely lost.

This is also a very significant issue for those performing academic research – it’s considered misconduct not to cite sources, yet without a way to clearly identify existing research, it’s difficult, if not impossible, to cite the work that the community and industry do. This furthers the gap between academic and applied information security. Some criticize academic researchers for being out of touch with the rest of the world – a major part of that is that we make it nearly impossible for them to be anything else.

Losing History

Perhaps the greatest cost of not having a central knowledge store is that much research is lost completely – the blogs of independent researchers are notoriously unstable, often disappearing and taking all of their content with them. We are sometimes lucky that the content is reproduced on a mailing list or archived in the Wayback Machine, though in too many cases it is truly gone.

Countless hours are invested every year, and there is at least one conference every week of the year – with material that may never be presented or recorded again. Only those that attended are exposed to it, so it exists only in the memory of a few select people.

There was a time that a person could go to enough conferences, read enough blogs, follow enough mailing lists to keep up with the majority of new research – those days have long since passed. Today, it is impossible for any individual to remain truly abreast of new research.

Steps Forward & Backwards

In the past, zines such as Phrack could help share the great deal of knowledge that’s produced, though now, with years between releases, Phrack is far from able to keep up. A real step forward, PoC||GTFO, has helped some – publishing a few issues per year, and collecting papers from conferences. Though its highly humorous tone, irregular schedule, and the level of effort required to release a single issue raise questions about its suitability as the solution to this problem.

The Proposal: An Open Journal

On August 21st I tweeted a poll asking a simple question:

If there was an open, semi-formal journal, would you submit your papers, talks, research for publication?

This poll was seen 14,862 times, received 55 retweets, and numerous replies; there were 204 votes, which break down like this:

  • Yes: 42%
  • Maybe: 27%
  • No: 7%
  • Why is this needed? 24%

The last number is the most interesting to me: to many of us, the issues are clear and of increasing importance; to others, less so. When I posted this poll, I knew that number would be interesting, but at 24%, it’s more significant than I expected. There are, of course, academic journals available, though they are not suited to the needs of the community – nor entirely appropriate for the research that is published. This shows the deep cultural gap between academics and practitioners, and why purely academic journals haven’t been able to address these needs.

In the replies, a number of questions were raised, which reveal some interesting issues and concerns:

  • I don’t write formal papers, I don’t know what would be expected.
  • I can present at a conference, but formal papers make me uncomfortable.
  • Do I have to pay to have the work published?
  • How would this be funded?

There are a number of interesting things here; the most significant, to me, is that the publishing model used for academic journals clearly doesn’t work for the community. This is often independent work that has little to no funding, so there are no grants, no assistants to help with the paper, and no familiarity with the somewhat unique world of academic journals.

For such an effort to succeed, a number of objectives would have to be met:

  • No cost to publish.
  • No cost to access.
  • Simple submission requirements to minimize pain and learning curve.
  • Cooperation with conferences to encourage submissions.
  • Regular publication schedule.
  • Volunteers willing to coordinate and edit.
  • Community control, no corporate interference or control.
  • All rights should remain with the authors, with license granted for publishing.
  • 100% non-profit.

In my view, a new organization needs to be created, with the goal of becoming an IRS-recognized non-profit, with a board of directors elected by the community to guide the creation and publication of this journal. Funding should come from organization members and corporate sponsors, with a strong editorial independence policy to ensure that sources of funding cannot interfere with editorial decisions or what is published.

The journal should strive for sufficient peer review and editorial quality to be recognized as a legitimate source of trustworthy information, and as a permanent archive of the collected knowledge of the industry. Access to this information should be free to all, so that knowledge spreads, and is not locked behind a paywall or left to perish – unknown and unseen. The journal should strive to be as complete as possible, working with researchers, companies, and conferences to collect and publish as much high-quality research as possible.

Publication in this journal should be a matter of pride for authors, something they advertise as an achievement.

Path Forward

Moving forward with this will require the help and support of many people – it is not a simple task, and there are many complications to overcome. Though as the industry and community grow, it’s clear to many that a solution to this problem is needed. The knowledge produced needs to be collected, made easy to find, easy to cite, and freely available to all who seek it.

Threat Modeling for Applications


Whether you are running a bug bounty, or just want a useful way to classify the severity of security issues, it’s important to have a threat model for your application. There are many different types of attackers, with different capabilities. If you haven’t defined the attackers you are concerned about, and how you deal with them, you can’t accurately define just how critical an issue is.

There are many different views on threat models; I’m going to talk about a simple form that’s quick and easy to define. There are books, tools, and countless threat modeling techniques – it can easily be overwhelming. The goal here is to present something that can add value, while being easy to document.

The Threat Actors

To define the threat model, you must define the different types of attackers that your application may face; some of these apply more to some applications than others. In some cases, it may be perfectly reasonable to say that you don’t protect against some of these – what’s important is to clearly document that fact.

It’s also important to think about what attackers want to achieve – are they trying to steal sensitive data such as credit card numbers or user credentials, tamper with existing data (from business plans to student grades), disrupt services (which can come at great cost), or attack for some other purpose? There are countless reasons for attackers to target an application; some are important to understand (such as stealing credit card numbers) in order to prioritize protection mechanisms. In other cases, such as those that attack just because they can, it’s impossible to prioritize defenses, as there’s no way to predict how they’ll attack.

Passive Attackers

Passive attackers are in a position where they are able to see communications, but can’t (or won’t, to avoid detection) modify them. The solution to most passive attackers is to encrypt everything – this not only eliminates most passive attacks, it eliminates or greatly complicates many active attacks.
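"Encrypt everything" also means the client must verify what it's talking to, or a passive observer can quietly be upgraded to an undetected man-in-the-middle. A minimal sketch using Python's standard library (the hostname is illustrative; the network call is left commented out):

```python
import socket
import ssl

# A default context verifies the server certificate against the system
# trust store and checks the hostname - both checks are required, or an
# attacker on the path can present their own certificate unnoticed.
context = ssl.create_default_context()

def probe_tls(host: str, port: int = 443) -> str:
    """Open a verified TLS connection and report the negotiated version."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# print(probe_tls("example.com"))  # requires network access
```

Disabling either check (a common "fix" for certificate errors during development) silently reduces the connection to encryption with no authentication.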

Local (User)

One of the most common examples of a local passive attacker is an evil barista – you are working from a coffee shop on their open WiFi. At that point, they (and many others) can easily see your network traffic. This also applies to corporate networks and many other environments. What information can a Passive-Local attacker gain to perform attacks later?

Local (Server)

Users aren’t the only ones that have to worry about local passive threats; an attacker monitoring the traffic going to and from a server can be quite effective (via a rogue monitor port on a local switch, perhaps). By watching this traffic, an attacker can gain a great deal of information for future use, and perform some very effective attacks.

One example is a password reset email – if not encrypted, an attacker could capture the reset link, and change a user’s password. This is especially effective if the attacker triggered the password reset themselves, and is made worse by the common training that teaches users to ignore password reset emails they didn’t request. As an example of this, let’s look at the email that Twitter sends when you request a password reset:


The all too common verbiage “If you didn’t make this request, please ignore this email” is present – an attacker requests a password reset, and waits for the email to go through. If the connection between the application and the remote email service isn’t encrypted, an attacker could easily take over accounts.
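The capture risk can't be eliminated as long as reset links travel over email, but the server side can at least narrow the window: store only a hash of the token, expire it quickly, and allow a single use. A rough sketch of that approach (the in-memory dictionary is a stand-in for a real database table):

```python
import hashlib
import secrets
import time

RESET_TTL = 15 * 60  # tokens expire after fifteen minutes

# Hypothetical in-memory store; a real application would use its database.
_pending = {}  # sha256(token) -> (user_id, expires_at)

def issue_reset_token(user_id: str) -> str:
    token = secrets.token_urlsafe(32)               # unguessable, 256 bits
    digest = hashlib.sha256(token.encode()).hexdigest()
    _pending[digest] = (user_id, time.time() + RESET_TTL)
    return token  # emailed to the user; only the hash is stored server-side

def redeem_reset_token(token: str):
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(digest, None)              # single use: remove on lookup
    if entry is None:
        return None
    user_id, expires_at = entry
    if time.time() > expires_at:
        return None
    return user_id
```

Storing only the hash means a leaked database backup or log file doesn't hand out working reset links; expiry and single use mean a captured link is only dangerous briefly, and never after the legitimate user has used it.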


Upstream

Upstream providers, from the server or client end, are in a position where they have easy access to information – they are also a target for those that want that information, and they will provide it, sometimes willingly, sometimes by force. These positions of power allow for transparently monitoring all traffic to and from a device – giving no indication that an attack is ongoing.

As with passive local attackers, any unencrypted communication could be captured, and used real-time, or stored for later use to enable attacks.


Global

A special, and thankfully rare, type of passive attacker is one that has data feeds coming from many locations, and is able to analyze both ends of a communication. This type of attacker is a challenge that only the most sensitive systems need to be concerned with. Perhaps the most public example of an application that is especially vulnerable to this type of attacker is Tor, thanks to traffic correlation.

Active Attackers

Active attackers run the gamut from normal users looking to get around restrictions to professional hackers – with motivations running from boredom to profit. With passive attackers, there’s generally no way to detect an attack, as nothing is being changed. With active attackers, there are often clear signs of their presence – though too often they are missed due to a flood of false positives.

Malicious User

The malicious user is a legitimate user of your application – an employee, perhaps, or someone who signed up from the internet – that seeks to gain additional privileges, gain access to restricted data, or otherwise make your application do whatever they want. In some of the simplest cases, it can be an employee looking to gain extra privileges to bypass required work, or it can be an attacker that is looking to steal data from the application’s database.

Most often, this type of attacker will look for easy wins: they may try to understand the application by manually studying it, or may just use one of the many tools available to see if there’s an easy way in. Most of the so-called “script kiddies” fit into this category.

Malicious Device

Not only is there a risk of a malicious user, but also of a legitimate, well-intentioned user that has malware or has had their device otherwise compromised. Attackers can leverage a user’s device to perform attacks as them, or to capture information. When a user’s device is compromised, a whole range of attacks are enabled – against many types of applications. It has been well documented that software that allows simple man-in-the-middle attacks can be pre-installed on new PCs – and it can be easily installed via other malware. By performing a local man-in-the-middle attack, an attacker could not only capture information, but inject malicious traffic.

In many cases, there is little to nothing that can be done in these situations, as there’s no reliable way to ensure that a device isn’t compromised.

Advanced / Persistent Attacker

From script kiddies with a lot of time on their hands, to professional hackers, this class of attacker will use similar tactics to the Malicious User, but will move on to more advanced attacks and better tools to achieve their goal. From using effective Cross-Site Scripting (beyond proof-of-concepts like alert('xss')), to SQL Injection, and so many more attack types, these attackers are the ones that are most likely to find real issues. When building a threat model, this class of attacker is likely the most important to consider, as they are the greatest and most likely threat.

IT / SysAdmin

One threat that is too often overlooked is the people running the servers – they have access to the logs, can see the server setup, and in some cases can see the application source code and configuration files. This type of attacker has far more insight than most attackers do – in some cases, it’s assumed that those that run the servers simply must be trusted, as there’s no way to fully protect against them.

This is a case where detailed logging, change approval, and separation of duties – mostly process controls – can help, though are unlikely to prevent someone with this degree of knowledge from taking what they want. When defining a threat model, it’s important to address this issue – identify the technical and procedural controls that are in place, and determine how, or if, you will address the situations where such an attacker could be effective.

Developers / DBA

Perhaps the hardest to address are those that build and maintain the application itself. How do you prevent them from inserting a backdoor? Code reviews are normally the answer, but there are real flaws to relying on them. While there may be policies that prevent developers from accessing the production database or credentials – a simple “mistake” made that displays a detailed error message, or allows SQL or code to be injected can quickly render those policies worthless.

A malicious developer is an extremely difficult attacker to address – because they have so much control, and such intimate knowledge of how their applications work. When defining a threat model, you have to account for a member of the development or DBA staff going rogue, ignoring all policies that get in their way, and inserting backdoors or otherwise opening a door that completely defeats the security of the application.

Host / Service Provider

When a host or service provider becomes malicious, it can be difficult to impossible to maintain security. As fewer and fewer applications are hosted within corporate walls, it’s important to understand that there is now an additional team of system administrators, networking and IT support that are suddenly involved. They could modify network traffic, add, remove, or replace hardware, clone drives, etc. – this realization has held many organizations back from moving to the cloud, as the fear of breaches and compliance nightmares is too great.

Another often overlooked issue is that of other users on the same host – especially when using virtual servers; from side channels that allow encryption keys to be stolen, to breaking out to the hypervisor to attack other servers on the same physical hardware.

For high security applications, building an application that can withstand an assault by its hosting provider is a sizeable challenge, though a challenge that must at least be acknowledged.

Local & Upstream Network Providers

If the local or upstream network providers are used to attack, in one way or another, maintaining security comes with a surprising number of challenges. The key to defeating QUANTUM INSERT is as simple as using HTTPS – but other attacks are more challenging to counter.

I recently reviewed an application that verified that the packages it installed were authentic by checking PGP signatures – unfortunately the PGP public key wasn’t securely delivered. Installing that software when under an active network attack would have allowed the attacker to run their own code as root.

Building A Threat Model

A threat model doesn’t have to be some compliance-laden document that means more to auditors than to developers and security engineers; it can be a simple listing of threat actors and the defense against each – or, if a given actor isn’t deemed a threat, documentation of that fact. Having a very simple list-based document can provide guidance to the development team, to those classifying security reports, and to those submitting those reports.

As is true with most things in security, simplicity is key.

A Sample

To get you started, I’ve defined a sample so you can see how easy this can be to set up for your application (it will also save you a bit of time when setting up a bug bounty or pentest). This is an extremely simple, but useful, model – it provides critical information and insight.

Application Name: Summus
Application Type: Web
Public Facing: Yes
Language / Platform: C# / ASP.NET MVC
URL: https://www.example.com/summus
TLS In Use: Yes

Application Description:

Web-based data capture application with four user roles (user, manager, QA, admin); users are able to read scripting and enter data from phone calls and back office transactions. Managers are able to review captured data, flag transactions for QA review, edit data, and approve transactions. QA is able to review captured data, approve transactions, and mark transactions as reviewed, flag for escalation, and assign a quality score. Administrators are able to setup new campaigns (scripting, data capture, etc.), update scripting, update data capture, change settings (approx. 200 options – some global, some campaign specific).

Application is written in C#, and uses the ASP.NET MVC framework; data is stored in SQL Server, and data access is performed via LINQ to SQL. Application is hosted on a cluster of eight IIS 8.5 servers. Application is hosted behind a CDN with a Web Application Firewall. Data capture fields may be flagged as sensitive, and encrypted using AES-256-GCM when stored in the database.
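As a rough sketch of how such field-level encryption might look – in Python with the third-party `cryptography` package rather than the application’s actual C# code, and with key management deliberately omitted – binding the field name as associated data ties each ciphertext to its column:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_field(key: bytes, plaintext: str, field_id: str) -> bytes:
    """Encrypt one sensitive field; the field ID is bound as associated data."""
    nonce = os.urandom(12)  # a fresh, unique nonce for every encryption
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), field_id.encode())
    return nonce + ct       # store the nonce alongside the ciphertext

def decrypt_field(key: bytes, blob: bytes, field_id: str) -> str:
    """Decrypt a field; fails if the ciphertext or field ID was tampered with."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, field_id.encode()).decode()

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_field(key, "4111 1111 1111 1111", "card_number")
```

Because AES-GCM is authenticated, moving a ciphertext to a different column (a different `field_id`) makes decryption fail – a useful property when sensitive values sit next to each other in the same table.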


Note: Violations of any of the assumptions listed below, or defects in the processes used are considered to be a vulnerability.

Passive – Local – User: TLS is required; both via redirect from HTTP to HTTPS, and via the Strict-Transport-Security header. Due to these mitigations, it is believed that an attacker monitoring a user’s traffic is not able to gain useful information.
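The two mitigations can be sketched as a tiny response policy in plain Python (the host name and max-age value are assumptions for illustration):

```python
def respond(scheme: str, path: str):
    """Enforce the HTTPS-only policy: redirect plaintext, then pin with HSTS."""
    if scheme == "http":
        # Any plaintext request is redirected to HTTPS.
        return 301, {"Location": "https://www.example.com" + path}
    # On HTTPS responses, tell the browser to refuse plaintext for a year.
    return 200, {"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}
```

The redirect protects first-time visitors; the Strict-Transport-Security header ensures returning browsers never issue the plaintext request at all, which is what defeats a passive observer sitting on the local network.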

Passive – Local – Server: The server is hosted in a secure, PCI-compliant facility. Firewall rules and switch configurations are audited on a quarterly basis; traffic flows are monitored in real time by Network Administration; switch ports are locked to a single MAC address; unused switch ports are disabled; network access and event logs are monitored in real time by Network Administration.

Passive – Upstream: Due to the use of TLS, it is believed that an upstream provider is unable to access useful information.

Passive – Global: Due to the use of TLS, it is believed that a global passive adversary is unable to access useful information.

Active – Malicious User: The application takes advantage of Cross-Site Scripting and Cross-Site Request Forgery protections provided by the ASP.NET MVC framework; data access is performed via an ORM to eliminate SQL Injection attacks; each page includes permission checks to prevent vertical privilege escalation; a Campaign ID and User ID are stored server-side as part of the session to prevent horizontal privilege escalation.
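The horizontal privilege escalation defense can be sketched like this (illustrative Python, not the application’s C#; names are invented): the user and campaign IDs come only from the server-side session, so identifiers a malicious user edits in the URL or form are never trusted:

```python
# Toy data store standing in for the transactions table.
TRANSACTIONS = {
    101: {"owner_id": 1, "campaign_id": 7},
    102: {"owner_id": 2, "campaign_id": 7},
}

def can_view(session: dict, transaction_id: int) -> bool:
    """Authorize using only server-side session state, never request input."""
    txn = TRANSACTIONS.get(transaction_id)
    if txn is None:
        return False
    return (txn["owner_id"] == session["user_id"]
            and txn["campaign_id"] == session["campaign_id"])
```

Because the check compares against session state the client cannot modify, changing a transaction ID in the request simply yields a denial rather than another user’s data.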

Active – Malicious Device: As there is no ability to remotely detect if a user’s device has been compromised, this is considered out of scope.

Active – Advanced: In addition to the protections mentioned in the Active – Malicious User section, the application is scanned on a regular basis by commercial scanning tools, and a quarterly pentest is conducted to identify vulnerabilities.

Active – IT / SysAdmin: This scenario is considered to be out of scope for this document. IT controls are documented in a separate document.

Active – Developer / DBA: All code changes are required to pass through a peer code review prior to being merged into the master branch of source control; developers do not have access to production servers; static code analysis and a vulnerability scan are performed prior to deployments; deployments are performed by System Administrators after approval from the Change Control group; database server access and event logs are monitored in real time by System Administrators.

Active – Host: Due to other controls, this scenario is considered to be out of scope.

Active – Network Providers: Due to the use of TLS, it is believed that an upstream provider is unable to access useful information or meaningfully alter traffic.

In closing…

I hope that this will help you define not only a useful threat model for your application, but also come to a better understanding of what threats your application faces and how to address them.

When Hashing isn’t Hashing


Anyone working in application security has found themselves saying something like this a thousand times: “always hash passwords with a secure password hashing function.” I’ve said this phrase at nearly all of the developer events I’ve spoken at; it’s become a mantra of sorts for many of us who try to improve the security of applications. We tell developers to hash passwords, then we have to qualify it to explain that it isn’t normal hashing.

In practically every use case, hashing should be as fast as possible – and cryptographic hash functions are designed with this in mind. The primary exception is password hashing: instead of using one of the standard hash functions such as SHA-2 or SHA-3, a key derivation function (KDF) is recommended, as KDFs are slower and their performance (or cost) can be tuned to the required security level and acceptable system impact. The recommended list of KDFs looks something like this:

  • Argon2
  • scrypt
  • bcrypt
  • PBKDF2

When the security community tells developers to hash passwords, what we really mean is to apply a key derivation function – with appropriate cost values. This means that when we use the term ‘hash’ with developers, it can mean two very different things depending on the context, and they may well not be aware of that. With too many developers not understanding what hashing even is, relying on them to understand that the meaning changes with context is just setting them up for failure.
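What “hash the password” actually means can be sketched with PBKDF2 from Python’s standard library (the iteration count here is an assumption – tune the cost to your own hardware and latency budget):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    """Derive a password verifier with PBKDF2-HMAC-SHA256 and a random salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    """Re-derive with the stored salt and cost, compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

Unlike a bare SHA-2 call, the per-user salt defeats precomputed tables and the iteration count makes each guess expensive – exactly the properties the word ‘hash’ alone fails to convey.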

Encrypting Passwords

All too often, we hear discussions of encrypting passwords, and this often comes in one of two forms:

  • In a breach notification, it’s quite common to see a vendor say that passwords were encrypted, but when pressed for details, they reveal that they were actually hashed. This has often led to a great deal of criticism within the infosec echo chamber, though I’ve long felt that the term ‘encrypt’ was used intentionally, even though it’s incorrect. This is because the general public understands, at least broadly, that ‘encrypt’ means the data is protected – they have no idea what ‘hashed’ means. I see this as a situation where public relations takes priority over technical accuracy – and to be honest, I can’t entirely disagree with that decision.
  • Those that know that cryptographic protection is / should be applied to passwords, but aren’t familiar with the techniques or terminology of cryptography. In these cases, it’s a lack of education – for those of us that work with cryptography on a daily basis, it’s easy to forget that we operate in a very complex arena that few others understand to any degree. Educating developers is critical, and there are many people putting a heroic level of effort into teaching anyone that will listen.

I point this out because anytime encrypting passwords is mentioned, the reaction is too frequently vitriolic, rather than an attempt to understand why the term was used, or an offer to educate those who are trying to do what’s right but don’t know better.

Is hashing passwords wrong?

Obviously passwords should be processed through a key derivation function, but is it wrong to tell developers to hash passwords? By using a term that is context-dependent, are we adding unnecessary confusion? Is there a better way to describe what we mean?

In 2013, Marsh Ray suggested the term ‘PASH’ – one of many suggestions over the years to better differentiate hashing and password hashing. The push to define a more meaningful term has been restarted by Scott Arciszewski, quite possibly the hardest-working person in application security today; he has been leading a war on insecure advice on Stack Overflow, which has given him great insight into the cost of poor terminology.

If the security community switched to a term such as ‘pash’ to describe applying a KDF with an appropriate cost to a password, it would greatly simplify communication, and set clear expectations. As password hashing is a completely different operation from what hashing means in almost every other instance, it makes sense to call it something different.

To advance the state of application security, it’s important to ensure that developers understand what we mean – clear communication is critical. This is a topic that isn’t clear to developers, and thus requires somebody to explain what hashing means in the context of passwords. Countless hours are invested in explaining how passwords should be handled, and that normal hashing and password hashing are different – this could be simplified with a single descriptive term.

Path forward

The challenge with this is coming to a consensus that a change is needed, and what term should be used. Obviously, there is no governing body for the community – a term is used, or not used. Personally, I feel a change is indeed needed, and I would back the use of ‘pash’ as suggested by Marsh Ray. I believe it’s a reasonably descriptive term, and is distinctive enough to clarify that it is different from normal hashing.

I would like to see wide discussion of this topic in the community, and hopefully a consensus strong enough that the term can be well defined and broadly adopted. We need to do a better job of instructing developers, and clear terminology is a critical part of that.

Rance, Goodbye Friend

If you never had the opportunity to meet Rance, known to some as David Jones, you don’t know what a friend you missed. Today, you lost the chance to find out.

He was truly something special – one of the most genuine, kind, and caring people I’ve ever met. I met him at the first security conference I ever attended – while I had always been somewhat involved with security work, I really wasn’t a member of the community; I was an outsider, and I was painfully aware of that with every word I said. Rance knew I was an outsider, and he did everything he could to make me feel welcome – within a couple of days he had introduced me to everyone, and he treated me like an old friend.

Had it not been for Rance, for his kindness to a stranger, I’m not sure I would have become so active in the community.

There are a thousand other stories like this, of him going above and beyond at every opportunity – anyone you talk to who knew him has something similar to say. He was truly something special, a one-of-a-kind person who made the community better for all.

Of all that has been said about him, this, I think, is the most important: