A Full Vindication of the Measures of Security Practitioners, from the Calumnies of their Enemies; In Answer to A Letter, Under the Signature of A. Gwinn. Whereby His Sophistry is exposed, his Cavils confuted, his Artifices detected, and his Wit ridiculed; in a General Address To the public, And A Particular Address To the dedicated members of the security community. Veritas magna est & prævalebit.
Friends and Colleagues,
It was hardly to be expected that any man could be so presumptuous as to openly controvert the equity, wisdom, and authority of the measures, adopted by the practitioners of information security: a group truly dedicated to the protection of business and individuals around the world! Whether we consider the characters of those so dedicated, who developed practices to protect; the number, and the dignity of those they protect, or the important ends for which they serve. But, however improbable such a degree of presumption might have seemed, we find there are some, in whom it exists. Attempts are daily making to diminish the influence of their decisions, and prevent the beneficial effects, intended by them. The impotence of such insidious efforts is evident from the general indignation they are treated with; so that no material ill-consequences can be dreaded from them. But lest they should have a tendency to mislead, and prejudice the minds of a few; it cannot be deemed altogether useless to bestow some notice upon them.
These words are taken, nearly verbatim save for some artistic license to apply to the controversy of the day, from the hand of Alexander Hamilton, from his letter of December 15, 1774: A Full Vindication of the Measures of the Congress, &c. I have been so bold as to borrow these words to respond to another letter, one that risks misleading the uninformed and impugning the dignity of a community dedicated to the public good. The letter, published by The Hill, is “Our cybersecurity ‘industry best practices’ keep allowing breaches,” from the hand of Allen Gwinn, a professor at the Cox School of Business at SMU Dallas.
To say that his missive has missed the mark would be an understatement of monumental proportions. A grandiloquent communiqué on the practices of information security that fails to comprehend the reality of threats that exist today and the progress that has been made. A display of illusory superiority so substantial that hundreds of security professionals took time out of their day to observe some of the many errors it contains.
Perhaps, one could argue, the language here is excessively hyperbolic, but, I must insist, for a work such as this, a response must be in specie.
While some have suggested that this article was the work of a malicious actor, I will seek herein to provide fair comment and criticism. A fair and reasonable look at the points made and the flaws they contain.
Note: The following contains excerpts from the article noted above and is subject to copyright. This article is written as commentary and criticism and in accordance with the fair use doctrine.
The hacking at Colonial Pipeline is the latest in a series of breaches that have impacted a long-and-growing list of other businesses — all ambushed by some individual or group that managed to hack through cyber security “industry best practices.”
The first, and the most obvious, issue here is that the premise is founded entirely on the presumption that Colonial Pipeline was following established best practices for their industry. There are a number of issues here, but at a minimum:
- It was widely reported that during a security audit in 2018, they were found to have “atrocious” practices. While there may have been progress since then, it could be reasonably expected that they did not go from atrocious to best practices. In fact, most companies fall far short of that goal - it’s rare that a company truly lives up to all of them. From my time as a penetration tester, I can say with confidence that it’s exceedingly rare to not find issues.
- Next is the question of which best practices? In a fair criticism of the security community, Robert Graham pointed out that if you ask “any 10 ‘credentialed’ cybersecurity experts for their list of Top 10 Best Practices, you’ll get 13 lists with very little overlap.” This is true; it’s a fair point. It’s also a reflection of the specialization within the community; my specialty is different from most others, and thus I would give a different answer. This is why it’s not practical to take a ‘top 10’ list from any one source and use it to establish what actions should be taken.
- Best practices vary depending on the threat model; while some are essentially universal, there are significant differences based on industry. For example, an entertainment company and an oil pipeline have some risks in common, but there are many that are unique to each - and more importantly, the impact of each risk differs. To clearly state the best practices for a specific company, you must first understand their threat model; only then is it possible to adequately define what’s important for them, as the sketch after this list illustrates.
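To make that concrete, here is a minimal sketch - the threats, scores, and controls below are entirely hypothetical, chosen only for illustration - of how the same simple risk-scoring exercise yields different priorities for two different businesses:

```python
# A minimal sketch of threat-model-driven prioritization; all threats,
# scores, and controls below are hypothetical, purely for illustration.

# Each entry: threat -> (likelihood 1-5, impact 1-5, mitigating controls)
PIPELINE_THREATS = {
    "ransomware via phishing": (4, 5, ["email filtering", "offline backups", "EDR"]),
    "IT-to-OT crossover": (3, 5, ["network segmentation", "unidirectional gateways"]),
    "insider data theft": (2, 3, ["least privilege", "audit logging"]),
}

ENTERTAINMENT_THREATS = {
    "credential stuffing": (5, 3, ["MFA", "rate limiting", "breached-password checks"]),
    "content piracy": (4, 4, ["DRM", "watermarking"]),
    "ransomware via phishing": (3, 4, ["email filtering", "offline backups", "EDR"]),
}

def prioritized_controls(threats: dict) -> list[str]:
    """Rank controls by the total risk (likelihood * impact) they mitigate."""
    weight: dict[str, int] = {}
    for likelihood, impact, controls in threats.values():
        for control in controls:
            weight[control] = weight.get(control, 0) + likelihood * impact
    return sorted(weight, key=weight.get, reverse=True)

print(prioritized_controls(PIPELINE_THREATS))       # pipeline priorities differ...
print(prioritized_controls(ENTERTAINMENT_THREATS))  # ...from an entertainment company's
```

The same control can appear in both models; what changes is where it lands in the priority order - which is precisely why a universal ‘top 10’ list from any single source is of limited use.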
It is only getting worse. Reports surface daily about new incidents involving prominent health care providers, government agencies or retailers hit by hackers — thus releasing millions or billions of pieces of sensitive information all over the dark web.
Let’s start with some history, which should help us to understand what’s changed. Ransomware dates back to at least 1989 with the AIDS Trojan, which demanded a payment of $189. If we fast forward to the mid-2000s, ransomware was becoming more popular, and the fee being extorted was up to $300 per machine - a decent payday for not much work. Jumping ahead to 2013, we see the introduction of CryptoLocker, which charged $400 per device. But CryptoLocker was different from the others in a significant way: it provided proof that there was serious money to be made - estimated at up to $27 million. This changed the game.
Colonial Pipeline reportedly paid $5 million to DarkSide, the group responsible for their infection - a portion of the reported $90 million that DarkSide has received from various victims. And DarkSide is just one player in the field: Ryuk is estimated to have pulled in $150 million, REvil $100 million, and Maze $75 million. There are more.
What’s changed is that ransomware was largely untargeted in the past; the operators didn’t care whom they hit. Now, it’s very much about finding companies with weak security and taking them for as much as possible. This change in the way they operate has made ransomware far more profitable - and there’s genuine concern that some who would have joined the security industry have instead turned to crime, as it pays far better.
What does all this mean? Attackers are more motivated than they were in the past; there’s substantially more financial incentive to commit crimes, and attackers are becoming better and more professional. As is so often the case, money drives people - and the money to be made in ransomware is enormous.
The solution, of course, is to stop the flow of money - insurance giant AXA took a step in this direction by no longer covering ransomware payments in their policies. As long as the money is there, attackers will be highly motivated.
Thus, when one argues it’s getting worse, one must look at the entire picture to understand everything that’s changed. Security isn’t getting worse; attackers are getting better and more motivated.
These impressively credentialed professionals are skilled in the art of tedium. They know all about audits. They can absolutely push paper.
There are many specialties within security, and security professionals come from many backgrounds - perhaps, here, we start to understand that Mr. Gwinn’s experience with security professionals has been limited to a particular type of compliance professional. Perhaps, just perhaps, the problem Mr. Gwinn speaks of isn’t with security best practices, but frustration with some in the field who are more focused on compliance than security. Maybe, we understand now that Mr. Gwinn has been deprived of exposure to the fascinating, passionate, and diverse collection of professionals who have made so much progress. It could be that this misguided rant is a reflection of his unfortunate experiences.
They can argue with developers as to why their job really does need to be more difficult.
As a security professional, and a leader, I have the authority to say no, to deny requests, to change policy in ways that make things more challenging. Importantly though, as a security professional, that’s not my job - my job is to solve problems, to find the right balance, to understand the needs of the business and individuals, and find a path to giving them what they need, all while ensuring security is maintained. I can be in the way, or I can help the business to operate both efficiently and securely.
When finding a balance, one must understand the threat model in play and how it applies - one must also clearly communicate the reasoning and intent behind decisions, especially when they aren’t what someone wants to hear. I have found, in almost all cases, that when I explain the risk that comes with a decision, taking the time to ensure that everyone understands the implications, problematic ideas get withdrawn before I have to deliver bad news. Most developers want to do the right thing; they just need help in understanding what that is and why.
Clear reasoning, logical choices, and useful communication make the difference between receiving a thank you and creating the frustration we see in our Mr. Gwinn here.
And for when their security fortress breaks down? They can eventually come up with someone to blame. They can explain what the unaware user, whose computer was exploited in a way the user can’t understand, did wrong. They can identify and blame “the vendor” of a piece of equipment for a malfunction.
Most in the security field would agree, I dare say, that end-users are both the most important line of defense and the weakest. Why is this? Because of two key factors:
- Users have a job to do, and security controls can’t prevent them from doing it. While security controls can restrict a great many things, it is often not possible to implement sufficient security controls to prevent a user from doing harm, without preventing them from accomplishing critical tasks. Thus, it is a constant balancing act to minimize risk while allowing them to work effectively.
- Users have free will. Philosophical debates aside, users are able to make independent decisions beyond the control of policy and training. This ability, when combined with the need to allow them to perform their job effectively, means that they are able to take actions that have a negative impact, no matter what is done by security professionals.
Users must be educated, must receive useful training, and must be trusted to make reasonable decisions; the reality, though, is that some make decisions that are not reasonable and fly in the face of the training and guidance they have received. While the security industry should always strive to create more effective technical controls that do not interfere with a user’s ability to perform their duties, there are limits to what can be done.
I do not say this to completely disregard the point made here, as it is true that we should be able to prevent more issues than we do - though given that we do not have unlimited authority, budgets, or control, there are substantial limits to what can be done. Further, given that many security teams are understaffed and underfunded, most are simply doing the best they can with the resources available to them. It is unfair and unreasonable to hold a security team accountable for the failure of executives to enable them to protect the business effectively.
The core problem is that “industry best practices” are not.
As noted above, the best practices recommended by experts vary by specialty and threat model - every entity has its own threat model, and only by understanding that can you correctly identify the best practices that apply to a given entity and how they should be prioritized. An attempt to minimize and trivialize this factor, treating “best practices” as a universal magic bullet, is unwise, to say the least.
The practices that are recommended today are the result of many years of experience, learning from failures and successes, understanding what works and what doesn’t. The advice of experts, actual experts, is not just a collection of vacuous soundbites born of ignorance - it is hard-won wisdom gained from years of experience. To discount this knowledge and wisdom would be a return to a time when security was little more than an illusion, and anyone who dared to look behind the curtain found anything and everything they were seeking.
It is foolish to see an increase in publicity, driven by more aggressive attackers chasing excessive profits, and conclude that things are only getting worse - doing so completely ignores the progress that has been made over the decades. There may have been a blissful ignorance when attacks were rarely discussed, but they did happen; absent notification laws and aggressive attackers seeking easy profits through a technique that has only matured in recent years, it is likely few would be talked about today. In decades past, most attacks were either not discussed or never even identified - perceived security born of ignorance is not a strategy that works today.
“Industry best practices,” for instance, dictate that network administrators should be boxed in administratively. They should not be able to see what is happening on workstations, servers or storage resources. Server administrators, likewise, should be administratively restricted from being able to monitor network information or anything else that is not directly related to one specific niche job function.
Why is it a bad idea to let one person have access to everything? Is there a risk they could steal secrets because they can see everything that happens? Is there a risk that they could attack the company and cause huge losses? Is there a chance that they could be the target of an attack, and allow an intruder to gain far more access or do more damage? Yes. All of these things have happened.
We have moved away from single points of failure as much as possible, as there’s a substantial risk to a single person having more access than they actually need to effectively do their job. The principle of least privilege is a vital security control that has a huge impact and is one of the key mitigating controls to prevent the very attacks that Mr. Gwinn is complaining about.
There are times that it’s inconvenient; there are times when you have to grab a colleague to work through an issue with you - but it prevents countless incidents. Discarding an essential and effective security control in the name of better security is a fallacy of epic proportions.
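To illustrate how the principle works in practice, here is a minimal sketch - the roles and permission names are hypothetical, purely for illustration - of a role-based access check that grants each role only what it needs:

```python
# A minimal sketch of least-privilege, role-based access checks.
# The roles and permission names below are hypothetical.

ROLE_PERMISSIONS = {
    "network_admin": {"router:read", "router:write", "netflow:read"},
    "server_admin": {"server:read", "server:write", "server-logs:read"},
    "security_analyst": {"netflow:read", "server-logs:read", "alerts:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only what a role explicitly needs; deny everything else by default."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# The network admin can still do their job...
assert is_allowed("network_admin", "router:write")
# ...but a compromised or malicious network admin account can't be used
# to pivot into every server in the environment.
assert not is_allowed("network_admin", "server:write")
```

The key design choice is deny-by-default: an unknown role or permission yields no access, so a single compromised account never becomes a single point of failure.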
These practices limit the opportunity for a technically skilled employee to identify anomalies — a key sign that someone may have breached security and be roaming around preparing to launch the next big cyber attack.
This statement misunderstands modern systems monitoring and the technical ability of skilled employees. A network administrator need not have privileged access to a workstation to understand what’s going on in the network; there are better ways for them to spot anomalies - and they do. The best network engineer I’ve worked with actively avoided privileged access to other systems, as he didn’t want to become a risk himself - but he still maintained a very clear understanding of what was happening on a large and complex international network, spotting anomalies quickly. The same argument can be made for server administrators; they don’t need access to sensitive networking resources to spot something unusual.
There is also the fact that modern system and network monitoring tools provide insight and detection without the need to give everyone privileged access. A properly secured system is reasonably easy to monitor for unusual or malicious activity without violating the principle of least privilege.
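As a minimal sketch of the idea - the record format, hostnames, and threshold are hypothetical - here is how unusual activity can be spotted from network flow data alone, with no privileged access to the machines themselves:

```python
# A minimal sketch of spotting anomalies from network flow records alone -
# no privileged access to workstations or servers required. The record
# format, hostnames, and threshold are hypothetical.

from collections import defaultdict
from statistics import median

# (source_host, bytes_out) pairs, as might come from a flow collector
flows = [
    ("ws-101", 12_000), ("ws-101", 9_500), ("ws-102", 11_000),
    ("ws-103", 10_500), ("ws-104", 480_000_000),  # unusual bulk transfer
]

totals = defaultdict(int)
for host, bytes_out in flows:
    totals[host] += bytes_out

baseline = median(totals.values())
for host, total in totals.items():
    if total > 10 * baseline:  # crude outlier test against the median
        print(f"anomaly: {host} sent {total:,} bytes (median {baseline:,.0f})")
```

Real monitoring stacks are far more sophisticated than this, but the point stands: the telemetry needed to spot anomalies does not require handing out privileged access.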
Implement a “one strike and you are out” hiring policy for information security employees. When they fail, do not let it happen twice.
While this is so absurd that I don’t believe it deserves the dignity of a response, I shall push myself to do so nonetheless.
First, let us consider what this would do, in practice:
- Security employees would have a strong incentive to hide incidents as long as possible, denying that events occurred and intentionally blinding their own ability to detect incidents. Willful ignorance would become the defining characteristic of the team.
- Security employees would have a strong incentive to blame others, for fear of losing their position.
- There would be a strong desire to put extreme restrictions in place with very aggressive policies, which would have a negative impact on the business and the ability of others to work effectively.
- The business would be unlikely to attract effective security employees, creating an endless cycle of breaches and turnover.
Implementing such a policy would create an excessively hostile and adversarial relationship between security and the rest of the company, and would result in security blocking as much activity as possible to minimize the risk to themselves. This would not, even for a second, achieve the desired outcome.
Also, never hire an information security employee who has ever worked for a firm that has had a security incident. Their “industry best practices” did not work for the previous employer, why would they work better for the next victim? These former employees bring disaster.
Why hire someone who knows how to respond when things go wrong, when you can hire someone who has no clue what to do? Even better, the person you’ve hired, who has no idea what to do, does know they will likely be fired. Somehow, I don’t see that leading to a quick and effective recovery.
Should our Mr. Gwinn have the awful fortune to find himself in the midst of a medical emergency, I assume he would first ensure that his doctor has never lost a patient, as clearly the doctor’s method of treating patients would only invite disaster. A good doctor clearly needs no experience in dealing with mistakes or the unexpected, as that won’t happen if they are doing their job correctly. When our Mr. Gwinn flies, I would assume he also screens the pilot, to ensure they’ve never had to make an emergency landing - why would you want someone like Sully Sullenberger flying your plane, when you could have a pilot who’s never made the mistake of letting an engine fail while flying?
The logic here is so horribly broken that it is inconceivable that someone entrusted with the title of professor could write these words.
Encouraging ignorance, and placing the hiding of incidents above recovery, would be a foolish direction for any company to take.
As far as “industry best practices,” try going against the grain. Return to the practices that were in place before ransomware, breaches and other information security disasters became commonplace.
Returning to the failed practices of the past, which the community is still trying to clean up, invites disaster. Longing for the days when breaches were kept secret, and many attacks were never even discovered, invites disaster. Pretending that the threats driving the media coverage don’t actually exist - that invites disaster.
Following the advice offered in his letter invites disaster.
Much time has been lost, much virtual ink spilled, and many hours given over by so many responding to this mistake of an article. There is a great deal wrong, and I could continue. The advice offered is uninformed and mistaken. While I’m sure that Mr. Gwinn had no malicious intent, I would strongly encourage anyone reading his article to disregard it in its entirety.
The path to better security isn’t going backward.