The tokens, called Gemini Dollars, can be sent from person to person on the Ethereum blockchain with the help of specialized programs called smart contracts. To confirm its tokens are actually backed by traditional dollars, Gemini released a report from an independent accounting firm. The company also released a separate audit report, focusing not on finances but on the underlying software code, from a New York security firm called Trail of Bits.
“The goal of the assessment was to discover flaws that could allow an attacker to perform actions meant only for the issuer, Gemini,” wrote Trail of Bits CEO Dan Guido in a letter released by Gemini, explaining that any issues found in the test were fixed.
Trail of Bits is one of several companies offering technical security audits for smart contracts that handle everything from initial coin offerings raising money for blockchain startups to complex digital marketplaces built atop blockchain networks. Smart contracts are specialized programs run by the computers that power blockchains, usually with the power to receive and distribute cryptocurrency or other digital tokens when certain conditions are met. And experts say writing them requires new ways of thinking that can trip up inexperienced programmers.
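The "receive and distribute funds when conditions are met" idea can be sketched in a few lines. This is plain Python, not a real smart contract, and the class and names here are invented for illustration:

```python
# A minimal Python sketch (not a real smart contract) of the core idea:
# code that holds funds and releases them only when a stated condition is met.

class EscrowSketch:
    """Holds a deposit and pays the recipient once a condition is true."""

    def __init__(self, deposit: int, recipient: str):
        self.balance = deposit
        self.recipient = recipient
        self.paid = False

    def settle(self, condition_met: bool):
        # On a real blockchain, every node runs this same logic and agrees
        # on the outcome; no single party can skip the condition check.
        if condition_met and not self.paid:
            payout, self.balance = self.balance, 0
            self.paid = True
            return (self.recipient, payout)
        return None

escrow = EscrowSketch(deposit=10, recipient="seller")
assert escrow.settle(condition_met=False) is None   # funds stay locked
assert escrow.settle(condition_met=True) == ("seller", 10)
```

The catch, as the auditors quoted here point out, is that once deployed, logic like this typically can't be patched, so any flaw in the condition check is permanent.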
“THERE’S SOME BUGS THAT ARE EGREGIOUS”
All software can have bugs, but since smart contracts are often the only way to determine who owns valuable cryptographic assets, flaws in their code can be particularly disastrous. And naturally, if they're found, they can be eagerly exploited by hackers looking to steal digital funds. Companies have raised more than $20 billion through ICOs this year alone, according to estimates from the ICO tracking company CoinSchedule.
“There’s some bugs that are egregious,” Guido tells Fast Company. “If you make them, not only are they highly severe—they’re also highly visible to someone looking at your smart contract code.”
The most famous such error was in the code for a decentralized, Ethereum-powered investment fund called The DAO, short for “Decentralized Autonomous Organization.” In 2016, hackers used bugs in its code to siphon off about $50 million in cryptocurrency, though the Ethereum blockchain itself was later tweaked to return the stolen money.
Since then, security experts have worked to find factors that can cause smart contracts to malfunction and develop tools to help automatically check code for errors. They’ve also conducted audits, often released publicly, of new smart contracts that can help reassure investors and end users that they won’t lose their money to a programming glitch.
Trail of Bits has released a number of open source tools to analyze and test programs written in Solidity, the programming language commonly used to craft smart contracts. The standard tools for developing Solidity programs aren’t as sophisticated as those for more established languages, so they can allow bugs to slip through until specialized software is deployed, says Guido.
“It lacks a lot of those safety guarantees that other modern languages have,” he says.
WHEN SMART CONTRACTS FALL PREY TO STUPID ERRORS
Smart contracts can fall prey to some of the same types of bugs that affect other software, such as basic arithmetic errors or programmers accidentally reusing the same variable name for multiple values. But they can also be affected by special classes of errors: Limitations on the computing power available to blockchain code can be exploited to trigger denial-of-service attacks, overwhelming smart contracts with more data than they can process. Caps on the sizes of certain numeric values can lead to errors where overly large numbers wrap back around to zero, similar to the infamous Y2K bug. That can potentially result in large accounts being reduced to a few pennies, or even negative balances being treated as hugely positive.
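The wrap-around behavior is easy to demonstrate. Ethereum's common `uint256` type holds values modulo 2^256; the sketch below simulates that arithmetic in Python (where integers are unbounded) to show both failure modes the paragraph describes:

```python
# Simulating Ethereum's fixed-width uint256 arithmetic in Python.
# Real Solidity (before version 0.8) wrapped silently like this unless
# developers added explicit checks.
UINT256_MAX = 2**256 - 1

def uint256_add(a: int, b: int) -> int:
    """Add two uint256 values with the wrap-around a naive contract would see."""
    return (a + b) % (2**256)

def uint256_sub(a: int, b: int) -> int:
    """Subtract; underflow wraps a small balance into an enormous one."""
    return (a - b) % (2**256)

# Overflow: one past the maximum wraps back to zero.
assert uint256_add(UINT256_MAX, 1) == 0

# Underflow: debiting more than a balance yields a huge "positive" balance.
balance = 5
assert uint256_sub(balance, 6) == UINT256_MAX
```

That last line is exactly the "negative balance treated as hugely positive" scenario: an account holding 5 tokens that is debited 6 suddenly appears to hold the maximum possible amount.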
Some unsafe code can be detected with automated analysis tools without much human intervention: If a contract allows any user to extract its funds, it’s probably a mistake, says Petar Tsankov, co-founder and chief scientist of ChainSecurity, a Swiss startup spun out from the prestigious technical university ETH Zurich. ChainSecurity has developed a tool called Securify, which can quickly spot and flag potential issues in Solidity code.
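The kind of property such a checker flags can be illustrated with a toy model. This is my own drastic simplification, not Securify's actual analysis: real tools reason over compiled bytecode, but the rule being checked is the same one Tsankov describes, a function that moves funds out without restricting who can call it:

```python
# A toy sketch of an automated check: flag any function that transfers
# funds out of the contract but is reachable by any caller.
# The contract "summary" format here is invented for illustration.

def find_unrestricted_withdrawals(functions):
    """Return names of functions that move ether without an access check."""
    return [
        f["name"]
        for f in functions
        if f["sends_ether"] and not f["checks_caller"]
    ]

# A hypothetical contract summary: 'withdraw' pays out but never checks the caller.
contract = [
    {"name": "deposit",    "sends_ether": False, "checks_caller": False},
    {"name": "withdraw",   "sends_ether": True,  "checks_caller": False},
    {"name": "ownerSweep", "sends_ether": True,  "checks_caller": True},
]

assert find_unrestricted_withdrawals(contract) == ["withdraw"]
```

Because the rule is purely structural, no understanding of the contract's business logic is needed, which is why this class of bug can be caught "without much human intervention."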
But other bugs are only visible as flaws within the context of what a contract is actually supposed to do, meaning the first phase of a security audit often involves sitting down with developers to understand exactly what their contracts are hoping to accomplish.
“Typically, there’s very informal documentation on what the contract is supposed to do,” says Tsankov.
Then typically comes a mix of human analysis and automated tests to determine if it’s possible to get the contract to violate its specifications. Trail of Bits has developed a tool called Echidna that can quickly execute smart contracts with a variety of inputs, looking for ways to get the code to misbehave. When bugs are found, security testers will flag them for developers and help ensure they’re resolved well before code is deployed on a live, public blockchain.
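The fuzzing approach can be sketched as follows. This is plain Python in the spirit of what the article describes, not Echidna itself, which fuzzes actual Solidity contracts; the toy token, its deliberately narrow 8-bit balances, and the invariant are all invented for illustration:

```python
import random

# A rough sketch of property-based fuzzing: hammer a toy token with random
# calls and check an invariant the developer specifies -- here, that the
# total of all balances never changes.

class ToyToken:
    """Deliberately buggy token: balances are 8-bit and wrap on overflow."""

    def __init__(self):
        self.balances = {"alice": 200, "bob": 200}

    def transfer(self, src: str, dst: str, amount: int):
        if self.balances[src] >= amount:
            self.balances[src] = (self.balances[src] - amount) % 256
            self.balances[dst] = (self.balances[dst] + amount) % 256  # bug: wraps past 255

def invariant_holds(token: ToyToken) -> bool:
    return sum(token.balances.values()) == 400

def fuzz(seed: int = 0, iterations: int = 1000) -> str:
    """Run random transfers until the invariant breaks or we give up."""
    rng = random.Random(seed)
    token = ToyToken()
    for _ in range(iterations):
        src, dst = rng.sample(["alice", "bob"], 2)
        token.transfer(src, dst, rng.randrange(1, 201))
        if not invariant_holds(token):
            return "invariant violated"  # the fuzzer surfaced the overflow bug
    return "no violation found"
```

Within a handful of random transfers, a recipient's balance exceeds 255 and wraps, tokens vanish, and the invariant check fails, turning a subtle arithmetic bug into a concrete, reproducible test failure that auditors can hand back to developers.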
Security firms generally say their clients are getting better at writing secure smart contract code as they learn about common errors. It’s a pattern that’s been seen before in other corners of the tech industry including the web itself, as technologies mature and programmers share effective practices, says Zerouali.
But at the same time, crypto startups that at one point only needed audits for the contracts behind their initial coin offerings are now using their ICO revenue to build out more sophisticated offerings. And those include more intricate smart contracts that need to be audited for bugs of their own, says Tsankov.
“Now, they all start coming back to us,” he says. “The level of the complexity is very quickly rising.”