So here's what happened: Parity used to have a normal multisig wallet, where every user deploys their own contract and each one is a full copy of the code.
They decided it'd be nice if people could have a lower transaction fee when they deployed a new wallet. So they made one master contract that has all the code. Now when you deploy a new wallet, what you actually deploy is a stub that forwards function calls to the master contract, using a "delegatecall" which lets the master execute its functions in the context of the stub contract.
However, they didn't think through how they might want to change the master contract code in this new situation. In particular, they didn't remove the selfdestruct function. Selfdestruct is perfectly sensible when it's your own contract that you're not using anymore, but it's not so great when it's shared code used by lots of people.
They also forgot to call the initialization function that sets contract ownership, leaving the master contract unowned. Someone came along, made themselves the owner, then called selfdestruct. They posted about it on github, apparently unaware of the full impact of what they'd just done, which was to destroy the code used by all the stub contracts deployed since July 20. Now those stubs no longer have access to functions for withdrawing the ETH they contain.
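For a rough picture of how that plays out, here's a toy model in Rust (purely illustrative, not EVM semantics and not Parity's actual code; names like MasterCode and init_wallet are made up): the master holds all the logic, each stub holds only its own storage and balance, and once the master is destroyed there is nothing left for a stub to delegate to.

    // Toy model: the master holds the code, stubs hold storage, and every call
    // runs the master's code against the *stub's* storage, like delegatecall.
    #[derive(Default)]
    struct WalletStorage {
        owner: Option<String>,
        balance: u64,
    }

    struct MasterCode; // the shared library contract

    impl MasterCode {
        // The ownership-setting function: whoever calls it first on an
        // uninitialized storage becomes the owner (the bug being modeled).
        fn init_wallet(&self, storage: &mut WalletStorage, caller: &str) {
            if storage.owner.is_none() {
                storage.owner = Some(caller.to_string());
            }
        }

        fn withdraw(&self, storage: &mut WalletStorage, caller: &str) -> Option<u64> {
            if storage.owner.as_deref() == Some(caller) {
                Some(std::mem::take(&mut storage.balance))
            } else {
                None
            }
        }
    }

    // A deployed wallet stub: storage and funds only, no logic of its own.
    struct Stub {
        storage: WalletStorage,
    }

    impl Stub {
        // Forward the call to the master, if its code still exists.
        fn withdraw(&mut self, master: Option<&MasterCode>, caller: &str) -> Option<u64> {
            master.and_then(|code| code.withdraw(&mut self.storage, caller))
        }
    }

    fn main() {
        // The master's own storage was never initialized when it was deployed.
        let mut master: Option<MasterCode> = Some(MasterCode);
        let mut master_storage = WalletStorage::default();

        // A user's wallet stub holding funds.
        let mut wallet = Stub {
            storage: WalletStorage { owner: Some("alice".into()), balance: 100 },
        };

        // The attacker claims ownership of the master itself, then "selfdestructs" it.
        if let Some(code) = &master {
            code.init_wallet(&mut master_storage, "attacker");
        }
        master = None; // selfdestruct: the shared code is gone for everyone

        // Every stub is now frozen: no code left to delegate to, funds stuck.
        assert_eq!(wallet.withdraw(master.as_ref(), "alice"), None);
        println!("stuck balance: {}", wallet.storage.balance);
    }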
This master/stub design was also the root cause of Parity's previous multisig hack. Apparently they didn't get a clue and pay for a fresh round of external audits, which I think would have easily caught this problem. In fact, at the end of a post-mortem of the previous hack, published on July 20, they complained that they lacked funds for such things:
https://paritytech.io/blog/the-multi-sig-hack-a-postmortem.h...
In an enterprise or company, as it grows from startup to market gorilla, there are often tales of screw-ups like this that happened to some poor sysadmin or programmer, and the company develops a sort of genetic memory of what not to do. Like children, companies that learn the 'dangerous' things early, before they can do real harm, grow up to be less likely to do something really stupid later in their existence.
But I don't know what vector could be used to pass this sort of cultural knowledge between ethereum contract writers from generation to generation. It needs a 'book of sins' that everyone can read and contribute to in order to ensure that new contracts won't suffer the problems of the early ones.
In the "Known Attacks" section, they advocate using a mutex to prevent reentrant situations, but they're using a boolean, setting it to true before the operation and setting it to false afterwards. This is not secure -- it's possible for two threads to set it to true at the same time! You need to use an atomic concurrency operation like compare-and-swap [1] to implement a mutex.
That looks like it is headed in the right direction. I observe that in companies there are stories (often with participants' names) which exemplify bad practices. Part of the motivation to do better seems to come from not wanting to be the person in one of those stories.
Back in the dual floppy drive days, I vividly remember getting the diskcopy a: b: right but accidentally placing the blank in a:.
I vividly remember doing it a second time some months later. But never again.
In my defense, I was an idiot and only 8 or 9 years old, so fortunately it was just some shareware game disk that I was trying to copy for a friend, only to lose my only copy.
The error made here is in an enormously more complex domain, but it kind of feels like they just accidentally low-level formatted an important diskette.
Yeah, I guess we all learned to write-protect the floppies the hard way... When you lose data you become a bit more paranoid about making mistakes. Fortunately, floppies could be protected with a bit of tape.
I never understood why you would ever write-protect a floppy. Then again, I had a Mac 512k with a single floppy drive. (Insert Disk A; CRANK WHIRR GRIND SPIT; Insert Disk B; WHIRR WHIRR CRANK GRIND SPIT; Insert Disk A...)
I don't do ethereum audits, but isn't the right thing to do here to write your own wallet contract and have someone audit it? Geez, people are storing $10M worth of ether... This is analogous to storing a million dollars in a vault with a $1 lock.
I'd rather use a contract that has been around for a while storing major funds, which has several public, current audits. The Ethereum Foundation has a multisig which has been holding their funds for years, that's probably a good choice. Parity's audit was done before they made a major architectural change.
I do think there's a need for a much simpler standard multisig than the ones being used now.
That requires getting an ed25519 implementation in there with the ability to multisig into contracts. That's what a standard would be, if there were to be one.
By "simpler multisig" I just meant a normal multisig contract with less functionality, where the keys still separately call contract functions and update state.
True multisig transactions like you're talking about are supposed to become possible with the next Ethereum upgrade.
Thank you for that.
Just to make one thing clear, the person who destroyed the contract was after people's money.
He posted a list of contract addresses in the github issue; most of them look like ICO wallets, and each had a transaction from him, made just before he destroyed the contract, trying to drain the wallet to his own address. After that failed, he went for the destroy function, either thinking it would propagate to the wallets and move the funds to him (this part is just a theory), or he is just a jerk who wanted to see the world burn.
Frankly, whether or not it was malicious doesn't matter. The original developers fucked up, and there is no genuine excuse for that (regardless of whether or not he was able to steal those funds).
This probably could have been caught with basic testing.
Just look at all the companies like Microsoft or Apple with millions to spare, or massive community efforts like the Linux kernel: none of them has any shortage of means and resources to make their systems resistant to simple programming bugs.
But history clearly shows that no amount of processes, audits, static analysis or eyepairs helps: bugs just keep on being found wherever security folks look for them, and the harder they look, the more they find. The upside is that these systems can be patched, which is comforting, as clearly there just is no bug-free code.
Yet, people keep on pouring millions of dollars worth of virtual tokens into these experimental blockchain systems that are fundamentally designed so that any bug of a certain class means the money is forever, irreversibly lost, as if these crypto contracts were some mythical new breed of software written by infallible gods.
The saving grace of Ethereum contracts is that they're very short. Many are under a thousand lines of code, so it's not crazy expensive to pay expert auditors to review every line. It also looks like an ideal use of formal verification techniques, and there's a lot of work going into making that practical. Plus you can keep things a lot safer by writing your contracts in the simplest, most obviously-correct way possible.
The fact that you can do these things doesn't save you from people who don't. Parity's last audit happened before they made a major architectural change; if they'd gotten new audits this probably would have been caught. They also have the most complex wallet code I've seen, often dipping into assembly.
It's true that you can't be completely sure of avoiding bugs, but with decent practices I think it's less of a problem on Ethereum than it is in medical equipment, airliners, and nuclear reactors.
It's kind of like, do you think it's humanly possible to correctly encode the rules of chess in, say, 6502 assembly?
You probably won't do it on the first try. And you wouldn't immediately bet a million dollars on it. But is it possible?
Yes.
A multisig wallet shouldn't be more complicated.
Could you prove the correctness of the 6502 chess program? Sure. Formal methods are not black magic. You need a semantics of the machine and a formalization of the rules.
Can you make mistakes in specifying? Yeah, duh. So combine it with peer review and throw in a bug bounty and fuzzing.
Chess is a nice example in this analogy because there are rules that people are less familiar with and might well forget about or implement incorrectly. For example, the complete castling rules are not trivial, such as the requirement that the king not pass through a square currently attacked by an opposing piece.
(Implementing the threefold repetition rule requires maintaining historical state of a different kind than any other chess rule!)
I've seen impressive hyperminimalist chess implementations that were missing these things, but that totally felt like real chess almost all of the time.
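As a concrete illustration of the repetition point above, here is a minimal sketch of the extra bookkeeping that rule forces on you (Rust used purely for illustration; the Position type is a stand-in, and a real implementation would include side to move, castling rights, and en passant status in the comparison):

    use std::collections::HashMap;

    // Stand-in for a full board state; a real encoding would be FEN-like.
    #[derive(Clone, PartialEq, Eq, Hash)]
    struct Position(String);

    // The threefold-repetition rule is the one rule that needs a history of
    // every position reached so far, not just the current board.
    #[derive(Default)]
    struct RepetitionTracker {
        counts: HashMap<Position, u32>,
    }

    impl RepetitionTracker {
        // Record the position reached after a move; returns true once the same
        // position has occurred three times, i.e. a draw can be claimed.
        fn record(&mut self, pos: Position) -> bool {
            let n = self.counts.entry(pos).or_insert(0);
            *n += 1;
            *n >= 3
        }
    }

    fn main() {
        let mut tracker = RepetitionTracker::default();
        let shuffle = Position("kings shuffling back and forth".to_string());

        assert!(!tracker.record(shuffle.clone()));
        assert!(!tracker.record(shuffle.clone()));
        assert!(tracker.record(shuffle)); // third occurrence: draw claimable
        println!("threefold repetition reached");
    }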
There are also some really obscure rules that came about to prevent what you might call exploits. For example, it had to be clarified that you’re not allowed to castle vertically (this could happen if you promoted the king’s pawn to a rook) and that you’re not allowed to promote pawns to a piece of your opponent’s color.
A perfect chess implementation from before those rule changes would no longer be correct!
It’s possible to prove that code perfectly implements a formal spec. It’s tough to prove that the formal spec perfectly describes what you want it to describe.
Actually, that part is easy. With any automated theorem prover you will get requests for additional proofs if any ambiguity or missing information is detected.
This is a very strong if. It means the rule cannot be reduced in plain logic.
The example of forward castling rule is a good one.
A missing definition of some en passant situations is caught too. (E.g. situations of check.)
A completely missing rule cannot be caught with these techniques.
You also need to ask the prover the right questions.
Say: when does a chess game end? The proof requires proving the existence of a halting oracle for any game state. Not quite easy, but possible. To actually verify it, you will have to provide reduction rules, unless you happen to own a supercluster.
There is a way: reformulating everything as an automated mathematical proof. But because even security is driven by capitalism's short-term gains, almost nobody does this.
This is why Rust is so promising. The problem at Apple / MS / Linux / etc. is that they are massive houses built on bare foundations. Everything traces its roots back to C, and back to exploitable vulnerabilities in the code. Rust itself is not close to safe, since it depends on LLVM, which itself can be a source of myriad bugs related to C++.
But in my experience it's proving to be a much more practical foundation, and going forward there is a lot of value in that.
This isn't a code error -- it's a logic error. No amount of "safeness" in the language will save you from telling the language, without breaking any rules, to do something that is "wrong".
For instance:
sudo rm -rf /
Every part of that is completely sound and correct. There are no buffer overflows, memory corruption, or anything. You wrote a completely correct command to do something (probably) really wrong.
>This is preventable if you specify fully what you actually want to do.
In this case it was fully specified that sudo should run rm with all privileges, which in turn will recursively delete the root filesystem.
The entire execution is fully specified and will execute without fault. There is no mistake in `sudo rm -rf /`.
Logical faults may not be preventable with provers, such as when underlying assumptions are wrong.
For example, a prover could have verified the Parity library as fully okay because it assumes that the initialization function will not be called without delegation. That underlying assumption, X -!> Y (X cannot lead to Y), is wrong: in fact X -> Y is the case, but the prover only verified that Y -!> X.
Provers essentially duplicate your code, forcing you to express the solution and/or problem twice, such that if you make a typo along the way, one of the pieces will complain about the other.
But they cannot prevent higher-level mistakes.
Better languages can help to avoid certain classes of errors, but won't prevent bugs in the business logic of your code, so I'm not sure how this is relevant to contracts.
We do occasionally run into bugs between Rust and C++ semantics with LLVM, but they’ve been pretty minor, and LLVM has largely fixed them, to the point of adding intrinsics to fix these issues.
Bugs happen no matter the language; even if LLVM were written in Rust, it would have them.
Has there ever once been a critical llvm vulnerability? I've seen occasional bugs related to it, but they tend to crash the compiler rather than produce bad binaries, no?
Security-related compiler bugs are very rare, but they do exist, at least in graphics shader compilers: http://www.doc.ic.ac.uk/~afd/homepages/papers/pdfs/2017/OOPS... . My impression is that many of these shader compilers are based on LLVM, but this is mostly conjecture. (I do recall a video game crashing on me with internal LLVM errors from the graphics driver.)
He claimed he tried it thinking it couldn't go through. And that seems plausible, since he was researching similar attacks from the past.
He had no reason to make the github issue afterwards if he was a blackhat who knew what he was doing. His entire presentation doesn't seem like that. (Or even like someone who is trying, for no real reason, to look innocent. He didn't have to be public at all.)
Just to make one thing clear, you're just assuming. The guy clearly stated that he just tried out different things and never really intended to steal anything. If you give out an API, make sure that I can't break it by calling it; otherwise it's your fault, not mine. The guy also seemed pretty nervous in the gitter channel: he asked if he had really broken something and if he would get arrested.