Why Is Unciphered Disclosing RANDSTORM Now?

That’s a question we’ve asked ourselves -- a lot -- in the last several weeks. This isn’t really our business. We help people who’ve locked themselves out of their cryptocurrency wallets recover access to their funds. To do that, we use red team techniques, including reverse engineering the tools our customers originally used to create their wallets. The goal is to gain access to the wallet for the owner. Sometimes, though, we find flaws that give access to a whole class of wallets, and we have to figure out what to do with our discovery.

That’s what happened here. Looking for flaws that would provide access to old wallets, we tested some of the tools used by pioneers in the cryptocurrency industry, and we found that a flaw in one of those tools opened the door to attacks on a lot of wallets, many of them still holding assets. 

With this discovery came a heavy responsibility, and some hard choices. We’re a small startup, and we had never confronted this problem before. This note is the story of what we did, and why.  We probably didn’t get everything right, and we welcome advice on what more we should have done. Because this is not the last time a security flaw in cryptocurrency wallets is going to be found. We’re writing about what we did so that next time the process of rescuing funds at risk will be smoother and better.

The simplest course would have been to store the exploit away and use it whenever it would help one of our customers. We didn’t feel right doing that. We’re pretty good security researchers, but we’re not the only ones who could find this flaw.  If we sat on the exploit and only used it for our business, sooner or later, criminals would find it too. They’d drain all the vulnerable wallets on the blockchain without warning.

Instead, we decided to use the responsible disclosure model that is common in software security testing.  In that model, the researcher who finds a flaw in software reports it to the company that wrote the software, gives them time (say 60 days) to patch the flaw, and then makes a public disclosure of the vulnerability so everyone can avoid it.

But we couldn’t use the software security disclosure model without changes. That’s because patching the software used to create the wallets wouldn’t provide much protection by itself. The flaw was already built into wallets created with the software, and it would stay there forever unless the funds were moved to a new wallet created with new software. All we could do was try to identify companies that were active in wallet creation back in the day, alert them to the risk, and ask them to warn any customers for whom they still had contact information.

Finding all of the companies we should contact was not easy. The flaw was built into the supply chain of a lot of wallet creation tools, so we had to look for companies and tools that might have relied on insecure code and then try to find who was still in the business. If we missed you, or if we called you with our hair on fire about something you later decided was not a crisis, we apologize.

Notifying the customers of firms that were affected by the flaw was not the end of the job. Most of the wallet holders were not going to get notice that way, and some of them didn’t believe the notices they got.  We provided proof of our ability to break into wallets to the companies sending the notices, but we asked them not to disclose all the details of the vulnerability. That’s because we knew the notices would start to show up on the internet.  Our nightmare was that criminals would reconstruct our exploit and begin draining wallets before we could give a public warning of the vulnerability. So we left some bruised feelings at the companies we called, telling them they had to move much faster on notifying customers than they would have liked. In the end, everyone rose to the occasion, but it was an uncomfortable time.

There’s one more way we haven’t followed the usual security disclosure practice, which would typically include proof-of-concept code to demonstrate that the flaw is real and dangerous. We think that only makes sense when security researchers aren’t putting innocent people in danger – that is, when the code is patched and no longer a threat.  With cryptocurrency wallet flaws, the threat lasts a long time. That’s why we haven’t provided full details on how the flaw we found can be exploited.

We’re hoping that delaying the proof of concept will give the true owners of vulnerable wallets time to protect themselves by moving their funds. Bad guys are no doubt already at work trying to create their own proof of concept so they can recreate and implement the attack we found.  But we’re hoping that controlling some of the details will make it hard for them and give the honest owners a head start.

But please! Your head start may only be hours or days.  We can’t do more to protect you. Now you have to protect yourself. Move your money to a new wallet. Just as soon as you can.
