I don't understand why Threema stakeholders are responding so defensively to this. It doesn't work. They're not gaining anything by doing this. There is a smart way to handle academic findings against your protocol: by making the argument that formal analysis is making your system stronger, which it is.<p>Instead, what we seem to be getting is a bunch of mitigating handwaving that suggests the opposite: that when the vendor screws up their cryptosystem, they're going to do an internal assessment about whether the practical details are a big enough deal to merit taking them seriously.<p>To start with: this isn't a paper "by a master student at ETH"; it's a research paper by Kien Tuong Truong and Matteo Scarlata, both grad students in ETH's Applied Cryptography group, and Kenny Paterson, who is one of the best-known academic cryptographers on the Internet.<p>Then: it's true that Threema predates a lot of modern messaging cryptography --- it predates the Signal Protocol double ratchet, for instance. It does not, on the other hand, predate authenticated key exchanges. As the Threema paper points out and cites, OTR had a similar AKE vulnerability long before Threema; the 2005 OTR paper gives the desired property, missing in Threema, a name: "session independence".<p>But, more importantly, it's entirely beside the point whether Threema predates best practices in messaging cryptography. The point is: they're best practices for a reason. You don't get points for effort; your system either works or it doesn't. Secure messaging is a ruthlessly difficult domain to work in, and it should be: these systems are asking people to entrust life-or-death secrets to them (Threema is the official secure messenger of the Swiss military).<p>The vulnerabilities here simply are what they are:<p>1. 
Because the client/server protocol in Threema uses a hacked-up authenticated key exchange, rather than something from the literature, the loss of an ephemeral key destroys its security; it might have been better off not having ephemeral keys at all, since they weaken the security of the protocol.<p>2. Because there isn't any key separation between the protocols in the basic Threema protocol, you can encrypt end-to-end (person-to-person) messages and play them back in the client/server protocol to bypass authentication.<p>3. Because the end-to-end protocol didn't authenticate metadata, attackers can reorder and drop messages.<p>4. In part because the end-to-end protocol is simplistic (it has no forward security, let alone post-compromise security!), it has to do a gross nonce-tracking hack to prevent message replay, which means that Threema clients had to defensively save state to protect themselves from replay; if they ever reinstalled and lost that state, they became susceptible again.<p>5. Again because of a lack of key separation, you could bounce the Threema registration protocol off of the end-to-end protocol and <i>forge authenticated messages</i> from users.<p>6. Because they designed a backup system for user comfort instead of resilience against attackers, an unlocked phone could be used for full account compromise.<p>Threema insists on spelling out all the reasons these attacks are difficult to carry out in practice. Who cares? The point is: don't have these problems. This is academic cryptography research, the point of which is to inform future generations of implementers and researchers about what does and doesn't work in protocol design. Taking potshots at the number of cores required to get the Threema E2E protocol to spoof a client/server login is a waste of time. 
It shouldn't be possible to carry out that attack with any number of cores, and the protocol change required to make that attack impossible is simple.<p>Everybody, most especially Threema, should be going out of their way to extract lessons from research like this, rather than throwing up smokescreens about it.<p>(We've got a podcast episode with this research team going out later today, if you want to hear more from the researchers' side.)
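<p>To make the key-separation lesson (vulnerabilities 2 and 5) concrete: the standard fix is to derive independent sub-keys from the shared secret with distinct domain-separation labels, so a ciphertext built for one sub-protocol is cryptographically meaningless in another. This is a minimal sketch using HKDF (RFC 5869), not Threema's actual construction; the labels are made up for illustration.

```python
# Sketch: domain-separated key derivation so the end-to-end protocol and the
# client/server protocol never share a key. Stdlib only; illustrative labels.
import hashlib
import hmac

def hkdf(master: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-SHA256: extract with an all-zero salt, then expand."""
    prk = hmac.new(b"\x00" * 32, master, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"example shared secret -- not a real key"
# Distinct labels yield independent keys for each sub-protocol:
c2s_key = hkdf(master, b"example-protocol:client-to-server")
e2e_key = hkdf(master, b"example-protocol:end-to-end")
assert c2s_key != e2e_key  # replaying an e2e ciphertext to the server fails
```

With separation like this, the "bounce one protocol off another" attacks stop being possible by construction, rather than being merely expensive.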
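<p>And the metadata-authentication lesson (vulnerability 3) is just as mechanical to apply: bind a sequence number to each message under the MAC (in a real AEAD design, as associated data), and reordering or dropping becomes detectable. A hedged stdlib sketch, with made-up key and helper names, not Threema's protocol:

```python
# Sketch: authenticating a sequence number alongside the payload so a
# receiver detects reordered or dropped messages. Illustrative only; real
# designs put this metadata in the AEAD's associated data.
import hashlib
import hmac

KEY = b"example mac key -- not a real key"

def seal(seq: int, payload: bytes) -> tuple[int, bytes, bytes]:
    """Tag (sequence number || payload) so neither can be altered."""
    tag = hmac.new(KEY, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    return seq, payload, tag

def open_next(expected_seq: int, msg: tuple[int, bytes, bytes]) -> bytes:
    """Accept only the message the receiver is expecting next."""
    seq, payload, tag = msg
    expected_tag = hmac.new(KEY, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected_tag) or seq != expected_seq:
        raise ValueError("reordered, dropped, or forged message")
    return payload

m0, m1 = seal(0, b"first"), seal(1, b"second")
assert open_next(0, m0) == b"first"
# Delivering message 1 when 0 was expected is now detectable:
try:
    open_next(0, m1)
except ValueError:
    pass
```

The cost is eight bytes of metadata per message; the benefit is that an attacker in the delivery path can no longer silently reorder or drop traffic.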