I thought this was a good post, but I wasn't impressed with the criticisms of the other blog posts. Okay, perhaps I'm biased, because I wrote one of them, but how about I try to defend the other?<p>The Matasano post is here:<p><a href="http://matasano.com/articles/javascript-cryptography/" rel="nofollow">http://matasano.com/articles/javascript-cryptography/</a><p>Perhaps the most objectionable thing about the Matasano article is its title. Otherwise it does a very good job of criticizing a particular way of engineering web cryptography that is, for lack of a better term, total bullshit. But is the approach criticized in the Matasano post used in the real world?<p>Let's try an experiment! Go to google.com and type in "encrypted chat".<p>If your results are similar to mine, one of the top 3 results will be "chatcrypt.com". Let's read the "How It Works?" page:<p>> Most people thinks that if a website uses a HTTPS connection (especially with the green address bar) then their "typed-in" informations are transmitted and stored securely. This is only partially true. The transmission is encrypted well, so no third party can sniff those informations, but there is no proof that the website owners will handle them with maximum care, not mentioning that the suitable laws can enforce anyone to serve stored data for the local authorities.<p>Okay, so this site attempts to implement end-to-end encryption in a web browser. Except... what's the problem? Oh, it looks like chatcrypt.com isn't served over HTTPS. In fact, if we try to visit the site over HTTPS, it doesn't work at all.<p>chatcrypt.com claims to keep your traffic secure using end-to-end cryptography implemented in JavaScript, except the JavaScript is being served in plaintext and is therefore easily MitMable.<p>A top-3 Google result for "encrypted chat".<p>Is the Matasano post that unreasonable? (besides the title) It pretty much describes that sort of site to a tee.
Wow.<p><i>A construction or implementation is secure if an adversary, given a certain level of power, is unable to achieve a given objective. The level of power an adversary is assumed to have and their ultimate objective is called the threat model.<p>If a new construction is secure under a new threat model that either increases the amount of power an adversary can have or makes the adversary's objective broader, the new construction is said to have a higher level of security.</i><p>This is what we need more of in security discussions. So many discussions, here on HN but also, well, everywhere, are really misunderstandings about which threat model to assume. People get into hot-headed fights about whether some solution somewhere is or is not "secure", when really all they disagree about is which definition of "secure" to use.<p>Well done! I propose that security-related blog posts take some time out to casually define these terms over and over again, for a while, until we can all just assume them known and be done with all the vague, imprecise nonsense.
I suppose I'm expected to give a full-throated defense of the Matasano post, which I wrote, but I'm not going to. While I don't dislike the post <i>as much</i> as this author appears to, I don't much like it either. I wrote it in a single draft, all at once, as the sort of message board comment I'd write once and maybe refer back to in the future. I didn't promote it on HN and I'm not the reason it keeps getting cited.<p>None of this bickering changes a simple truth: when a web mail provider claims to provide "NSA-proof" end-to-end encryption, hosted in Switzerland just to be safe, using software that <i>you don't have to install on your computers at all</i>, then you need to assume that web mail provider can read your email, and so can anyone who can coerce that provider into doing something. If you believe that --- and you should --- then I don't care what you think about the rest of the Matasano article.
One problem with the "passive adversary" attack is that even if the nonce+HMAC protocol defeats the passive adversary, you as a user have no way of verifying whether or not your adversary is passive. Or whether they exist, or, indeed, anything about them, as in the real world, you don't get to pick your adversaries. The user needs a way to determine whether the connection is secure before they can trust it, because they can't (correctly) assume that only passive adversaries exist.<p>So, if that is the best in-browser crypto can do, then it is still basically useless, unless you get to choose your adversary. And "active adversary" software is off-the-shelf tech, not some sort of bizarre thing only the NSA has access to. Active adversary is the lowest baseline of attack worth talking about.
It's not clear to me if the author is <i>endorsing</i> the use of browser crypto in any particular scenario. Regardless, probably the most common reason for wanting browser crypto is to protect the data <i>before</i> it hits the server, thus protecting against a malicious or compromised server.<p>For example, consider a web-based mail client. You want to send an encrypted message, say via PGP, and you don't want the server to be able to read it, even if the server is evil. You'd <i>like</i> to be able to do the PGP encryption 100% in-browser, with no browser plugins or extensions necessary.<p>I think that's the most common category of use-case for browser crypto. Unfortunately, it's one where browser crypto plainly doesn't work. The whole point here is to defend against an evil server, but if the server is evil, <i>it will send you evil crypto JS.</i> TLS doesn't help you. Nobody's impersonating the server or altering the JS file in transit. You're getting an authentic copy of the JS file from the real server. It just happens to be an authentic copy of an <i>evil</i> JS file.<p>Given that, what <i>can</i> you do with browser crypto, practically speaking?
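To make that use case concrete, here's roughly what "100% in-browser" PGP encryption looks like with openpgp.js (assuming its v5-style API; the recipient key is a placeholder). The problem described above is that this very code is delivered by the mail provider's server, so an evil provider can ship a version that quietly leaks the plaintext or the key:

    // Sketch of in-browser PGP encryption with openpgp.js (v5-style API assumed;
    // in a real page the library would be loaded via a bundle or script tag).
    import * as openpgp from "openpgp";

    async function encryptForRecipient(text, recipientArmoredKey) {
      const publicKey = await openpgp.readKey({ armoredKey: recipientArmoredKey });
      const message = await openpgp.createMessage({ text });
      // Returns ASCII-armored ciphertext that the webmail client can hand to the server.
      return openpgp.encrypt({ message, encryptionKeys: publicKey });
    }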
One aspect of in-browser functionality the OP mentions is "offline". However, browsers are pretty cool in that they can mix offline and online. You can open a local HTML file and it can then make online requests. Alternatively, you can request an HTML file online that can then access local files.<p>This ability to mix offline and online content has, I think, a lot of potential to improve client-side encryption. Specifically, client-side encryption coupled with an unhosted webapp[1].<p>I've been exploring this potential with my byoFS[2] project, and made an example end-to-end encrypted chat demo[3]. You can request the app anonymously (or even save it and open it locally). The app then lets the user connect an online datastore (e.g. Dropbox) to save the encrypted chats.<p>This separates who serves the anonymous static webapp from who hosts the authenticated datastore, and makes it much harder to mount a targeted JavaScript attack (the most common attack in the Snowden leaks).<p>[1] - <a href="https://unhosted.org/" rel="nofollow">https://unhosted.org/</a><p>[2] - <a href="https://github.com/diafygi/byoFS" rel="nofollow">https://github.com/diafygi/byoFS</a><p>[3] - <a href="https://diafygi.github.io/byoFS/examples/chat/" rel="nofollow">https://diafygi.github.io/byoFS/examples/chat/</a>
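The encrypt-locally-then-store pattern boils down to something like the following (a simplified sketch, not byoFS's actual code; putToDatastore stands in for whatever backend the user connects, e.g. Dropbox):

    // Simplified sketch of "encrypt locally, store in a user-connected datastore".
    // key is an AES-GCM CryptoKey held client-side; putToDatastore is a placeholder
    // for the connected storage backend's upload call.
    async function saveEncrypted(key, name, plaintext) {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext)
      );
      // The datastore only ever sees the IV and encrypted bytes.
      await putToDatastore(name, new Blob([iv, ciphertext]));
    }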
In a project I'm working on [1], I'm planning to provide a browser extension that verifies that the source code is digitally signed and that it matches the source code published on GitHub. I believe this creates a pretty good security model for a web-based app, even better than that of most desktop programs.<p>Some more information from the security page [2]:<p>The browser extension provides improved security by verifying the integrity of the files served by the server. The verification is done using two factors:<p>- Cold storage signature verification: In addition to SSL, static files (HTML/CSS/JavaScript) are signed using standard Bitcoin message signatures, with a private key that is stored offline and encrypted. This ensures that the content served from the webserver was not tampered with by a third party.<p>- Comparing against the code in the GitHub repository: The source code from the GitHub repository is built on Travis-CI and the resulting hashes are published publicly on Travis's job page. The extension ensures that the content served by the webserver matches the open-source repository on GitHub.<p>If an attacker gains control over the web server, he still only has access to information the web server already knows (which is very little). To get sensitive information, he would have to modify the client-side code to send back more data to the server.<p>For an attacker to successfully mount such an attack against someone with the browser extension, he would have to:<p>- Gain access to the web server.<p>- Gain access to the personal computer of a developer with commit access to the GitHub repository. [3]<p>- Commit his changes to the public GitHub repository, where they can be seen by anyone. [3]<p>- Gain physical access to the offline machine with the private key and know the passphrase used to encrypt it.<p>[1] <a href="https://www.bitrated.com/" rel="nofollow">https://www.bitrated.com/</a><p>[2] <a href="https://www.bitrated.com/security.html#browser-extension" rel="nofollow">https://www.bitrated.com/security.html#browser-extension</a><p>[3] That's assuming that GitHub and Travis-CI are themselves secure. Gaining access to either of them would make those steps unnecessary.
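Roughly, the second factor amounts to hashing what the server actually delivered and comparing it against the hashes from the reproducible CI build (an illustration of the idea, not the actual extension code; publishedHashes is a placeholder for wherever those hashes are fetched from):

    // Illustration of the hash-comparison check, not the actual extension code.
    // publishedHashes maps file paths to the SHA-256 hashes published by the CI build.
    async function verifyServedFile(path, publishedHashes) {
      const body = await (await fetch(path)).arrayBuffer();
      const digest = await crypto.subtle.digest("SHA-256", body);
      const hex = Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, "0")).join("");
      if (hex !== publishedHashes[path]) {
        throw new Error("Served file does not match the published build: " + path);
      }
    }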
Attesting software (i.e. JavaScript, even from third parties) might be possible if <a href="https://w3c.github.io/webappsec/specs/subresourceintegrity/" rel="nofollow">https://w3c.github.io/webappsec/specs/subresourceintegrity/</a> gains traction.
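For those unfamiliar: the proposal lets a page ship an expected hash alongside each script reference, and the browser refuses to execute anything that doesn't match. A small sketch of what that looks like when injecting a script from JavaScript (the hash value is a placeholder); the same integrity attribute can be written directly in markup:

    // Subresource Integrity applied to a script injected from JavaScript.
    // Equivalent markup: <script src="https://cdn.example.com/lib.js"
    //                            integrity="sha384-PLACEHOLDER" crossorigin="anonymous"></script>
    const s = document.createElement("script");
    s.src = "https://cdn.example.com/lib.js";
    s.integrity = "sha384-PLACEHOLDER"; // base64 SHA-384 of the expected file contents
    s.crossOrigin = "anonymous";
    document.head.appendChild(s);       // the browser discards the script if the hash differs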
Why do people use styles like these that push all the content uncomfortably to the sides? 40% of the screen is dedicated to what? The blog title and a link home.<p>Edit: <a href="http://i.imgur.com/62l4zCG.png" rel="nofollow">http://i.imgur.com/62l4zCG.png</a> 500% better
Can any of you comment on my scheme described here: <a href="http://ashkash.github.io/ajaxcrypt/index.html" rel="nofollow">http://ashkash.github.io/ajaxcrypt/index.html</a>
This should resist even active adversaries (a rough sketch of the last two steps follows the list):<p>- Statically encrypt content and publish it on an HTTP server
- Transmit this content via HTTP to an iframe in the client browser
- Transmit the decryption routines and key material over HTTPS
- The HTTP iframe locally sends the ciphertext to the HTTPS iframe via window.postMessage()
- The HTTPS iframe decrypts the content (with a pre-shared key) and renders it on the page
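A minimal sketch of those last two steps, assuming AES-GCM with the IV prepended to the ciphertext; the origins, element id, and presharedKeyBytes are placeholders:

    // In the HTTP-served page: forward the fetched ciphertext to the HTTPS iframe.
    async function forwardCiphertext() {
      const frame = document.getElementById("decryptor"); // the HTTPS iframe
      const ciphertext = await (await fetch("/content.enc")).arrayBuffer();
      frame.contentWindow.postMessage({ ciphertext }, "https://secure.example.com");
    }

    // In the HTTPS-served iframe: decrypt with the pre-shared key and render.
    window.addEventListener("message", async (event) => {
      if (event.origin !== "http://content.example.com") return; // accept only the expected sender
      const data = new Uint8Array(event.data.ciphertext);
      const iv = data.slice(0, 12); // 96-bit IV prepended to the AES-GCM ciphertext
      const key = await crypto.subtle.importKey(
        "raw", presharedKeyBytes, "AES-GCM", false, ["decrypt"]);
      const plaintext = await crypto.subtle.decrypt(
        { name: "AES-GCM", iv }, key, data.slice(12));
      document.body.textContent = new TextDecoder().decode(plaintext);
    });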
Ten years from now, when web security is even more laughable and anemic than it already is, some of us are going to remember discussions like these where application developers at large ignored the warnings from the established crypto community. Some of us are old enough already to remember this pattern happening before.<p>I understand the strong reaction to the actions of the NSA, but all this is doing is providing the appearance of security while not making it any more difficult for adversaries like the NSA.
Interesting coincidence. I just wrote <a href="http://vnhacker.blogspot.com/2014/06/why-javascript-crypto-is-useful.html" rel="nofollow">http://vnhacker.blogspot.com/2014/06/why-javascript-crypto-i...</a>, in which I explain why the threat model implied in the Matasano article doesn't apply to most applications.
I think the main reason people keep wanting to do this is that web developers would like to work on this problem but don't have immediately applicable skills. For most of them, just using existing crypto libraries well will require a great deal of learning, at least as daunting as becoming a good front-end developer, much less engineering new cryptosystems. It's the difference between being able to code and being able to write a high-quality optimizing compiler, for instance. With study and hard work you can use the fruits of the crypto community well...but you have to start by realizing where you are starting from. In-browser JavaScript isn't just another programming language and runtime that's completely akin to C and the C runtime. The Matasano article does a great job of describing why.
No such thing as a secure keystore? He needs to look harder. Aside from tamper-proof hardware, which exists in smart cards and TPM chips, most operating systems protect keys with filesystem ACLs. Yes, running as "root" means you can get the keys... so you have to protect them.
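To make the ACL point concrete, a tiny Node.js sketch (Node because the thread is about JavaScript; keyPath and keyBytes are placeholders): the key file is created readable only by the owning user, so the OS refuses any other non-root account.

    // Protecting a key file with filesystem permissions (Node.js).
    const fs = require("fs");

    function storeKey(keyPath, keyBytes) {
      // Mode 0o600: readable and writable by the owning user only.
      fs.writeFileSync(keyPath, keyBytes, { mode: 0o600 });
    }

    // Reading the file from another non-root account fails with EACCES.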