That's a fairly big problem I hadn't read about before, and the write-up is great. But I don't think the proposed solutions are realistic.<p><pre><code> [...] all those web sites must use exactly the same format for
authentication challenges.
[...] the key cannot be used for any automated purpose [...]
[...] all applications which use the key for signing must include
such a context string, and context strings must be chosen to avoid conflicts.
[...] better than either of the above is to use a different key for each purpose.
</code></pre>
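<p>For what it's worth, the quoted context-string mitigation amounts to domain separation at signing time. A rough sketch in Python with PyNaCl (the function name and framing are my own illustration, not the article's):<p><pre><code>from nacl.signing import SigningKey

def sign_with_context(key: SigningKey, context: str, payload: bytes) -> bytes:
    # Length-prefix the context so "ab" + "c" cannot collide with "a" + "bc".
    framed = len(context).to_bytes(2, "big") + context.encode() + payload
    return key.sign(framed).signature

key = SigningKey.generate()
sig = sign_with_context(key, "example-webauth-v1", b"challenge bytes")
</code></pre>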
That's shifting protocol problems to standardization and human caution. It may work, but I'm skeptical.<p>Maybe a change to the cryptographic protocol could fix the problem? For example, instead of<p><pre><code> Server: server_value // domain + action + nonce + ...
Client: sign(server_value)
</code></pre>
we use<p><pre><code> Server: server_value
Client: sign(client_random) + sign(server_value XOR client_random)
</code></pre>
and the server verifies both signatures and checks that the two signed values, when XOR'd together, recover the server_value it sent. This way the client only ever signs data blinded by its own randomness, yet can still authenticate.<p>This is not supposed to be a final solution, and it likely has flaws, but I think it's a better direction to pursue.
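<p>A minimal sketch of that exchange using Ed25519 via PyNaCl (variable names and the 32-byte sizes are my own choices, not part of any spec):<p><pre><code>import os
from nacl.signing import SigningKey

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Client side: blind the server's value with fresh randomness,
# then sign both the randomness and the blinded value.
client_key = SigningKey.generate()
server_value = os.urandom(32)  # stands in for hash(domain + action + nonce + ...)

client_random = os.urandom(32)
sig1 = client_key.sign(client_random).signature
sig2 = client_key.sign(xor(server_value, client_random)).signature
# Client sends: client_random, sig1, sig2

# Server side: check both signatures, then check that XOR-ing the two
# signed values recovers the server_value it originally sent.
verify_key = client_key.verify_key
verify_key.verify(client_random, sig1)                     # raises if invalid
verify_key.verify(xor(server_value, client_random), sig2)  # raises if invalid
assert xor(client_random, xor(server_value, client_random)) == server_value
</code></pre>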
This is the reason I rejected both protobuf and Thrift for my own project. I ended up developing my own protocol, which guarantees a normalized stream: there is only one way of encoding a given set of data, and any other way causes an error in the parser.<p>Here it is (still under development): <a href="https://bitbucket.org/binarno/goingthere" rel="nofollow">https://bitbucket.org/binarno/goingthere</a><p>It can be embedded into streams, but it has a "strict" flag that forces the parser to throw an error if unexpected data is found in the stream, as in the sketch below. Optional tags not specified in the schema simply cannot be there, and all tags must be encoded in a specific order.<p>I'm still looking for a simple protocol that gives a normalized representation that is always the same for the same set of data; I hate developing my own things and prefer to steal ready-made ones :-)
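<p>To make the "strict" idea concrete, here is a hypothetical toy tag-length-value parser (not the actual goingthere code) that rejects unknown tags and any tag out of ascending order, so each set of data has exactly one valid encoding:<p><pre><code>def parse_strict(stream: bytes, allowed_tags: set) -> dict:
    # Each field is: 1-byte tag, 1-byte length, then the value bytes.
    fields, i, last_tag = {}, 0, -1
    while i < len(stream):
        tag, length = stream[i], stream[i + 1]
        if tag not in allowed_tags:
            raise ValueError("unexpected tag %d" % tag)    # no unknown extras
        if tag <= last_tag:
            raise ValueError("tag %d out of order" % tag)  # one canonical order
        fields[tag] = stream[i + 2:i + 2 + length]
        last_tag, i = tag, i + 2 + length
    return fields
</code></pre>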
One problem with this article (though it does not invalidate the main point that you should not reuse keys for different protocols) is its use of SSH authentication as an example.<p>In SSH, what gets signed in publickey (and hostkey) authentication is not controlled by the server (or the client). Instead of some random value supplied by the other side, the signed data includes a session ID derived from the result of the first Diffie-Hellman key exchange, and therefore unpredictable to either side of the connection.
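<p>Concretely, the blob the client signs in SSH publickey authentication (RFC 4252, section 7) looks roughly like this; the helper names here are mine:<p><pre><code>import struct

SSH_MSG_USERAUTH_REQUEST = 50

def ssh_string(b: bytes) -> bytes:
    return struct.pack(">I", len(b)) + b  # 4-byte length prefix

def userauth_signed_blob(session_id: bytes, user: str,
                         algo: str, pubkey_blob: bytes) -> bytes:
    # session_id is derived from the first key exchange, so neither
    # side can choose the signed bytes unilaterally.
    return (ssh_string(session_id)
            + bytes([SSH_MSG_USERAUTH_REQUEST])
            + ssh_string(user.encode())
            + ssh_string(b"ssh-connection")
            + ssh_string(b"publickey")
            + b"\x01"                      # boolean TRUE: signature follows
            + ssh_string(algo.encode())
            + ssh_string(pubkey_blob))
</code></pre>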
Why does the server present any data to be signed at all? The client could generate a string itself: "I am Sam, logging into Google, and the time is now", signed by Sam.<p>The server has the public key and verifies.
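<p>A sketch of that self-asserted variant (hypothetical field names, PyNaCl for the signature):<p><pre><code>import time
from nacl.signing import SigningKey

client_key = SigningKey.generate()

# Client composes and signs its own statement: who, to whom, and when.
message = ("user=sam audience=google.com time=%d" % int(time.time())).encode()
signature = client_key.sign(message).signature

# Server side: verify the signature, then check the timestamp is fresh.
client_key.verify_key.verify(message, signature)
</code></pre>
One catch: without a server-chosen nonce, that signed message can be replayed by anyone who captures it until the timestamp window expires, which is one reason challenge-response designs have the server supply fresh data.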