
Ask HN: Review my Idiocy prevention without site-wide https

2 points by notphilatall, over 14 years ago
I've been reading a lot of hype about Idiocy and how long it will take people to implement site-wide https. Unless I'm missing something, it's already possible to avoid session hijacking by combining a few existing approaches and enforcing sequential requests[1] for privileged actions:

The client and server establish a secure connection using https, authenticating via regular ID and password. The server provides the client with a regular session ID, plus a new session_signature_key which the client will hold in memory and never send back to the server.

Each server response includes an unsigned random 64-bit challenge. The client signs this value with its session_signature_key and returns the signature in its next request. The server barks if the signature does not match the expected challenge response from the user.

The server would obviously have to keep the user / session_sig_key / last-challenge map in memory, but that seems easy enough.

[1] Parallel requests should be doable as well, but I'm still thinking about it.
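The scheme above can be sketched in a few lines. This is a hypothetical illustration, not the poster's implementation: the `Server`/`Client` classes are invented for demonstration, and HMAC-SHA256 is assumed as the signing primitive since the post doesn't specify one.

```python
import hmac
import hashlib
import os
import secrets

class Server:
    """Keeps the session_id -> (session_signature_key, last_challenge) map in memory."""

    def __init__(self):
        self.sessions = {}

    def login(self):
        # Performed over https: issue session ID, signature key, and first challenge.
        session_id = secrets.token_hex(16)
        sig_key = secrets.token_bytes(32)  # client holds this; it is never resent
        challenge = os.urandom(8)          # unsigned random 64-bit challenge
        self.sessions[session_id] = (sig_key, challenge)
        return session_id, sig_key, challenge

    def handle_request(self, session_id, signature):
        # Verify the signature over the last challenge, then issue a new one.
        sig_key, challenge = self.sessions[session_id]
        expected = hmac.new(sig_key, challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, signature):
            raise PermissionError("challenge signature mismatch")  # server "barks"
        new_challenge = os.urandom(8)
        self.sessions[session_id] = (sig_key, new_challenge)
        return new_challenge

class Client:
    def __init__(self, session_id, sig_key, challenge):
        self.session_id = session_id
        self.sig_key = sig_key        # held in memory only, never sent back
        self.challenge = challenge

    def request(self, server):
        # Sign the latest challenge; store the fresh one from the response.
        signature = hmac.new(self.sig_key, self.challenge, hashlib.sha256).digest()
        self.challenge = server.handle_request(self.session_id, signature)

server = Server()
client = Client(*server.login())
client.request(server)  # each sequential privileged request chains on the last challenge
client.request(server)
```

Because each challenge is consumed and replaced on every request, a sniffer who captures a signature can't replay it, which is why the scheme as described forces sequential requests.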

1 comment

gdl, over 14 years ago
Maybe I'm missing something, but what are the advantages of this over simply enabling HTTPS for everything? Performance, maybe, but Google has been often quoted these past couple of days calling that a very small issue.

It sounds like you're still assuming HTTPS to start the connection, then transitioning to a different encryption / authentication scheme after that. It would still require the time and effort involved in making the switchover (both HTTPS and the new bits) on a large scale, and any new parts of the plan would need to be made to work with old browsers and operating systems on the client. And since most of the data would be unencrypted, a lot of potentially sensitive data could still be sniffed.

So I think you're overcomplicating things. We already have a good system in place to handle this stuff; people are just too lazy / ignorant / indifferent / resistant-to-change to make it standard. See also IPv6.