Encryption is less secure than we thought

268 points, by qubitsam, almost 12 years ago

17 comments

timtadh, almost 12 years ago
If you really want to understand this you should read the paper. However, I will try my best to summarize my understanding of what is going on. (Disclaimer: I am not a professional cryptographer, but I know something about crypto-complexity.)

Many proofs in cryptography start with the following assumption: assume x is a bitstring of length N drawn from a uniformly random distribution of bit strings. This assumption is then used as the basis of the proof of the cryptographic construct.

For instance, in a provably secure stream cipher you start with that assumption, then proceed to show that if weak one-way functions exist you can construct a secure pseudorandom number generator that is polynomially indistinguishable from a true random source. You then use your assumed random bit string, x, as the key to the pseudorandom number generator to generate a one-time pad to XOR with your message.

Since the secure pseudorandom number generator is indistinguishable from a true random number generator (by a polynomial attacker), the message is secure (from a polynomial attacker). However, this is all predicated on the key being drawn from a uniform distribution. If the key is NOT drawn from a uniform distribution, then there may be a potential speed-up available to the attacker.

What the paper shows is that it is often assumed that words drawn from an IID distribution of letters can be treated as if the distribution of words is uniform (given a sufficiently large word size). However, this distribution, while very flat, is not quite uniform. They show how this can be exploited using the "Guesswork" framework to come up with a tighter bound on the time it would take to guess a chosen word. This bound is much better than the traditional bound used.

What this means is that more margin needs to be given than previously thought when dealing with keys derived from non-uniform distributions. Such keys include passwords.
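The construction described above is easy to sketch. Below is a minimal, illustrative Python version; SHA-256 in counter mode is used as a stand-in for the provably secure pseudorandom generator (the actual proof builds the PRG from a one-way function, so the hash here is only a convenient placeholder, not the real construction):

```python
# Sketch of the stream-cipher construction described above: a key
# (assumed drawn uniformly at random) seeds a pseudorandom generator
# whose output is XORed with the message as a one-time pad.
# SHA-256 in counter mode stands in for the PRG; this is an
# illustration of the idea, not a vetted cipher.
import hashlib
import os

def keystream(key: bytes, nbytes: int) -> bytes:
    """Expand `key` into `nbytes` of pseudorandom pad, block by block."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def xor_cipher(key: bytes, message: bytes) -> bytes:
    """Encrypt/decrypt by XOR with the keystream (XOR is its own inverse)."""
    pad = keystream(key, len(message))
    return bytes(m ^ p for m, p in zip(message, pad))

key = os.urandom(16)  # the uniformly distributed key the proof assumes
ct = xor_cipher(key, b"attack at dawn")
assert xor_cipher(key, ct) == b"attack at dawn"
```

If the key were instead derived from a password, the uniformity premise fails, which is exactly the gap the paper's Guesswork analysis quantifies.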
signed0, almost 12 years ago
TLDR: "Bloch doubts that the failure of the uniformity assumption means that cryptographic systems in wide use today are fundamentally insecure. 'My guess is that it will show that some of them are slightly less secure than we had hoped, but usually in the process, we'll also figure out a way of patching them.'"
api, almost 12 years ago
I wonder about the practical effect of this in the Real World(tm).

When cryptographers talk about a "break," they mean anything faster than brute-force search. So a break of a 128-bit algorithm that allows key recovery in 2^115 is a "break," but still completely impractical.

But as Bruce Schneier says: "attacks only get better." So a break is a good reason to look for a fix or a better algorithm *now*, before the crack widens.
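Rough, back-of-the-envelope arithmetic makes the point concrete (the 10^12 guesses/second rate is an arbitrary stand-in for a well-funded attacker):

```python
# Why a 2^115 break is still impractical: even at a trillion guesses
# per second, the expected search time dwarfs any practical horizon.
# Figures are order-of-magnitude only.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (128, 115):
    guesses = 2 ** (bits - 1)  # expected: half the keyspace
    years = guesses / 1e12 / SECONDS_PER_YEAR
    print(f"2^{bits} search at 10^12 guesses/s: ~{years:.2e} years")

# 2^128: ~5.4e18 years; 2^115: ~6.6e14 years -- both far out of reach,
# which is the point: a "break" can still be completely impractical.
```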
Anderkent, almost 12 years ago
They quote three papers that supposedly make the mistake of using Shannon entropy to reason about secrets. They all seem to relate to the same algorithm, so it sounds to me (and our local crypto guy) more likely that there's one particular field making this mistake; it's not a common mistake in crypto.

In my understanding, Shannon entropy is not very useful for crypto reasoning for more trivial reasons. A password that has a 50% chance of being "foo" and a 50% chance of being completely random has good Shannon entropy, but it's clearly not a good password.
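That example is easy to make concrete. A short sketch (the 2^64 size of the "completely random" half is an arbitrary choice for illustration):

```python
# A password that is "foo" with probability 1/2 and otherwise uniform
# over 2^64 strings: Shannon entropy looks healthy, but a single guess
# succeeds half the time.
import math

n_random = 2 ** 64
p_foo = 0.5
p_each_random = 0.5 / n_random

# Shannon entropy: -sum p log2 p (the random mass in closed form)
shannon = (-p_foo * math.log2(p_foo)
           - n_random * p_each_random * math.log2(p_each_random))

# Min-entropy: -log2 of the single most likely outcome ("foo")
min_entropy = -math.log2(p_foo)

print(f"Shannon entropy: {shannon:.1f} bits")      # 33.0 bits
print(f"Min-entropy:     {min_entropy:.1f} bits")  # 1.0 bit
```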
JonnieCache, almost 12 years ago
Paper: http://arxiv.org/abs/1301.6356
tveita, almost 12 years ago
> We establish that for source words originally constructed from an i.i.d. sequence of letters, as a function of word length it is exponentially easier to guess a word conditioned to be in the source's typical set in comparison to the corresponding equipartition approximation.

Maybe I'm missing something, but this seems like a known and fairly trivial result.

I read that as basically saying that if you don't pick the letters in your passwords from a uniform distribution, they are easier to guess.

I don't see what that adds when people are already using Markov chains and dictionary attacks to brute-force passwords, and I see no implication at all for secure keys chosen by good PRNGs.

I guess it could have some use in entropy estimation for PRNG state, but assuming uniform distribution of input seems like a naive mistake to make. Is there some application of this that I'm missing?
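For what it's worth, the quoted claim can be checked numerically on a toy source. The sketch below uses a binary alphabet with P(1) = 0.25 and a typicality window of ε = 0.1 (both arbitrary choices) and compares the optimal guesser's expected work against the equipartition figure of roughly 2^(nH)/2 guesses. The numbers are lumpy at these small n, but the conditional guesswork stays consistently below the equipartition estimate:

```python
# Numeric check: for words built from i.i.d. biased letters, guessing
# a word known to lie in the typical set takes fewer expected guesses
# than the equipartition picture (~2^{nH} equiprobable words) predicts.
import math

P1 = 0.25
H = -(P1 * math.log2(P1) + (1 - P1) * math.log2(1 - P1))  # ~0.811 bits/letter
EPS = 0.1

for n in (8, 12, 16, 20):
    # a word's probability depends only on its number of ones
    probs = sorted(
        ((1 - P1) ** (n - bin(w).count("1")) * P1 ** bin(w).count("1")
         for w in range(2 ** n)),
        reverse=True,
    )
    # typical set: words whose per-letter surprisal is within EPS of H
    typical = [p for p in probs if abs(-math.log2(p) / n - H) <= EPS]
    mass = sum(typical)
    # the optimal guesser tries typical words in decreasing-probability order
    expected = sum(i * p for i, p in enumerate(typical, 1)) / mass
    print(f"n={n:2d}  E[guesses | typical] ~ {expected:8.1f}"
          f"   equipartition ~ {2 ** (n * H) / 2:8.1f}")
```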
Tloewald, almost 12 years ago
If your input is "perfectly" random then the codebreaker has no chance of determining when he/she has successfully decrypted it (regardless of how weak your encryption is). So the more random your unencrypted data, the harder it is to decrypt it once it is encrypted. It follows that less random data will be easier to decrypt. Now, I can't prove the shape of the intermediate function (e.g. might there be a sweet spot in the middle? "Common sense" suggests the relationship would be monotonic, but common sense is often wrong).

Incidentally, it's also clear that if the encryption schema is large relative to the compressed data then it will also be harder to decrypt (of course, if the encryption schema is public, e.g. a public key, then this is decidedly not true).

So, none of this should be terribly surprising. What it does suggest is that you should *compress* data before (or while) encrypting it: the better your compression algorithm, the more entropy per byte there will be in your data, and the more secure your encryption.
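The compress-first intuition is easy to check empirically. A small sketch (the digit-string corpus is an arbitrary stand-in for structured data; note that in real protocols, compress-then-encrypt has its own pitfalls, such as leaking information through ciphertext lengths):

```python
# Compressing before encrypting raises the per-byte entropy of what
# the cipher actually sees.
import math
import zlib
from collections import Counter

def entropy_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of the byte histogram, in bits/byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

structured = b"".join(str(i).encode() for i in range(20000))  # digits only
compressed = zlib.compress(structured, 9)

print(f"raw:        {entropy_per_byte(structured):.2f} bits/byte")  # ~3.3
print(f"compressed: {entropy_per_byte(compressed):.2f} bits/byte")  # near 8
```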
stcredzero, almost 12 years ago
What I've seen, watching security and cryptography from the sidelines for the past 18 years, is that we programmers are still in the myopia of the early days of technology.

It took decades for programmers' awareness of the importance of human factors and user interfaces to reach today's levels. (Really, how useful is computing if only a very few want to use it?) Yet we're really just beginning to truly get it as a field. (Or as a "not-quite-a-field," to give a nod to Alan Kay.)

It's probably going to take just as long for us to figure out how to account for human factors and economics when it comes to security. Yet, if you are really savvy about security, it should leap out at you that *human factors and economics* are just about the two biggest determining factors in security.

The big problem is not what algorithm you use to encrypt A or B. It's employees' and programmers' lack of knowledge about security and key management. It's the secretary who clicks on the phishing link. It's management's misconception that a dozen of BigCo's employees, with insufficient peer review, can take on the reverse-engineering resources of the entire Internet.

The big problems in security have to do with organizational awareness and economics. If you're not accounting for those, you are misallocating your resources, probably by an order of magnitude.
Tichy, almost 12 years ago
The things they mention sound to me just like the standard things cryptographers worry about on a daily basis. I am not sure there is anything new here.
BetaCygni, almost 12 years ago
Isn't this just a variant of the known-plaintext attack?
Shorel, almost 12 years ago
This is just theory getting up to speed with current practice.

Password breakers (when they have the hash) do not assume that the passwords are random strings; they use several assumptions that greatly reduce the time they need to break the passwords.
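In miniature (the wordlist here is a made-up stand-in for real leaked-password lists):

```python
# A guesser that tries likely passwords first finds human-chosen ones
# in a handful of attempts, while the "random string" model predicts
# an astronomical search.
COMMON = ["123456", "password", "qwerty", "letmein", "dragon", "monkey"]

def guesses_needed(target: str, wordlist: list[str]) -> int | None:
    """Attempts a dictionary-first attacker needs, or None if not listed."""
    for i, candidate in enumerate(wordlist, start=1):
        if candidate == target:
            return i
    return None  # fall back to brute force beyond the list

print(guesses_needed("letmein", COMMON))  # 4 attempts
# A uniform model over 7 lowercase chars would predict ~26^7 / 2 guesses:
print(f"uniform-model expectation: ~{26 ** 7 // 2:,} guesses")
```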
tehwalrus, almost 12 years ago
This is an interesting and important result. I was fascinated by Shannon entropy when I first heard of it as an undergrad, and I'm now very tempted to go read up on all the other entropies people came up with too :)
Andome, almost 12 years ago
Well, the min-entropy is what governs the guessing probability (the guessing probability is 2 to the negative min-entropy), not the Shannon entropy. As for Shannon entropy, it's impossible to hit it exactly with our current code generation, because there will always be a small difference from the theoretical limit. When the length of our codes gets bigger, the Shannon-entropy/uniform-distribution model breaks down, and the Guesswork time isn't as long as we once thought.
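For reference, the relevant definitions, plus the classical result (due to Arikan, as I understand it) tying guesswork growth to a Rényi entropy rather than the Shannon entropy:

```latex
% Min-entropy and one-shot guessing (X discrete, logs base 2):
H_{\min}(X) = -\log_2 \max_x p(x), \qquad
p_{\mathrm{guess}}(X) = \max_x p(x) = 2^{-H_{\min}(X)}

% Renyi entropy of order \alpha, and the growth rate of expected
% guesswork for an i.i.d. source (Arikan 1996): it is governed by
% H_{1/2}, which exceeds the Shannon entropy H for non-uniform sources.
H_{\alpha}(X) = \frac{1}{1-\alpha} \log_2 \sum_x p(x)^{\alpha}, \qquad
\lim_{n \to \infty} \frac{1}{n} \log_2 \mathbb{E}[G(X^n)] = H_{1/2}(X)
```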
MichaelMoser123, almost 12 years ago
You can send a key for a pseudorandom number generator at the start of the message, use the key to seed it, then use the output of the pseudorandom number generator and XOR it with each byte of the message. That would be a kind of 'salt' that would make the message less redundant (more random-looking).
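A sketch of that scheme (note how it differs from the stream-cipher sketch further up: here the seed travels in the clear, so this only whitens the message's statistics rather than encrypting anything; `SEED_LEN` and the use of Python's `random` module are arbitrary illustrative choices):

```python
# Whitening: prepend a fresh seed, XOR the message with a PRNG stream
# derived from that seed. This hides statistical structure (the "salt"
# effect described above) but is NOT encryption on its own -- anyone
# who reads the seed can undo it.
import os
import random

SEED_LEN = 16

def whiten(message: bytes) -> bytes:
    seed = os.urandom(SEED_LEN)
    rng = random.Random(seed)  # stand-in PRNG, reproducible from the seed
    pad = bytes(rng.randrange(256) for _ in message)
    return seed + bytes(m ^ p for m, p in zip(message, pad))

def unwhiten(blob: bytes) -> bytes:
    seed, body = blob[:SEED_LEN], blob[SEED_LEN:]
    rng = random.Random(seed)
    pad = bytes(rng.randrange(256) for _ in body)
    return bytes(b ^ p for b, p in zip(body, pad))

assert unwhiten(whiten(b"hello hello hello")) == b"hello hello hello"
```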
Retric, almost 12 years ago
When they say maximum entropy means the same number of zeros and ones, that's not actually true. The obvious counterexample is a single-bit file, or any file with an odd number of bits. Rather, at maximum entropy there is no bias in the file, so each bit is equally likely to be 1 or 0, which is what prevents effective compression.
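A quick way to see the distinction (sizes are arbitrary):

```python
# Equal counts of zeros and ones do not imply maximum entropy: an
# alternating pattern has a perfect 50/50 bit balance yet compresses
# almost to nothing; unbiased *independent* bits do not.
import os
import zlib

patterned = bytes([0b01010101]) * 4096   # exactly half the bits are 1
random_bytes = os.urandom(4096)          # each bit independently fair

print(len(zlib.compress(patterned)))     # tiny: tens of bytes
print(len(zlib.compress(random_bytes)))  # ~4096+: incompressible
```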
may, almost 12 years ago
I'd be curious about cperciva's take on this.
plg, almost 12 years ago
I bet non-compliance with best practices accounts for 10x as much "less secure" as anything else. (As with birth control.)