Posted 3 years ago: http://news.ycombinator.com/item?id=760917

It looks like kapitalx appended a query param (s=1) in order to avoid the duplicate URL check.
What happens if, on inequality, you have a 50% chance of doing one more comparison operation?

The way I figure it, which may be incorrect, is the following: you have x options (here x = 16), so x^2 gives us 256 different possibilities for this example. However, if the timing signal is only correct half of the time, then we have to repeatedly cut down our search, which gives the series

    sum over n = 1 to m of (x^2)/(2n)

which is

    (x^2 * H_m) / 2

where H_m is the m-th harmonic number. Is this correct? Could someone explain how many random extra comparisons would be needed to thwart a timing attack?
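One way to sanity-check the algebra (not the attack model itself) is to evaluate the proposed series numerically. A minimal Python sketch, taking the commenter's formula at face value and picking x = 16 and an arbitrary m = 20:

    def expected_work(x, m):
        # Direct evaluation of the proposed series: sum over n = 1..m of x^2 / (2n)
        return sum(x**2 / (2 * n) for n in range(1, m + 1))

    def closed_form(x, m):
        # Closed form (x^2 * H_m) / 2, with H_m the m-th harmonic number
        h_m = sum(1 / n for n in range(1, m + 1))
        return (x**2 * h_m) / 2

    x, m = 16, 20                 # 16 possible values per position; m rounds is arbitrary
    print(expected_work(x, m))    # both print the same number,
    print(closed_form(x, m))      # confirming the algebra only

Both calls print the same value, so the closed form does match the series; whether the underlying model of the attack is right is a separate question.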
You should never be doing a string compare on the exact string sent over the network anyway. You should be hashing it first. Hashing destroys any value in knowing how much of the substring you matched.
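As an illustration of that approach, here is a minimal Python sketch (the names and key handling are mine, not from the comment): both sides of the comparison are run through HMAC-SHA256 with a random in-process key, so any early exit in the equality check depends on unpredictable digests rather than on how many leading bytes of the secret were guessed correctly.

    import hashlib
    import hmac
    import os

    # Random key generated once per process; an attacker cannot predict the hashes.
    _COMPARE_KEY = os.urandom(32)

    def is_equal(received: bytes, expected: bytes) -> bool:
        # Compare two values via their keyed hashes instead of byte-by-byte.
        a = hmac.new(_COMPARE_KEY, received, hashlib.sha256).digest()
        b = hmac.new(_COMPARE_KEY, expected, hashlib.sha256).digest()
        # Any short-circuit in == now leaks nothing about `expected` itself.
        return a == b

In modern Python one would normally just call hmac.compare_digest, but the keyed-hash trick above is the idea the comment is describing.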
While I believe the author that it's measurable over the internet, I'd love to see an example of the code on the attacking side so I could test it for myself.

It just seems like it wouldn't be measurable, and I'd love to test it and prove myself wrong.
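Nothing like that is provided in the thread, but a bare-bones starting point for such a test might look like the following sketch (Python; the target URL, parameter layout, and sample count are all made up): it times many requests for two candidate values and compares the median latencies.

    import statistics
    import time
    import urllib.request

    TARGET = "http://example.com/check?token="   # hypothetical endpoint
    SAMPLES = 200                                 # arbitrary sample count

    def median_latency(candidate: str) -> float:
        # Median round-trip time, in seconds, for requests using `candidate`.
        timings = []
        for _ in range(SAMPLES):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(TARGET + candidate, timeout=5).read()
            except Exception:
                pass   # wrong guesses may return errors; only the timing matters here
            timings.append(time.perf_counter() - start)
        return statistics.median(timings)

    # If the side channel is measurable, a candidate whose first byte is correct
    # should show a (very slightly) higher median than one whose first byte is wrong.
    print(median_latency("a" + "0" * 15))
    print(median_latency("b" + "0" * 15))

Whether two median prints are enough to see anything is exactly the open question; a real test would need far more samples and proper statistics, but this is the shape of the experiment.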
20 µs == 20 microseconds == 20 × 10^-6 seconds. 100 ns == 100 nanoseconds == 100 × 10^-9 seconds. 2 GHz == one clock cycle every 0.5 × 10^-9 seconds. And that's how long that one comparison would take, not accounting for branch prediction and other interfering stuff inside the processor. It's not an attack, it's a joke.
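Putting those figures side by side (a back-of-envelope sketch using only the numbers above; the one-cycle-per-comparison assumption is the commenter's):

    MEASURABLE = 20e-6        # 20 µs, the smallest difference claimed measurable remotely
    LAN = 100e-9              # 100 ns, the other figure quoted
    CYCLE = 1 / 2e9           # 0.5 ns per clock cycle on a 2 GHz CPU

    # One extra comparison (~1 cycle, per the comment) relative to those thresholds:
    print(MEASURABLE / CYCLE)  # 40000.0 cycles of threshold per 1-cycle signal
    print(LAN / CYCLE)         # 200.0 cycles

So under these assumptions the per-comparison signal is four to five orders of magnitude smaller than the quoted thresholds, which is the point the comment is making.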