I haven't read either of the papers yet, but can anyone comment on what makes it difficult to output multiple bits? Naively, couldn't you gather weak random data from the two sources, get a single bit, then gather more weak data, and run the extractor again to get a second bit?
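For concreteness, here's roughly what I had in mind: a sketch of the classic one-bit inner-product two-source extractor, with the naive repeat-it-per-bit loop on top. This isn't anything from the papers, and the source-reading callables are made up for illustration.

    # Sketch of the classic inner-product two-source extractor: the bit
    # <x, y> over GF(2) is close to uniform when the two n-bit inputs come
    # from independent sources, each with min-entropy greater than n/2.

    def inner_product_bit(x: bytes, y: bytes) -> int:
        """One-bit two-source extractor: inner product of x and y over GF(2)."""
        assert len(x) == len(y)
        acc = 0
        for a, b in zip(x, y):
            acc ^= a & b          # XOR together the AND of each byte pair
        acc ^= acc >> 4           # fold the remaining byte down to its parity
        acc ^= acc >> 2
        acc ^= acc >> 1
        return acc & 1

    def naive_multi_bit(read_source1, read_source2, n_bytes: int, k: int) -> list[int]:
        """The 'naive' approach from my question: re-sample both weak
        sources (hypothetical callables) afresh for every output bit."""
        return [inner_product_bit(read_source1(n_bytes), read_source2(n_bytes))
                for _ in range(k)]

The obvious cost of the naive loop is that it burns fresh, independent samples from both sources for every single output bit, which is part of why I'm curious what the real obstacle to multi-bit output is.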
Can someone in the know post a conceptual description of roughly what's going on here?

The last time I read up on randomness, I was given to believe that it's not really an observable quantity - that is, a sequence of numbers is random only to the extent that nobody's found a pattern in it yet, and as such, the most rigorous way we have of testing strong RNGs is to run them through a battery of tests for the sorts of patterns that are known to show up in weak RNGs. But that sounds far removed from the situation the article describes, where this or that generator can be proven to be perfect or imperfect.

Is this the gap between theoretical analysis and real-world implementations, or am I misunderstanding something more fundamental?
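To make the "battery of tests" part concrete, here's the simplest such test I know of, a monobit frequency check roughly along the lines of the one in NIST SP 800-22 (real suites run dozens of much stronger tests than this):

    import math

    def monobit_test(stream: bytes, alpha: float = 0.01) -> bool:
        """Frequency test: do 0s and 1s appear in roughly equal proportion?"""
        n = len(stream) * 8
        ones = sum(bin(b).count("1") for b in stream)
        s = abs(2 * ones - n) / math.sqrt(n)    # normalized deviation from 50/50
        p_value = math.erfc(s / math.sqrt(2))   # two-sided tail probability
        return p_value >= alpha                 # fail only if badly unbalanced

Tests like this can only ever reject a generator for a known kind of pattern, which is exactly why I'm confused about how one proves a generator perfect.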
I thought that entropy solved this problem: your "weakly" random numbers from the thermometer might have, say, 1.3 bits of entropy for every reading. So you assemble 100 readings and that gives you 130 bits of randomness, which you extract by putting your 100 readings through a cryptographic hash algorithm that outputs 130 bits.

Presumably I'm missing something. Can someone tell me what it is?
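Concretely, I mean something like this (a sketch only, assuming the readings are independent, each really does carry about 1.3 bits of min-entropy, and the hash can be treated as ideal; extract_130_bits and the 4-byte reading format are made up for illustration):

    import hashlib

    def extract_130_bits(readings: list[int]) -> int:
        """Condense 100 raw sensor readings into a single 130-bit integer."""
        assert len(readings) == 100
        raw = b"".join(r.to_bytes(4, "big", signed=True) for r in readings)
        digest = hashlib.sha256(raw).digest()                 # 256-bit hash output
        return int.from_bytes(digest, "big") >> (256 - 130)   # keep the top 130 bits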