I’m a quantum dabbler so I’ll throw out an armchair reaction: this is a significant announcement.<p>My memory is that 256-bit keys in non-quantum-resistant algos need something like 2,500 qubits or so, and by that I mean generally useful, programmable qubits. Showing a bit over 100 qubits with stability (meaning the information survives long enough to be read) and enough generality to run some benchmarks is something many people thought might never come.<p>There’s a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just like to assume it won’t happen. Too much in computing and data security will change -> let’s not worry about it.<p>Combined with the slow pace of physical research progress (Shor’s algorithm for quantum factoring was mid-90s) and snake-oil sales companies, it’s easy to ignore.<p>Anyway, it seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long-tail strategizing, I’d say.
They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request, is there a way of handling that with try{} predestined{} blocks? Or do I need to use the Bootstrap Paradox library?
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse<p>I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.<p>Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?
In the past five years I participated in a project (with Yosi Rinott and Tomer Shoham) to carefully examine Google's 2019 "supremacy" claim. A short introduction to our work is here: <a href="https://gilkalai.wordpress.com/2024/12/09/the-case-against-googles-claims-of-quantum-supremacy-a-very-short-introduction/" rel="nofollow">https://gilkalai.wordpress.com/2024/12/09/the-case-against-g...</a>. In that experiment we found statistically unreasonable predictions (predictions that were "too good to be true"), indicating methodological flaws. We also found evidence of undocumented global optimization in the calibration process.<p>In view of these and other findings, my conclusion is that
Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.
I wonder if anyone else will be forced to wait on <a href="https://scottaaronson.blog/" rel="nofollow">https://scottaaronson.blog/</a> to tell us if this is significant.
The slightly mind-blowing bit is detailed here:
> <a href="https://research.google/blog/making-quantum-error-correction-work/" rel="nofollow">https://research.google/blog/making-quantum-error-correction...</a><p>“the first quantum processor where error-corrected qubits get exponentially better as they get bigger”<p>Achieving this turns the normal problem of scaling quantum computation upside down.
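To unpack "exponentially better": in the surface-code literature (including Google's QEC papers), the error suppression factor Λ is the ratio of logical error rates when the code distance d grows by 2, and below threshold the logical error rate falls roughly like Λ^(-(d+1)/2) as the code scales up. A toy illustration in Python; eps0 and Lam below are made-up placeholder values, not numbers from the paper:<p>
    # Below threshold (Lambda > 1), logical error shrinks exponentially
    # with code distance d: eps(d) ~ eps0 * Lambda ** (-(d + 1) / 2).
    eps0, Lam = 3e-3, 2.0  # placeholder error scale and suppression factor
    for d in (3, 5, 7):    # distances of the 3x3, 5x5, 7x7 grids
        print(d, eps0 * Lam ** (-(d + 1) / 2))
    # Each step d -> d + 2 halves the logical error rate when Lambda = 2.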
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.<p>Processing in the multiverse. Would that mean we are injecting entropy into those other verses? Could we calculate how many there are from the time it takes to do a given calculation? We need to cool the quantum chip in our universe; how are the (n-1) verses cooling on their end?
I really wish the release videos were a tad less technical. I know quantum computers are still very early, so the target audience for this kind of release is technical, but I can’t help wondering how many more people would be excited and pulled in if they made the main release video more approachable.
>the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes<p>That's an EXTRAORDINARY claim, and one that contradicts the experience of pretty much all prior research and development in quantum error correction over the history of quantum computing.
You learn a lot from what isn't mentioned. Willow had 101 qubits in the quantum error correction experiment, yet a mere 67 qubits in the random circuit sampling experiment. Why did they not test random circuit sampling with the full set of qubits? Maybe when turning on the full set of 101 qubits, fidelity dropped.<p>Remember that macroscopic objects have on the order of 10^23 ≈ 2^76 particles, so until 76 qubits are reached and exceeded, I remain skeptical that the quantum system actually exploits an exponential Hilbert space, instead of the state being classically encoded by the particles somehow. I bet Google is struggling at exactly this threshold and they don't announce it.
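(The 2^76 figure is just a change of base, easy to sanity-check in Python:<p>
    import math
    print(math.log2(1e23))  # ~76.4, so 10^23 is roughly 2^76
<p>)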
I met Julian touring UCSB as prospective grad students. We sat together at dinner and he was really smart, kind, and outgoing. Great to see him presenting this work!
The main part for me is that errors drop faster as they scale, getting "below threshold". That was a major roadblock, and crossing it is a major achievement.<p>I am not sure about RCS as the benchmark, as I'm not sure how useful it is in practice. It just produced really nice numbers. If I had a few billion in pocket change lying around, would I buy this to run RCS really fast? Nah, probably not. I'll get more excited when they factor numbers at a rate that would break public-key crypto. For that I would spend my pocket change!
Take the announcement with a grain of salt. From German physicist Sabine Hossenfelder:<p>> The particular calculation in question is to produce a random distribution. The result of this calculation has no practical use.
>
> They use this particular problem because it has been formally proven (with some technical caveats) that the calculation is difficult to do on a conventional computer (because it uses a lot of entanglement). That also allows them to say things like "this would have taken a septillion years on a conventional computer" etc.
>
> It's exactly the same calculation that they did in 2019 on a ca. 50-qubit chip. In case you didn't follow that, Google's 2019 quantum supremacy claim was questioned by IBM pretty much as soon as the claim was made, and a few years later a group said they did it on a conventional computer in a similar time.<p><a href="https://x.com/skdh/status/1866352680899104960" rel="nofollow">https://x.com/skdh/status/1866352680899104960</a>
Link to the actual article: <a href="https://www.nature.com/articles/s41586-024-08449-y" rel="nofollow">https://www.nature.com/articles/s41586-024-08449-y</a>
What benchmark is being referred to here?<p>>>Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe
With this plus the weather model announcement, I’m curious what people think about the meta-question of why corporate labs like Google DeepMind seem to make more progress on big problems than academia.<p>There are a lot of critiques of academia, in particular that it’s so grant-obsessed you have to stay focused on your next grant all the time. This environment doesn’t seem to reward solving big problems, but rather paper production to prove the last grant did something. Yet ostensibly we fund fundamental public research precisely for fundamental changes. The reality seems to be that the traditional funding model creates incremental progress within existing paradigms.
Am I oversimplifying in thinking that they’ve demonstrated that their quantum computer is better at simulating a quantum system than a classical computer is?<p>In which case, should I be impressed? I mean sure, it sounds like you’ve implemented a quantum VM.
Do Americans still want to break up the big US tech companies like Google? With proper regulation, it feels like their positive externalities, like this, are good for humanity.
This is yet another attempt to posit NISQ results (Noisy Intermediate Scale Quantum) as demonstrations of quantum supremacy. This does not allow us to do useful computational work; it's just making the claim that a bathtub full of water can do fluid dynamic simulations faster than a computer with a bathtub-full-of-water-number-of-cores can do the same computation.<p>If history is any guide we'll soon see that there are problems with the fidelity (the system they use to verify that the results are "correct") or problems with the difficulty of the underlying problem, as happened with Google's previous attempt to demonstrate quantum supremacy [1].<p>[1] <a href="https://gilkalai.wordpress.com/2024/12/09/the-case-against-googles-claims-of-quantum-supremacy-a-very-short-introduction/" rel="nofollow">https://gilkalai.wordpress.com/2024/12/09/the-case-against-g...</a> -- note that although coincidentally published the same day as this announcement, this is talking about Google's previous results, not Willow.
Some of these results have been on the arxiv for a few months (<a href="https://arxiv.org/abs/2408.13687" rel="nofollow">https://arxiv.org/abs/2408.13687</a>) -- are there any details on new stuff besides this blog post? I can't find anything on the random circuit sampling in the preprint (or its early access published version).
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.<p>Can someone explain to me how he made the jump from "we achieved a meaningful threshold in quantum computing performance" to "the multiverse is probably real"?
> Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion years — a number that vastly exceeds the age of the Universe.<p>What computation would that be?<p>Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?
Newbie's question: how far is the RCS benchmark from a more practical challenge such as breaking RSA?<p>The article concludes by saying that the former does not have practical applications. Why are they not using benchmarks that have some?
>Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.<p>A much simpler explanation is that your benchmark is severely flawed.
Genuinely curious: does this make US regulators second-guess breaking up Google? Having a USA company be the first to develop quantum computing would be a major national security advantage.
Trying to understand how computation happens in quantum computers. Is there a basic explanation of how superposition leads to computing?<p>From ChatGPT: "with n qubits a QC can be in a superposition of 2^n different states. This means that QCs can potentially perform computations on an exponential number of inputs at once"<p>I don't get how the first sentence in that quote leads to the second one. Any pointers to read to understand this?
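One way to see both the power and the catch here: the state of n qubits is a vector of 2^n complex amplitudes, and a gate transforms all of them in one step, but a measurement returns just one n-bit outcome sampled from those amplitudes, so you can't simply read out 2^n answers. A minimal state-vector sketch in plain Python/NumPy (illustrative only, not any particular library's API):<p>
    import numpy as np

    # State of n qubits: a complex vector of 2^n amplitudes.
    n = 3
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0  # start in |000>

    # A Hadamard on every qubit creates an equal superposition
    # of all 2^n basis states in a single matrix application.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    op = H
    for _ in range(n - 1):
        op = np.kron(op, H)
    state = op @ state  # all 8 amplitudes are now 1/sqrt(8)

    # But measuring samples ONE outcome with probability |amplitude|^2.
    # Quantum algorithms work by interfering amplitudes so that useful
    # answers become likely before you measure.
    probs = np.abs(state) ** 2
    outcome = int(np.random.choice(2**n, p=probs))
    print(f"measured |{outcome:0{n}b}>")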
Quantum mechanics is a computational shortcut that makes our simulation cost-effective. Mass adoption of chips like these is going to make the particular situation we live in unprofitable for hosts, resulting in the fiery and dramatic end of the world for us. Simulating ancestors is fun, but not after your cloud bill skyrockets. Thank you, Google, for bringing about the apocalypse.
So one of the interesting comparisons between quantum and classical computing in the video: 5 minutes vs 10^25 years. Are there tradeoffs, or specific cases where quantum computing works, or is this generic across "all" computing use cases? If the latter, this will change everything and change the world.
Interesting; it might be time for me to load up a quantum simulator and start learning how to program these things.<p>I've pushed that off for a long time since I wasn't completely convinced that quantum computers actually worked, but I think I was wrong.
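For anyone in the same boat: a Bell-pair circuit in Cirq (Google's open-source quantum framework, which ships a simulator) is a gentle first exercise. A minimal sketch, assuming pip install cirq:<p>
    import cirq

    # Two qubits on a line.
    q0, q1 = cirq.LineQubit.range(2)

    # Bell pair: Hadamard on q0, then CNOT to entangle q0 with q1.
    circuit = cirq.Circuit(
        cirq.H(q0),
        cirq.CNOT(q0, q1),
        cirq.measure(q0, q1, key="m"),
    )

    # Simulate 100 shots; only 00 and 11 should ever appear.
    result = cirq.Simulator().run(circuit, repetitions=100)
    print(result.histogram(key="m"))  # Counter with keys 0 (00) and 3 (11)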
Any chance people will actually start reversing SHA-1 hashes in the next few years? Is there any quantum algorithm for reversing one-way functions like that? (I mention SHA-1 because collisions can already be found.)
Relatives were asking for a basic explainer. Here's a good one by Hannah Fry: <a href="https://youtu.be/1_gJp2uAjO0" rel="nofollow">https://youtu.be/1_gJp2uAjO0</a>
Notice how the most interesting part, the image with their quantum computing roadmap, has too low a resolution to read the relevant text at the bottom. Come on Google.
This is a great technical achievement. It gives me some hope to see that the various companies are able to invest in what is still very basic science, even if it's mostly as vanity projects for advertising purposes.<p>Quantum computing will surely have amazing applications that we cannot even conceive of right now. The earliest and maybe most useful applications might be in materials science and medicine.<p>I'm somewhat disappointed that most discussions here focus on cryptography or even cryptocurrencies. People will just switch to post-quantum algorithms and most likely still have decades left to do so. Almost all data we have isn't important enough that intercept-now-decrypt-later really matters, and if you think you have such data, switch now...<p>Breaking cryptography is the most boring and useless application (among actual applications) of quantum computing. It's purely adversarial, merely an inconsequential step in a pointless arms race that we'd love to stop, if only we could learn to trust each other. To focus on this really betrays a lack of imagination.
This is weird. I got this pop up halfway through reading:<p>> After reading this article, how has your perception of Google changed?
> Gotten better
> Gotten worse
> Stayed the same
There is so much skepticism about quantum computing that, instead of inflated marketing words, one should always start with what the biggest problems are and why they are still unsolved, and then introduce the new improvement.<p>Otherwise there is no way of knowing whether the accomplishment is really significant.
>It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.<p>Makes sense, or doesn't it? What's your take on the multiverse theory?
'It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.'<p>Wait... what? Google said this and not some fringe crackpot?
Imagine your civilization develops quantum computing technology and it's for... advertising.<p>"What is their mission? Cure cancer? Eliminate poverty? Explore the universe? No, their goal: to sell another fucking Nissan." --Scott Galloway
I don’t say this often, unlike most here, but this is actually a huge achievement in quantum computing.<p>Yet another example of why Google is essentially not going anywhere or ‘dying’, as most have been proclaiming these days.
I don't want to judge people by their cover, but I want to confess to having those feelings right now.<p>In this day and age, I feel an immediate sense of distrust toward any technologist with the "Burning Man" aesthetic, for lack of a better word (which you can see in the author's Wikipedia profile photo from an adjacent festival -> <a href="https://en.wikipedia.org/wiki/Hartmut_Neven" rel="nofollow">https://en.wikipedia.org/wiki/Hartmut_Neven</a>, as well as in this blog itself with his wristbands and sunglasses -> <a href="https://youtu.be/l_KrC1mzd0g?si=HQdB3NSsLBPTSv-B&t=39" rel="nofollow">https://youtu.be/l_KrC1mzd0g?si=HQdB3NSsLBPTSv-B&t=39</a>).<p>In the 2000s, any embrace of alternative culture was a breath of fresh air from technologists: it showed they cared about the human element of society as much as the mathematics.<p>But nowadays, especially in a post-truthiness, post-COVID world, it comes off differently to me. Our world is now filled with quasi-scientific cults, from flat earthers to anti-vaxxers, to people focused on "healing crystals", to the resurgence of astrology.<p>I wouldn't be saying this about anyone in a more, shall we say, "classical" domain. As a technologist, your claims are pretty easily verifiable and testable, even in fuzzy areas like large language models.<p>But in the quantum world? I immediately start to approach the author of this with distrust:<p>* He's writing about multiverses<p>* He's claiming quantum performance for something that would take a classical computer septillions of years<p>I'm a layman in this domain. If these claims are true, shouldn't they be front-page news on CNN and the BBC? Or is this just how technology breakthroughs start (after all, the Transformer paper wasn't)?<p>But no matter what, I just can't help but feel that the author's choices harm the credibility of the work. Before you downvote me, consider replying instead. I'm not defending feeling this way; I'm just explaining what I feel and why.