Despite all the hype around AI, at least there are products you can use right now and get value out of.<p>Quantum computing has been at the same game for at least a decade longer, and nothing useful has ever emerged from the monthly announcements of breakthroughs. I’m not saying they won’t get there eventually, but it’s all “greater fool” at this point.
I was under the impression that Microsoft didn’t say anything incorrect, because upon closer inspection, they didn’t actually say anything of substance. The wording carefully danced around the topic to make it seem as though more had been accomplished in reality than had merely been shown mathematically.
Microsoft's "quantum computer", just like any other "quantum computer" currently in existence (at least publicly known), can accomplish exactly one task: generating random numbers.<p>Any other claim "lacks evidence" because it's pure BS, fancy-sounding enough to generate continued funding (source: 2nd- and 3rd-hand across industry and academia).<p>I find the downvotes on this comment (and many similar ones I made in the past) amusing. Quantum folks (industry and academia), please don't take this personally. I genuinely want quantum computing to become real at some point (just like I want fusion to become real), but today is not that day, and you know it :)
I work in systems engineering at a quantum computing company... when I read announcements and new papers on quantum computing, qubit technologies, or implementations of quantum computers, I loosely apply these thresholds.<p>Threshold one: the authors/company report data on qubit coherence times, single-qubit and two-qubit gate error, state prep and measurement error, etc. The more rigorous the data, the better. If they report data from different qubits on the same device, or across multiple devices, even better. But without this sort of data, any assertions or forward-looking statements about the utility of a device or quantum computing approach are "pure marketing". To me... the Microsoft paper and announcement do not meet threshold one.<p>Threshold two is performing some sort of useful benchmarking calculation that requires repeated use of multiple qubits in concert. A quantum volume calculation is one such benchmark. It is much easier to get great qubit results from a small test device (a "hero" device) than from a larger system. It is tough to make a blanket statement about all QC technologies, but system noise levels and calculation error failure modes scale with the number of qubits... so achieving high-fidelity two-qubit gates repeatedly in a deep circuit using 20-50 qubits is much, much more difficult and impressive than demonstrations with 1-4 qubits. To me, the number of qubits is almost irrelevant if those qubits are not useful together. Example: if a company reports 100+ qubits on a device but can't pass a quantum volume 12 or 16 calculation, then I will reserve judgement about the utility of that QC approach. There is engineering development value in scaling the number of qubits (like figuring out how to orchestrate massively parallel qubit control at scale) while also working on improving qubit performance metrics... but these two development streams have to converge: lots of qubits AND high fidelity.
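For reference, the quantum volume benchmark mentioned above uses a simple statistical pass criterion: random square model circuits (width = depth) are run on the device, and the device passes at that size if the mean measured heavy-output probability exceeds 2/3 with roughly two-sigma confidence. A minimal sketch of just the pass/fail check (the function name and the example data are hypothetical, not taken from any vendor announcement):

```python
import numpy as np

def passes_quantum_volume(heavy_output_probs, threshold=2/3):
    """Standard quantum-volume pass criterion: the mean heavy-output
    probability over the random model circuits must exceed 2/3 with
    ~2-sigma confidence.

    heavy_output_probs: one measured heavy-output probability per
    random width-m, depth-m circuit.
    """
    p = np.asarray(heavy_output_probs, dtype=float)
    n = p.size
    mean = p.mean()
    # Binomial-style 2-sigma lower bound on the estimated mean
    sigma = np.sqrt(mean * (1.0 - mean) / n)
    return (mean - 2.0 * sigma) > threshold

# Hypothetical results from 100 random circuits at some width/depth m:
rng = np.random.default_rng(0)
good_device = rng.normal(0.80, 0.02, size=100)   # heavy outputs well above 2/3
noisy_device = rng.normal(0.55, 0.05, size=100)  # heavy outputs near random guessing

print(passes_quantum_volume(good_device))   # → True
print(passes_quantum_volume(noisy_device))  # → False
```

This is why high heavy-output probability gets harder as circuits deepen: any gate error pushes the output distribution toward uniform (heavy-output probability 0.5), so a large device with mediocre two-qubit gates fails the test at sizes a small hero device would pass.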
Demonstrating high fidelity at low qubit counts and high qubit counts separately doesn't mean that high fidelity at high qubit counts will be achievable.
Wait. Did anyone actually swallow those videos as something real? You can't be serious.<p>I honestly thought it was a cheeky joke or a parody of older Google quantum computer announcements.
As a rule of thumb in QC, largely ignore (or heavily discount) announcements made by academics (at universities or in industry) with CS, complexity theory, theoretical physics, or maths backgrounds. The more (in)famous they are (Aaronson), the less likely they've made any real progress.<p>Only experimental solid-state physicists and a few EE types know how to build actual circuits and understand why they don't work.