... and couldn't get the correct answers for LiH. Interesting that the article didn't mention this.<p>From the paper:<p>> For lithium hydride, LiH, we were not able to reproduce closely the ground state energy with the currently available hardware. When accounting for 3 orbitals and using a scaling factor of r = 4, we already had to use 1558 qubits, which is a large fraction of available qubits. To summarize: the investigated method in general works, but it might be difficult to apply it to larger systems.
I appreciated the rudimentary presentation of the capabilities and limitations of the machine, and the article calling out the limited connectivity of the qubits versus a universal quantum machine, which would have full connectivity between all the qubits.<p>I'd be curious: is there a simple formula for calculating the "effective universal qubits" of the D-Wave?<p>2,048 indeed sounds like a lot of qubits based on my extremely limited knowledge of quantum computing, but with only ~6k couplers versus the n(n-1)/2 ≈ 2 million a fully connected graph would need, is it just a marketing gimmick?<p>Why is it useful to push the qubit count so high if the connectivity is so limited?
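A quick sanity check of those numbers (the unit-cell counts are my assumption, based on the published 16x16 Chimera layout of K_{4,4} cells in the 2000Q):

```python
# Compare the couplers a 2048-qubit chip would need for full connectivity
# against what the Chimera topology actually provides.

n = 2048                            # physical qubits
full = n * (n - 1) // 2             # couplers for an all-to-all graph

grid = 16                           # assumed 16x16 grid of unit cells
cells = grid * grid
internal = cells * 16               # each K_{4,4} cell: 4*4 = 16 internal couplers
inter = 2 * grid * (grid - 1) * 4   # 4 couplers per adjacent cell pair,
                                    # horizontally and vertically

print(full)                         # 2,096,128 for a complete graph
print(internal + inter)             # 6,016 in the Chimera graph
```

So the ~6k figure checks out: the hardware graph has roughly 0.3% of the couplers a fully connected machine would need, which is why problems have to be minor-embedded with chains of physical qubits standing in for one logical variable.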
>For beginners, he says, an actual D-Wave device isn’t even necessary.<p>I find this somewhat surprising.<p>If you think of AI code designed for GPUs, there I can see "yeah, you can practice on a CPU". It'll be slow, but it'll work.<p>For quantum tech, the entire sales pitch is that it's fundamentally different: doing what's near impossible on conventional hardware.<p>Yes, I realise he's talking about the library (so simulated annealing on a CPU, I guess), but it still seems like a very strange comment in this context.
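To be fair, the CPU route is genuinely usable for learning the problem format: you write the same Ising/QUBO formulation either way, and a classical sampler can solve small instances. A toy sketch of what that looks like (this is plain simulated annealing with made-up couplings, not D-Wave's actual Ocean API):

```python
import math
import random

def energy(s, h, J):
    """Ising energy: sum of field terms plus coupling terms."""
    e = sum(h[i] * s[i] for i in h)
    e += sum(J[i, j] * s[i] * s[j] for (i, j) in J)
    return e

def anneal(h, J, steps=10000, t0=5.0, t1=0.01, seed=0):
    """Classical simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    s = {i: rng.choice([-1, 1]) for i in h}
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)   # temperature decays t0 -> t1
        i = rng.choice(list(s))
        old = energy(s, h, J)
        s[i] = -s[i]                        # propose a single spin flip
        delta = energy(s, h, J) - old
        if delta > 0 and rng.random() >= math.exp(-delta / t):
            s[i] = -s[i]                    # reject uphill move: revert

    return s

# Toy frustrated triangle: three antiferromagnetic bonds, so no assignment
# satisfies all of them; any ground state has energy -1.0.
h = {0: 0.0, 1: 0.0, 2: 0.0}
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
s = anneal(h, J)
print(s, energy(s, h, J))
```

The same `h`/`J` dictionaries are essentially what you'd hand to the real hardware, which is presumably the point of the "you don't need a D-Wave to start" remark.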
Wouldn't these potential customers of D-Wave be better off just buying an HPC cluster with lots of CPUs, GPUs, and FPGAs? Probably more opportunities for hardware reuse, too.