That does not look like a 10 micron scan...

Edit:

Here is the source for the claim: https://www.reddit.com/r/OpenScan/comments/gfottc/10_micron_accuracy_with_the_new_pi_camera/

It seems this might be the theoretical error you can get when the system identifies a feature perfectly and gets a usable reflection from that point at enough angles.

But that is not the case for most points on any real object (i.e. one that isn't a chalk-coated gauge block), so you're definitely not going to get a *model* whose maximum error is 10 microns.

The results are certainly impressive, but hackster.io is taking liberties with that headline. It's not a realistic accuracy, and the author of the project doesn't really seem to be making that claim.
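For a rough sense of scale, here is a back-of-envelope ground-sample-distance check. The sensor and lens numbers are my assumptions (Pi HQ camera pixel pitch, a 16 mm lens, an object about 15 cm away), not figures from the project:

```python
# Back-of-envelope ground sample distance (GSD): how much of the object
# one sensor pixel covers. All values below are assumptions for
# illustration, not numbers taken from the OpenScan project.
pixel_size = 1.55e-6   # Pi HQ camera (IMX477) pixel pitch, metres
focal_length = 0.016   # 16 mm lens (assumed)
distance = 0.15        # object roughly 15 cm from the camera (assumed)

gsd = pixel_size * distance / focal_length
print(f"one pixel spans ~{gsd * 1e6:.1f} microns on the object")
# ~14.5 microns per pixel; with sub-pixel feature localisation, a
# per-feature precision near 10 microns is plausible for an ideal,
# well-textured target, which is exactly the caveat above.
```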
So all the project documentation I can find seems to be about soldering together a couple of stepper drivers (rather than just using a scrap 3D printer controller, which everyone has sitting around) and a ring light. Where's the photogrammetry workflow part?
What I don't get is why this isn't easy to do with a smartphone: a little motorized turntable with a stand for your phone, plus a calibration object covered in tracking markers for correcting the camera input, something like the sketch below. Why is that so impossible?
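Off-the-shelf tools already support most of this: a set of fiducial markers (e.g. ArUco) fixed to the turntable lets each photo's camera pose be recovered from the image itself rather than trusted from the motor steps. A minimal sketch, assuming opencv-contrib-python >= 4.7 for the ArucoDetector API; the marker size, intrinsics, and filename are placeholders:

```python
import cv2
import numpy as np

MARKER_SIZE = 0.02  # marker edge length in metres (assumed)

# Intrinsics would normally come from a one-time cv2.calibrateCamera
# run against a known board; these numbers are placeholders.
K = np.array([[3000.0, 0.0, 2000.0],
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for the sketch

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

img = cv2.imread("turntable_frame.jpg")
corners, ids, _ = detector.detectMarkers(img)

# 3D positions of a marker's corners in its own coordinate frame.
obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
               dtype=np.float32) * (MARKER_SIZE / 2)

if ids is not None:
    for marker_id, c in zip(ids.ravel(), corners):
        # solvePnP gives the camera pose relative to each marker, so
        # every photo's turntable angle comes from the image itself.
        ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2), K, dist)
        if ok:
            print(f"marker {marker_id}: rvec={rvec.ravel()}, "
                  f"tvec={tvec.ravel()}")
```

The hard part in practice is less the pose recovery than feature matching on shiny or textureless surfaces, which is presumably why the gauge block mentioned above had to be chalk-coated.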
> The sample scan provided by Megel to demonstrate the scanner's capabilities took, he claims, less than an hour of wall time and just four clicks of user interaction — though the processing requires the in-beta cloud platform or a more powerful host PC, with the Raspberry Pi unable to provide enough compute itself.

Is there an actual reason for this? Given that the Raspberry Pi is a computer too, shouldn't it be able to compute the results as well, just more slowly than a proper PC?