Hi, I'm pretty accomplished at 3D printing -- not an expert in 3D scanning, but I have worked with it, and with one of the inventors of one of the first point-stitching algorithms.

The problems with 3D scanning are many. One, it's just not very accurate. Two, it can't capture complex geometries -- anything occluded is lost. Three, size: you may well need a different system for different scales of items to be scanned.

In almost every case the answer is probably photogrammetry. There's an open source project called "OpenScan" that lets you build your own turntable/camera rig that takes regular photos so you can do a solve.

Before doing that, however, I highly suggest you get the best camera you can borrow/beg/steal and go to town with Meshroom (there's a rough sketch of a headless run at the end of this comment). It will cost you a few hours of time and compute but is otherwise free. There are dozens of free and paid photogrammetry apps whose results vary; I like Meshroom personally.

The reason this technology is underdeveloped is that 3D modeling is cheap, efficient, and relatively easy to learn. I was baffled by 3D modeling for most of my life, but once I committed to learning it, it wasn't a big deal. The issue is that there are at least half a dozen different types of modeling and tooling, and visual effects are often entwined with all of it. If I may suggest: if you head down this route, use Fusion 360 or SolidWorks for mechanical stuff (products, machines, etc.), and Blender or ZBrush for flowing, organic/character things.

Unless your items are artistic in nature and need to be preserved exactly, you're far better off simply modeling them. As weird as it sounds to say, your models will be more accurate than the 3D scans. A pair of calipers and a micrometer are way more accurate than *current* sensors.

Even the most accurate 3D scanning I'm aware of (at a prosumer level) will need its mesh edited to make sense, so you're already back to having to learn some form of 3D modeling anyway.

I do believe in 10 years everything will be different -- the big new iPhones have fairly interesting sensors in them, but they're only really good at room-level scales, not small-scale things. I imagine as the tech progresses we'll see "sensor fusion" develop for the LiDAR stuff just like we have for the cameras, i.e. it works at multiple focal lengths with the input from those sensors combined.

Also, if I'm out of touch here I'd *love* to be wrong -- so don't hesitate to blow me up.
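
Promised sketch: roughly what a headless Meshroom run looks like once you have a folder of overlapping photos. This is a minimal sketch, assuming the meshroom_batch CLI that ships with recent Meshroom releases and its --input/--output flags (older builds called it meshroom_photogrammetry) -- check meshroom_batch --help for your install; the paths below are made up.

    # Minimal sketch: drive Meshroom headlessly over a folder of photos.
    # Assumes the "meshroom_batch" CLI from a recent Meshroom release is on PATH
    # and accepts --input/--output; verify the flags for your version.
    import subprocess
    from pathlib import Path

    photos = Path("~/scans/widget_photos").expanduser()  # hypothetical path, 50-150 overlapping shots
    out = Path("~/scans/widget_mesh").expanduser()
    out.mkdir(parents=True, exist_ok=True)

    # Runs the default pipeline end to end: feature extraction, matching,
    # structure-from-motion, depth maps, meshing, texturing. It's GPU-heavy,
    # so expect it to churn for a while.
    subprocess.run(
        ["meshroom_batch", "--input", str(photos), "--output", str(out)],
        check=True,  # raise if the solve fails partway
    )
    # The textured mesh lands in the output folder -- plan on cleaning it up afterwards.

That's the whole "few hours of time/compute" workflow: shoot a lot of overlapping photos, kick off the solve, go do something else, then clean up the mesh.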