The title of the webpage is LIDAR 3D Scanner.
I don't have an iPhone to try the app, so this is based on the videos and the information I could gather online.<p>It seems they have multiple operating modes. One mode uses RGB images only and standard photogrammetry, like Meshroom. In the other mode they use the information from the LIDAR sensor to help with depth estimation and camera registration, with some sort of RGB-D SLAM algorithm.<p>Looking into point-cloud fusion algorithms and the PCL library should get you started.<p>There is probably some deep learning involved, but it's not yet end-to-end deep learning; in particular, this is not Nerfies.<p>If you look at the recent publications of their senior computer vision engineer, <a href="https://scholar.google.com/citations?view_op=list_works&hl=en&hl=en&user=ywtRolwAAAAJ" rel="nofollow">https://scholar.google.com/citations?view_op=list_works&hl=e...</a>, you will find "A Learned Stereo Depth System for Robotic Manipulation in Homes" <a href="https://arxiv.org/pdf/2109.11644.pdf" rel="nofollow">https://arxiv.org/pdf/2109.11644.pdf</a>, which should give you an approximate idea of how it works.
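<p>To make the RGB-D fusion idea concrete, here is a minimal sketch (not the app's actual code, and not using PCL) of the core step: back-projecting a depth image into a 3D point cloud using camera intrinsics, then transforming clouds from two viewpoints into a common world frame and merging them. All numbers (intrinsics, depths, poses) are made-up toy values; in a real RGB-D SLAM pipeline the poses come from tracking, not by hand.

```python
# Toy sketch of RGB-D point-cloud fusion: back-project depth maps into 3D,
# apply each frame's camera pose, and concatenate into one world-frame cloud.
# Intrinsics (fx, fy, cx, cy), depths, and poses are illustrative values only.

def backproject(depth, fx, fy, cx, cy):
    """Turn a depth image (list of rows) into 3D points in camera space."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid depth readings
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

def transform(points, pose):
    """Apply a 4x4 camera-to-world pose matrix to each point."""
    out = []
    for x, y, z in points:
        p = (x, y, z, 1.0)
        out.append(tuple(sum(pose[r][c] * p[c] for c in range(4))
                         for r in range(3)))
    return out

# Two tiny 2x2 depth frames from slightly different viewpoints.
frame_a = [[1.0, 1.0], [1.0, 1.0]]
frame_b = [[1.1, 1.1], [1.1, 1.1]]

# Identity pose for frame A; frame B's camera shifted 0.1 units along x.
pose_a = [[1, 0, 0, 0.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
pose_b = [[1, 0, 0, 0.1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

fx = fy = 1.0
cx = cy = 0.5

# Fuse: express both clouds in the world frame and concatenate.
cloud = (transform(backproject(frame_a, fx, fy, cx, cy), pose_a)
         + transform(backproject(frame_b, fx, fy, cx, cy), pose_b))

print(len(cloud))  # 8 points: 4 from each frame, now in one world frame
```

A real system would additionally align the clouds (e.g. ICP, which PCL's registration module provides) and deduplicate overlapping points, for instance with a voxel grid or TSDF volume.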