So at what point do any of these tools build even a naive model of 3D and projective geometry? I don't think that when people look at pictures all they are doing is extracting 2D features; in my head I am imagining some kind of 3D space and placing things in it according to how the picture shows them. One obvious reason pictures with weird angles and perspectives are hard to understand is that I can't properly orient myself in the made-up picture world.

Proper AI, or deep learning, or whatever the fad is these days, should account for this kind of model building that brains are good at. Looking at 2D pictures and only extracting 2D features doesn't feel like model building so much as really good data mining, but "deep data mining" doesn't sound as sexy.
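To make the 3D-vs-2D point concrete, here is a minimal pinhole-projection sketch (my own illustration, not anything from the article): two different 3D points on the same ray through the camera center land on the same pixel, so a single 2D image has already thrown away the depth that a mental 3D model would have to reconstruct.

```python
import numpy as np

def project(point_3d, focal_length=1.0):
    """Project a 3D point (x, y, z) in camera coordinates onto the image plane."""
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])

near = np.array([1.0, 0.5, 2.0])   # a point 2 units in front of the camera
far = near * 3.0                   # a different point, 3x farther along the same ray

print(project(near))  # [0.5  0.25]
print(project(far))   # [0.5  0.25] -- identical pixel, the depth information is gone
```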
There now seems to be a cottage industry producing fuzzy, non-technical articles about machine learning, and particularly deep learning. I guess we are in the rapid-inflation phase of the hype cycle.