The key here is really complementary use of ‘what humans are good at’ and ‘what machines are good at’.<p>In this case, it’s fair to say the machine, by analyzing pixels, can’t figure out perspective very well. The human can do that just fine, given an interface mechanism.<p>The machine is good at detecting edges and seeing similarity between pixels. Given hints from the human that ‘this point is within an object’ and here is the perspective, the machine can infer the limits of the object based on edges/colors and project it into 3 dimensions. Amazing.
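The edge/similarity step the comment describes can be sketched with a plain Sobel gradient filter, a minimal stand-in (not the authors' actual method, and `sobel_edges` is a hypothetical helper) for how a tool might score where object boundaries are once the human has pointed inside the object:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via Sobel filters.
    Toy illustration only; 3-Sweep's snapping is more sophisticated."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")  # keep output same size
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A synthetic "object": a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = sobel_edges(img)
# Response is strong on the square's boundary and zero in flat regions,
# which is exactly the cue a snapping tool can latch onto.
```

The response peaks along the square's outline (e.g. `edges[8, 16]`) and vanishes in its interior (`edges[16, 16]`), which is the signal a human-guided curve can snap to.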
I'm not a HN etiquette stickler, and I'm not accusing anyone of any foul play, but the actual YouTube video was submitted 17 hours prior to this post: <a href="https://news.ycombinator.com/item?id=6358080" rel="nofollow">https://news.ycombinator.com/item?id=6358080</a><p>This is just in case you want to throw a few upvotes their way for being first. It also illustrates that late-night (PDT, UTC-7) posts don't get a whole lot of votes; proper timing is crucial to getting them.
What I was thinking all along: "Oh come on! It can't be this perfect, show me where it fails."
And they did!<p>This is indeed magic. I'm so happy to live in this age, and be part of the "Sorcerers' Guild".
The paper is not out yet, but you can read the abstract here:<p><a href="http://www.faculty.idc.ac.il/arik/site/3Sweep.asp" rel="nofollow">http://www.faculty.idc.ac.il/arik/site/3Sweep.asp</a>
If you marked shadows and associated them with their source, could you then recover the light source(s) and be able to remove the baked shadows and recast them in real time?<p>Also, with the shiny objects, could you specify the material properties and have it "back out" the reflection such that the reflection was recomputed as you moved the shape around?
WOW.<p>Forget the Photoshop stuff, this needs to be integrated with 3D printing <i>immediately</i>.<p>Spit out a design file into Tinkercad[1] for some minor adjustments and BAM, you've made a printable 3D model.<p>[1] <a href="https://tinkercad.com/" rel="nofollow">https://tinkercad.com/</a>
I want this + i❤sketch now, but unfortunately I suspect that jumping up and down and shouting isn't likely to help.<p><a href="http://www.dgp.toronto.edu/~shbae/ilovesketch.htm" rel="nofollow">http://www.dgp.toronto.edu/~shbae/ilovesketch.htm</a>
The most impressive thing for me about this demo is how good the shape detection is (it seems way better than the magnetic lasso in Photoshop), and how they brought separate pieces of technology together into such a fluid experience. And how the presenter sounds about 12.<p>These guys/girls know what they're doing.
This seems quite similar to work presented in 2011:
<a href="https://www.youtube.com/watch?v=hmzPWK6FVLo" rel="nofollow">https://www.youtube.com/watch?v=hmzPWK6FVLo</a><p><a href="http://www.kevinkarsch.com/publications/sa11.html" rel="nofollow">http://www.kevinkarsch.com/publications/sa11.html</a>
It looks so simple, yet my limited understanding of image processing tells me this requires a ton of research and technology. The pace of innovation is staggering!
I am skeptical, although I remain hopeful that my skepticism is misplaced. The "software" somehow seems to know what pattern of colors should exist on the other side of the object. Can someone explain to us how this aspect of the software works?
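What the parent is describing is image inpainting: once the object is lifted out, the hole behind it gets synthesized from the surrounding pixels. Real tools use patch-based texture synthesis; below is only a toy diffusion-based sketch (the `diffuse_inpaint` function is hypothetical, not the paper's method) that fills a hole by repeatedly averaging each unknown pixel's four neighbors:

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Fill masked pixels by iteratively averaging their 4-neighbors.
    A toy stand-in for the patch-based hole filling real tools use;
    it produces smooth fills, not reconstructed texture."""
    out = img.astype(float).copy()
    out[mask] = 0.0  # unknown region starts empty
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]  # only unknown pixels are updated
    return out

# Flat gray background with a square "removed object" in the middle.
img = np.full((20, 20), 0.5)
mask = np.zeros((20, 20), dtype=bool)
mask[6:14, 6:14] = True
filled = diffuse_inpaint(img, mask)
# The hole converges toward the surrounding gray value.
```

On a flat background this converges to the surrounding color, which hints at why the demo's repetitive or smooth backgrounds look so convincing; patterned backgrounds need the fancier patch-matching approaches.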
Is there a reason many of these crazy image-processing technologies never seem to have actual demos or releases? The only exception I can think of is the "smart erase" idea, which has been implemented in Photoshop as well as GIMP.
A lot of cool rendering/modeling research seems amazingly well-suited for the film industry and this is a perfect example ... besides the obvious applications in making CGI versions of real-world scenes, you can just imagine the director saying "oh no, that lamp is in the wrong location in all that footage... move it (without reshooting)!"<p>I wonder if it's just a coincidence, or whether the mega-bucketloads of money the film industry throws at CGI are a major factor in funding related research even in academia?
Question for the entrepreneurs: how would one monetize such a cool algorithm? I come across plenty of cool stuff like this, but often without any idea of how it could solve real problems.
Also awesome is that it handles the background replacement so well. This could also be used to just remove an ugly lamp post, telephone pole, etc from an otherwise good photo. (assuming you can remove objects and resave the image)<p>Edit: I am aware that Photoshop has some of this available. I've not played with it so I don't know how they compare.
This is amazing. My first thought is this could allow F1 teams to get a much better idea of what new packages their competitors are bringing to races early on just by looking at photos and video footage and modelling the new parts.
This is indeed very impressive, and I can see how much work and passion went into this project. But I still have to say it handles almost only round or cylindrical objects; there is still a long way to go.
Is it too much to hope that this tech will be implemented in a program that's within an "average" user's budget? (i.e. non-enterprise).
I think this is really impressive. Do you think it will be years before this actually gets used in public 3D modelling tools?<p>I vote for this to be used with 3D printers.