
‘2001: A Space Odyssey’ rendered in the style of Picasso

473 points by cjdulberger · almost 9 years ago

27 comments

mockery · almost 9 years ago
This is cool, but the frame-to-frame variance is distracting. I really want to see this reimplemented with temporal constraints, à la this paper: https://www.youtube.com/watch?v=Khuj4ASldmU
mgraczyk · almost 9 years ago
I remember watching an interview with the creators of South Park in which they described the transition from animating with cardboard cutouts to a system built on CorelDraw and other software that sped up the process. The bulk of the efficiency gain came from carefully defining all the frequently used objects (characters, houses) once, with movable components, and reusing those objects in the per-episode animation pipeline.

I can easily imagine an animation system like the one presented here enabling another massive improvement in animation efficiency. In the same way animation software allowed South Park to reuse pre-drawn objects, a deep learning system could let South Park define its entire drawing style just once, then generate complete episodes from simple storyboards and animation directives. Fortunately, South Park already has a significant amount of training data available: every episode produced so far.
nsimoneaux · almost 9 years ago
"It means nothing to me. I have no opinion about it, and I don't care."

Picasso on the first moon landing, quoted in The New York Times (1969-07-21).

https://en.wikiquote.org/wiki/Pablo_Picasso

Curious about his feelings regarding this work. (I find it beautiful.)
stepvhen · almost 9 years ago
I have two opinions: 1) I don't think cubism transfers well to a motion-picture format; 2) I think these experiments, as they currently stand, attempt to merge two styles and end up with neither, and nothing novel in their place; there is little Kubrick or Picasso in the final piece.

I think it's superficial and doesn't do either source justice.
jjcm · almost 9 years ago
I remember when A Scanner Darkly came out there was a lot of talk about how they achieved the style of the film. Some of it was automated, but a lot still had to be done by hand. I wonder if, using deep learning systems, we could achieve the same effect that film had with nearly zero human interaction.

For those that haven't seen the movie, here's the trailer: https://www.youtube.com/watch?v=TY5PpGQ2OWY
fractallyte · almost 9 years ago
Possibly the finest painting software currently available is Synthetik's Studio Artist (http://synthetik.com/). Compared to Adobe's powerhouse software it's relatively unknown, but that doesn't make it any less innovative.

It uses an algorithmic 'paint synthesizer' to generate brushes (with hundreds of presets) and auto-paint canvases, and is designed for animation (rotoscoping) as well as static artwork. The output can be reminiscent of the style of the movie 'A Scanner Darkly', but the software is hugely flexible. Here are a couple of rather amazing examples: http://studioartist.ning.com/video/auto-rotoscoped-dancers and http://studioartist.ning.com/video/dance-styles-animation-reel

Also, unlike most other 'painterly' software, the graphics are resolution-independent, meaning they can be scaled up to any size without loss of detail.
Udik · almost 9 years ago
There is something that escapes me about this very cool neural style transfer technique. One would expect it to need at least three starting images: the one to transform, the one used as a source for the style, and a non-styled version of the style source. That last one should give the network hints on how the unstyled version gets transformed into the styled one: for example, what does a straight line end up as in the style? How is a colour gradient represented? Without it, it seems the neural network would have to recognize objects in the styled picture and derive the applied transformation from prior knowledge of how they would normally look. But of course the NN is not advanced enough to do that. Can someone explain roughly how this works?
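The usual answer for the Gatys et al. approach this appears to be based on is that no unstyled version of the style source is needed: "style" is defined as the correlations (Gram matrices) of CNN feature maps computed from the style image alone, and the output is optimized to match those statistics while staying close to the content image's deeper-layer activations. A minimal sketch, assuming PyTorch and a recent torchvision (the VGG19_Weights API); the layer indices and weights below are illustrative, not the ones used for the video:

    # Minimal sketch of single-image neural style transfer (Gatys-style).
    # Only two inputs are needed: a content image and a style image. "Style" is
    # captured by Gram matrices of VGG feature maps from the style image alone.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    STYLE_LAYERS = (0, 5, 10, 19, 28)   # conv layers whose Gram matrices define "style"
    CONTENT_LAYER = 21                  # deeper layer whose raw activations define "content"

    def features(x):
        out = {}
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in STYLE_LAYERS or i == CONTENT_LAYER:
                out[i] = x
        return out

    def gram(f):
        # Channel-by-channel correlation matrix, normalized by feature-map size.
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def stylize(content, style, steps=300, style_weight=1e6):
        # content, style: float tensors of shape (1, 3, H, W), ImageNet-normalized
        target = content.clone().requires_grad_(True)
        opt = torch.optim.Adam([target], lr=0.02)
        c_ref = features(content)[CONTENT_LAYER].detach()
        s_ref = {i: gram(f).detach() for i, f in features(style).items() if i in STYLE_LAYERS}
        for _ in range(steps):
            opt.zero_grad()
            t = features(target)
            loss = F.mse_loss(t[CONTENT_LAYER], c_ref)
            loss = loss + style_weight * sum(F.mse_loss(gram(t[i]), s_ref[i]) for i in STYLE_LAYERS)
            loss.backward()
            opt.step()
        return target.detach()

Note that the optimization is run independently per frame, which is exactly why the frame-to-frame flicker discussed elsewhere in the thread appears.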
shiro · almost 9 years ago
It certainly has a wow factor, but once you get past the initial impression, it's interesting that the brain starts recognizing the content (the motion of characters and objects) separately from the visual style, and even starts applying a negative cubism filter so that we don't actually see the visual style anymore. (In other words, the brain treats the applied style as noise.)

It could be a way to exploit the mismatch of content and style as a certain form of expression; but it might be more interesting if we could modify the temporal structure as well.
yxlx · almost 9 years ago
Like someone said about this on /r/programming:

> Pretty tight that computers can drop acid now.

Anyway, here's a direct link to the video for mobile users: https://vimeo.com/169187915
habosa · almost 9 years ago
The big frame-to-frame changes certainly add to the "trippiness", but I'd love to see this where the value function (or whatever it's called in ML) also prioritizes reducing the frame-to-frame diff, so that I could actually watch the full-length movie like this.
slr555 · almost 9 years ago
I am much more of an artist than a technology person, and the rendering inconsistency the author refers to is one of the coolest aspects of the video. This is the kind of happy accident that gives work originality and makes it more than a slavish copy. It reminds me of Link Wray putting a pencil through the speaker cone of his amplifier.
2bitencryption · almost 9 years ago
I kind of want someone to do the same thing with a "NON-neural-network" Picasso filter, like the ones in Photoshop and similar image-editing programs. I want to compare how much the neural network's understanding of Picasso's style adds to the work (I imagine it's a lot, because this looks incredible).
jamesrom · almost 9 years ago
A whole new debate about copyright is around the corner.
jamesdwilson · almost 9 years ago
Serious question: how is this different from one of the many Photoshop filters that could be applied iteratively to each frame?
elcapitan · almost 9 years ago
"Poetry is what gets lost in translation"; "Art is what gets lost in machine learning".

I think it's interesting that it's possible to create what are basically filters from existing images, but applying those filters to large numbers of images (like in this movie) quickly loses its novelty and becomes just as boring as any Photoshop or GIMP filter did in the '90s after you'd seen it three times.

When I look at Picasso's actual pictures, I am astonished and amazed by every new one I get to see. With these pictures, I get more and more bored with every additional image.
ggchappell · almost 9 years ago
Cool.

It needs some kind of averaging with nearby frames (or whatever) to avoid the constant flicker in areas of more or less solid color.
onetwotree · almost 9 years ago
Neural style transfer is extremely fun to play with.

If you have a system with a recent-ish graphics card (I'm doing fine with my GTX 970), put Linux on it and check out the many GitHub projects that implement this stuff (some of the tools only work on Linux).

It's a great way to start learning about deep learning and GPU-based computation, which are starting to look like very good things to have on your resume.

Plus, you get to make cool shit like this that you can actually show to your friends. I'm getting more interested in the text-generation stuff as well; I'd love to make a Trump speech generator :-)
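For anyone trying the optimization sketch earlier in the thread, the only GPU-specific step (assuming PyTorch) is checking for a CUDA device and moving the model and images onto it:

    # Check for a CUDA-capable GPU; the optimization also runs on CPU, just very slowly.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("running on", device)
    # then: vgg.to(device), and .to(device) on the content/style tensors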
golergka · almost 9 years ago
Can someone knowledgeable estimate how far we are from rendering this at 60 frames per second? I can't wait to try it as a post-processing layer in game development.
auggierose · almost 9 years ago
Awesome. Just the black monolith should stay black :-)
tunnuz · almost 9 years ago
"Oh my God, it's full of layers."
TrevorJ · almost 9 years ago
It would be interesting to see whether they could reduce the temporal noise without compromising the overall effect.
6stringmerc · almost 9 years ago
Not trying to overstate my qualifications to make the following claim, but I'm pretty sure Kubrick would have hated this. And, as such, would have had it destroyed.
stcredzero · almost 9 years ago
Is it just me, or have all forms of art simply melded with self-promotion? (Melded in the sense found in the movie "The Fly.")
rorygreig · almost 9 years ago
I wonder how long it takes to render each frame.

Eventually, with fast enough GPUs, you could render a video game in this style; now that I would like to see.
jdblair · almost 9 years ago
This is amazing. That said, it doesn't have the distorted perspective that I think is a hallmark of Picasso's work.
rurban · almost 9 years ago
His http://bhautikj.tumblr.com/tagged/drumpf is much better: Donald Drumpf as a sausage.
kodfodrasz · almost 9 years ago
So basically you take someone else's work, run it on some content (also someone else's work), post it, and: wow, innovation.

In the last year myriads of similar things have been created, and this is simply boring.

It's about as interesting as a random Tumblr reblog. It may be curious, but it lacks any sense of achievement or originality.