
Video generation models as world simulators

361 points | by linksbro | about 1 year ago

38 comments

empath-nirvana about 1 year ago
I think people might be missing what this enables. It can make plausible continuations of video, with realistic physics. What happens if this gets fast enough to work _in real time_?

Connect this to a robot that has a real time camera feed. Have it constantly generate potential future continuations of the feed that it's getting -- maybe more than one. You have an autonomous robot building a real time model of the world around it and predicting the future. Give it some error correction based on how well each prediction models the actual outcome and I think you're _really_ close to AGI.

You can probably already imagine different ways to wire the output to text generation and controlling its own motions, etc., predicting outcomes based on actions it, itself, could plausibly take, and choosing the best one.

It doesn't actually have to generate realistic imagery, or imagery that doesn't have any mistakes, or imagery that's high definition to be used in that way. How realistic is our own imagination of the world?

Edit: I'm going to add a specific case. Imagine a house-cleaning robot. It starts with an image of your living room. Then it creates an image of your living room after it's been cleaned. Then it interpolates a video _imagining itself cleaning the room_, then acts as much as it can to mimic what's in the video, then generates a new continuation, then acts, and so on. Imagine doing that several times a second, if necessary.
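A minimal sketch of the loop this comment imagines, assuming a hypothetical video model exposed as a `continue_video` callable plus equally hypothetical `act_toward` and `observe` hooks for the robot (none of these names come from the Sora report): roll out several imagined continuations of the camera feed, score each against a goal image, and act toward the best one.

```python
import numpy as np

def score_against_goal(frame, goal):
    """Higher is better: negative mean squared pixel distance to the goal image."""
    return -float(np.mean((frame - goal) ** 2))

def control_step(continue_video, act_toward, observe, goal, n_hypotheses=4, horizon=8):
    """One imagine-then-act cycle of the hypothetical cleaning-robot loop.

    continue_video(frames, horizon) -> list of imagined future frames (the video model)
    act_toward(clip)                -> motor commands that try to mimic the imagined clip
    observe()                       -> current camera frame
    """
    recent = [observe()]
    futures = [continue_video(recent, horizon) for _ in range(n_hypotheses)]
    best = max(futures, key=lambda clip: score_against_goal(clip[-1], goal))
    act_toward(best)
    # Error signal: how far the real outcome still is from the imagined goal state.
    return score_against_goal(observe(), goal)

# Toy usage with random stand-ins, only to show the shape of the loop.
rng = np.random.default_rng(0)
goal, frame = rng.random((64, 64, 3)), rng.random((64, 64, 3))
imagine = lambda frames, horizon: [frames[-1] + 0.01 * rng.standard_normal(frames[-1].shape)
                                   for _ in range(horizon)]
print(control_step(imagine, lambda clip: None, lambda: frame, goal))
```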
SushiHippie about 1 year ago
I like that this one shows some "fails", and not just the top of the top results.

For example, the surfer is surfing in the air at the end:

https://cdn.openai.com/tmp/s/prompting_7.mp4

Or this "breaking" glass that does not break, but spills liquid in some weird way:

https://cdn.openai.com/tmp/s/discussion_0.mp4

Or the way this person walks:

https://cdn.openai.com/tmp/s/a-woman-wearing-a-green-dress-and-a-sun-hat-taking-a-pleasant-stroll-in-Antarctica-during-a-winter-storm.mp4

Or wherever this map is coming from:

https://cdn.openai.com/tmp/s/a-woman-wearing-purple-overalls-and-cowboy-boots-taking-a-pleasant-stroll-in-Mumbai-India-during-a-beautiful-sunset.mp4
modeless about 1 year ago
> Other interactions, like eating food, do not always yield correct changes in object state

So this is why they haven't shown Will Smith eating spaghetti.

> These capabilities suggest that continued scaling of video models is a promising path towards the development of highly-capable simulators of the physical and digital world

This is exciting for robotics. But an even closer application would be filling holes in gaussian splatting scenes. If you want to make a 3D walkthrough of a space you need to take hundreds to thousands of photos with seamless coverage of every possible angle, and you're still guaranteed to miss some. Seems like a model this capable could easily produce plausible reconstructions of hidden corners or close-up detail or other things that would just be holes or blurry parts in a standard reconstruction. You might only need five or ten regular photos of a place to get a completely seamless and realistic 3D scene that you could explore from any angle. You could also do things like subtract people or other unwanted objects from the scene. Such an extrapolated reconstruction might not be completely faithful to reality in every detail, but I think this could enable lots of applications regardless.
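A rough sketch of that hole-filling idea, assuming hypothetical `fit_splats`, `render_novel_view`, and `inpaint_view` callables standing in for the reconstruction stack and the generative model; nothing here is an API from the post, it only shows where a Sora-like model could slot into such a pipeline.

```python
import numpy as np

def sample_unseen_poses(poses, n=8):
    """Toy pose proposer: jitter the existing camera poses to look between them."""
    rng = np.random.default_rng(0)
    return [p + 0.1 * rng.standard_normal(np.shape(p)) for p in poses[:n]]

def complete_capture(photos, camera_poses, fit_splats, render_novel_view, inpaint_view):
    """Fit a scene from a few photos, then let a generative model fill unseen views.

    fit_splats(images, poses)      -> fitted 3D scene (e.g. Gaussian splats)
    render_novel_view(scene, pose) -> rendered image plus a mask of missing/blurry pixels
    inpaint_view(image, mask)      -> the generative model's plausible completion
    """
    scene = fit_splats(photos, camera_poses)
    for pose in sample_unseen_poses(camera_poses):
        rendered, hole_mask = render_novel_view(scene, pose)
        if hole_mask.any():                          # this viewpoint has holes
            photos.append(inpaint_view(rendered, hole_mask))
            camera_poses.append(pose)
    return fit_splats(photos, camera_poses)          # refit with the imagined views
```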
nopinsight about 1 year ago
AlphaGo and AlphaZero were able to achieve superhuman performance due to the availability of perfect simulators for the game of Go. There is no such simulator for the real world we live in (although pure LLMs sort of learn a rough, abstract representation of the world as perceived by humans). Sora is an attempt to build such a simulator using deep learning.

> "Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world."

General, superhuman robotic capabilities on the software side can be achieved _once such a simulator is good enough_. (Whether that can be achieved with this approach is still not certain.)

Why superhuman? Larger context length than our working memory is an obvious one, but there will likely be other advantages, such as using alternative sensory modalities and more granular simulation of details unfamiliar to most humans.
guybedo about 1 year ago
I think it's Yann LeCun who stated a few times that video was the better way to train large models, as it's more information dense.

The results really are impressive. Being able to generate such high quality videos, and to extend videos into the past and the future, shows how much the model "understands" the real world, object interactions, 3D composition, etc.

Although image generation already requires the model to know a lot about the world, I think there's really a huge gap with video generation, where the model needs to "know" 3D, object movements and interactions.
iliane5 about 1 year ago
Watching an entirely generated video of someone painting is crazy.

I can't wait to play with this, but I can't even imagine how expensive it must be. They're training at full resolution and can generate up to a minute of video.

Seeing how bad video generation was, I expected it would take a few more years to get to this, but it seems like this is another case of "Add data & compute"(TM), where transformers prove once again they'll learn everything and be great at it.
data-ottawa about 1 year ago
I know the main post has been getting a lot of reaction, but this page absolutely blew me away. The results are striking.

The robot examples are very underwhelming, but the people and background people are all very well done, at a level much better than most static image diffusion models produce. Generating the same people as they interact with objects is also not something I expected a model like this to do well so soon.
lairv about 1 year ago
I find it wild that this model does not have an explicit 3D prior, yet learns to generate videos with such 3D consistency that you can directly train a 3D representation (NeRF-like) from those videos: https://twitter.com/BenMildenhall/status/1758224827788468722
pedrovhb about 1 year ago
That's an interesting idea. Analogous to how LLMs are simply "text predictors" but end up having to learn a model of language and the world to correctly predict cohesive text, it makes sense that "video predictors" also have to learn a model of the world that makes sense. I wonder how many orders of magnitude further they have to evolve to be similarly useful.
anonyfox about 1 year ago
If they would allow this (maybe a premium+ model) they could soon destroy the whole porn industry. Not the websites, but the (often abused) sex workers. Everyone could describe the fetish they are into and get it visualized instantly, without the need for physical human suffering to produce these videos.

I know it's a delicate topic people (especially in the US) don't want to speak about at all, but damn, this is a giant market and could do humanity good if done well.
zone411 about 1 year ago
Video will be especially important for language models to grasp physical actions that are instinctive and obvious to humans but not explicitly detailed in text or video captions. I mentioned this in 2022:

https://twitter.com/LechMazur/status/1607929403421462528

https://twitter.com/LechMazur/status/1619032477951213568
dang about 1 year ago
Related ongoing thread:

_Sora: Creating video from text_ - https://news.ycombinator.com/item?id=39386156 - Feb 2024 (1430 comments)
GaggiX about 1 year ago
The Minecraft demo makes me think that soon we'll be playing games directly from the output of one of these models. Unlimited content.
koonsolo about 1 year ago
Yesterday I was watching a movie on Netflix and thought to myself, what if Netflix generated a movie based on what I want to see and what I like?

Plus, it could generate it in real time and take my responses into account. I look bored? Spice it up, etc.

Today such a thing seems closer than I thought.
binary132 about 1 year ago
Maybe this says more about me than about the technology, but I found the consistency of the Minecraft simulation super impressive.
chankstein38 about 1 year ago
This is the second Sora announcement I've seen. Am I missing how I can play with it? The examples in the papers are all well and good, but I want to get my hands on it and try it.
proc0 about 1 year ago
I don't know if there is research into this, didn't see it mentioned here, but this is the most probable path to something like AI consciousness and AGI. Of course it's highly speculative, but video-to-world simulation is how the brain evolved and probably what is needed to have a robot behave like a living being. It would just do this in reverse: video input to inner world model, and use that for reasoning about the world. Extremely fascinating, and also scary that this is happening so quickly.
myth_drannon about 1 year ago
Should I short all the 3D tools/movies/VFX companies?
colesantiago about 1 year ago
Damn, even Minecraft videos are being simulated. This is crazy to see from OpenAI.

Edit: changed the links to the direct ones!

https://cdn.openai.com/tmp/s/simulation_6.mp4

https://cdn.openai.com/tmp/s/simulation_7.mp4
pmontra about 1 year ago
The video with the two MTBs going downhill: it seems to me that the long left turn that begins a few seconds into the video is way too long. It's easy to misjudge that kind of thing (try to draw a road race track by looking at a single lap of it), but the trail could end up below the point where it started, or too close to it to be physically realistic. I was expecting to see a right turn at any moment but it kept going left. It could be another consequence of the lack of real knowledge about the world, similar to the glass shattering example at the end of the article.
htrp about 1 year ago
> We empirically find that training on videos at their native aspect ratios improves composition and framing. We compare Sora against a version of our model that crops all training videos to be square, which is common practice when training generative models. The model trained on square crops (left) sometimes generates videos where the subject is only partially in view. In comparison, videos from Sora (right) have improved framing.

Every CV preprocessing pipeline is in shambles now.
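For a concrete sense of the practice the quoted passage pushes back on, here is a small torchvision-flavoured sketch (assuming frames arrive as tensors; the sizes and aspect-ratio buckets are illustrative, not anything from the report): the usual resize-plus-center-crop throws away the sides of a 16:9 frame, while a native-aspect alternative only caps the long side and batches samples by aspect-ratio bucket.

```python
import torch
from torchvision import transforms
from torchvision.transforms import functional as F

# Common practice the report argues against: force every frame to a fixed square.
square_crop = transforms.Compose([
    transforms.Resize(256),        # shorter side -> 256, aspect ratio kept so far
    transforms.CenterCrop(256),    # then cut to 256x256, discarding the frame edges
])

def resize_keep_aspect(img, max_side=512):
    """Native-aspect alternative: only cap the long side, keep the original framing."""
    h, w = img.shape[-2:]
    scale = max_side / max(h, w)
    return F.resize(img, [int(h * scale), int(w * scale)])

def aspect_bucket(img, buckets=(0.5, 0.75, 1.0, 1.33, 1.78)):
    """Group samples with similar aspect ratios so they can still be batched."""
    h, w = img.shape[-2:]
    return min(buckets, key=lambda b: abs(b - w / h))

x = torch.rand(3, 720, 1280)        # one 16:9 frame
print(square_crop(x).shape)         # torch.Size([3, 256, 256]) -- sides cropped away
print(resize_keep_aspect(x).shape)  # torch.Size([3, 288, 512]) -- framing preserved
print(aspect_bucket(x))             # 1.78
```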
vunderba about 1 year ago
The improvement in temporal consistency, given that these generated videos are 3 to 4 times longer than anything else on the market (Runway, Pika, etc.), is truly remarkable.
sjwhevvvvvsj about 1 year ago
This is insanely good, but look at the legs around 16 seconds in: they kinda morph through each other. Generally the legs are slightly unnerving.

Still, god damn.
danavar about 1 year ago
While the Sora videos are impressive, are these really world simulators? While some notion of real-world physics probably exists somewhere within the model, doesn't all the completely artificial training data corrupt it?

Reasoning, logic, formal systems, and physics exist in a seemingly completely different, mathematical space than pure video.

This is just a contrived, interesting viewpoint of the technology, right?
newswasboring about 1 year ago
This is a totally silly thought, but I still want to get it out there.

> Other interactions, like eating food, do not always yield correct changes in object state

Can this be because we just don't shoot a lot of people eating? I think it is general advice not to show people eating on camera, for various reasons. I wonder if we know whether that kind of topic bias exists in the dataset.
anirudhv27 about 1 year ago
What makes OpenAI so far ahead of all of these other research firms (or even startups like Pika, Runway, etc.)? I feel like I see so many fields where progress is being made all across the board, and then OpenAI suddenly swoops in with an insane breakthrough light-years ahead of everyone else.
pellucide about 1 year ago
I am a newbie to this area. Honest questions:

Is this generating videos as streaming content, e.g. like an mp4 video? As far as I can see, it is doing that. Is it possible for AI to actually produce the 3D models?

What kind of compute resources are required to produce the 3D models?
jk_tech about 1 year ago
This is some incredible and fascinating work! The applications seem endless:

1. High quality video or images from text
2. Taking any content as input and generating forwards/backwards in time
3. Style transformation
4. Digital world simulation!
exe34 about 1 year ago
The current development of AI seems like a speed run of Crystal Society in terms of their interaction with the world. The only thing missing is the Inner Purpose.
neurostimulant about 1 year ago
Where does the training data come from? YouTube?
lbrito about 1 year ago
Okay, The Matrix can't be too far away now.
blueprint about 1 year ago
> Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world.

So they're gonna include the never-before-observed-but-predicted Unruh effect as well? And other quantum theory? Cool..

> For example, it does not accurately model the physics of many basic interactions, like glass shattering.

... oh

Isn't all of the training predicated on visible, gathered data, rather than theory? If so, I don't think it's right to call these things simulators of the physical world if they don't include physical theory. DFT at least has some roots in theory.
liuliu about 1 year ago
Wow, this is really just a scaled-up DiT. We are going to see tons of similar models very soon.
yakito about 1 year ago
Does anyone know why most of the videos are in slow motion?
tokai about 1 year ago
Ugh, AI-generated images everywhere are already annoying enough. Now we're gonna have these factitious videos clogging up everything, and I'll have to explain to my old neighbor that Biden did in fact not eat a fetus, again and again.
advael about 1 year ago
People are obviously already pointing out the errors in various physical interactions shown in the demo videos, including the research team themselves, and I think the plausibility of the generated videos will likely improve as they work on the model more. However, I think the major reason this generation -> simulation leap might be harder than they think is actually a plausibility/accuracy distinction.

Generative models are general and versatile compared to predictive models, but they're intrinsically learning an objective that assesses their extrapolations on spatial or sequential (or in the case of video, both) plausibility, which has a lot more degrees of freedom than accuracy. In other words, the ability to create reasonable-enough hypotheses for what the next frame or the next pixel over could be may end up not being enough.

The optimistic scenario is that it's possible to get to a simulation by narrowing this hypothesis space enough to accurately model reality. In other words, it's possible that this is just something that could fall out of the plausibility being continuously improved, like the subset of plausible hypotheses shrinks as the model gets better, and eventually we get a reality-predictor, but I think there are good reasons to think that's far from guaranteed. I'd be curious to see what happens if you restrict training data to unaltered camera footage rather than allowing anything fictitious, but the least optimistic possibility is that this kind of capability is necessary but not sufficient for adequate prediction (or, slightly more optimistically, can only get there with amounts of resolution that are currently infeasible, or something).

Some of the reasons the less optimistic scenarios seem likely is that the kinds of extrapolation errors this model makes are of similar character to those of LLMs: extrapolation follows a gradient of smooth apparent transitions rather than some underlying logic about the objects portrayed, and sometimes seems to just sort of ignore situations that are far enough outside of what it's seen rather than reconcile them.

For example, the tidal wave/historical hall example is a scenario unlikely to have been in the training data. Sure, there's the funny bit at the end where the surfer appears to levitate in the air, but there's a much larger issue with how these two contrasting scenes interact, or rather fail to. What we see looks a lot more like a scene of surfing superimposed via Photoshop or something on a still image of the hall, as there's no evidence of the water interacting with the seats or walls of the hall at all. The model will just roll with whatever you tell it to do as best it can, but it's not doing something like modeling "what would happen if" that implausible scenario played out, and even doing that poorly would be a better sign that this is doing something like "simulating" the described scenario. Instead, we have impressive results for prompts that likely strongly correspond to scenes the model may have seen, and evidence of a lack of composition in cases where a particular composition is unlikely to have been seen and needs some underlying understanding of how it "would" work that is visible to us.
bawana about 1 year ago
SORA... the entire movie industry is now out of a job.
RayVR about 1 year ago
If there's one thing I've always wanted, it's shitty video knockoffs of real life. Can't wait to stream some AI hallucinations.