
Show HN: Revideo – Create Videos with Code

298 points, by hkonsti, 11 months ago
Hey HN! We're building Revideo (https://github.com/redotvideo/revideo), an open source framework for programmatic video editing.

Revideo lets you create video templates in Typescript and render them with dynamic inputs through an API. It also comes with a <Player /> component that lets you preview your projects in the browser and integrate video editing functionality into web apps.

The project is useful for anyone who wants to build apps that automate certain video editing tasks. A lot of companies in the space build their own custom stack for this, like Opus (https://www.opus.pro/), which automatically creates highlight videos from podcasts, or Clueso (https://www.clueso.io/), which lets you create stutter-free product walkthroughs with AI voiceovers.

Revideo is based on the HTML Canvas API and is forked from Motion Canvas (https://github.com/motion-canvas/motion-canvas), a tool that lets you create canvas animations. While Motion Canvas is intended by its maintainer to exclusively be a standalone application [1], we have turned Revideo into a library that developers can integrate into their apps, while specifically focusing on video use cases. To support this, we have, among other things, added the ability to do headless rendering, made video rendering much faster, and added support for syncing and exporting audio.

We're excited about programmatic video editing because of the possibility to automate content creation with AI. One of our users is building StoriesByAngris (https://storiesbyangris.com/), which lets you create video-based RPG stories from language prompts. Other users are marketing-tech companies that help their customers generate and A/B test different versions of video ads.

We started to work on video tooling because we ourselves explored a bunch of product ideas in the space of AI-based video creation earlier this year. For example, we built apps that automatically create educational short videos and tinkered with apps that let you create memes.

While building these products, we were frustrated with the video editing frameworks we used: Moviepy (https://github.com/Zulko/moviepy), which we used initially, doesn't work in the browser, so we'd often have to wait minutes for a video to render just to test our code changes. Remotion (https://github.com/remotion-dev/remotion), which we switched to later, is pretty good, but we didn't want to rely on it as it is not FOSS (source-available only).

We had already followed Motion Canvas for some time and really liked it, so we thought that extending it would get us to something useful much faster than building an animation library from scratch. We initially tried to build Revideo as a set of Motion Canvas plugins, but we soon realized that the changes we were making were too drastic and far too complex to fit into plugins. This is why we ultimately created a fork. We're unsure if this is the right way to go in the long term, and would prefer to find a way to build Revideo without feeling like we're dividing the community - if you have experience with this (keeping forks with complex changes in sync with upstream) or other suggestions on how to solve this, we'd love your input.

Our current focus is improving the open source project. In the long term, we want to make money by building a rendering service for developers building apps with Revideo.

We'd love to hear your feedback and suggestions on what we can improve! You can find our repo at https://github.com/redotvideo/revideo, and you can explore example projects at https://github.com/redotvideo/examples

[1] "Motion Canvas is not a normal npm package. It's a standalone tool that happens to be distributed via npm." - https://github.com/orgs/motion-canvas/discussions/1015
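
To make the "video templates in Typescript" idea concrete, here is a minimal sketch of what a scene could look like, modeled on the Motion Canvas scene API that Revideo forks. The package names, imports and exact signatures below are assumptions rather than Revideo's documented API; the examples repo linked above shows real usage.

  // A minimal, hypothetical scene sketch in the style of the Motion Canvas API
  // that Revideo forks. Package names and exact signatures are assumptions;
  // see https://github.com/redotvideo/examples for real usage.
  import {makeScene2D, Txt} from '@revideo/2d';
  import {createRef, waitFor} from '@revideo/core';

  export default makeScene2D(function* (view) {
    // A text node whose contents could be filled from a dynamic input
    // passed in at render time.
    const title = createRef<Txt>();
    view.add(new Txt({ref: title, text: 'Hello HN', fontSize: 80, fill: '#ffffff'}));

    // The generator describes the timeline imperatively: tween the scale
    // over one second, then hold the final frame for two more seconds.
    yield* title().scale(1.5, 1);
    yield* waitFor(2);
  });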

34 comments

KhoomeiK, 11 months ago
Interesting - LangChain seemed kinda like unnecessary abstractions over natural language (since everything is just string manipulation), but with AI video, there are so many different abstractions I'd need to handle (images, puppeting, facegen, voicegen, etc.).

Seems like there might be room for a "LangChain for Video" in this space...
Beefin, 11 months ago
Hey, this is really cool. We just launched our video embedding model: https://learn.mixpeek.com/vuse-v1-release/

I wonder if there are opportunities for collaboration. It seems we're the only cloud-agnostic video embedding model that allows users to own their embeddings.

Here's a reverse video search tutorial: https://www.youtube.com/watch?v=H92cEhG9uMI&ab_channel=Mixpeek
franciscop, 11 months ago
Looks nice! You might want to disable pixel snapping for the text resizing since it yanks a little bit (Firefox on Mac at least).

I made an experiment in a similar style a while ago, but I decided it was too difficult to keep going as a "tiny" side project, so I never really released anything beyond a demo that you can see here:

https://francisco.io/demo/terminal/
thenorthbay, 11 months ago
Interesting stuff! Which use cases do you think developers will use you most for?

There could be really interesting abstractions that people might build on top of this, like automatically creating and animating infographics, making background sounds, or video cutting and recycling. If you spin this 100x further, an entire video creation studio might emerge.

Which parts of video infrastructure do you want to build first? Which other higher-level parts could be built by you or by users? Where could this go?
sansseriff, 11 months ago
How does Jacob (aarthificial, creator of motion-canvas) feel about this? Will you compensate or include him in some way? I understand the license is MIT so you can do what you want. Just seems like it would be polite to maintain a good relationship with him and other motion-canvas maintainers.
mattdesl, 11 months ago
Looks great. How are you encoding the video into MP4? Ffmpeg with wasm? Or WebCodecs?

I've struggled to find a pure client-side encoder that is as fast, lightweight and high quality (in terms of lossiness) as what I had going with mp4-h264 [1]. I suspended the project out of legal concern, but it seems like the patents are finally starting to run their course, and it might be worth exploring it again. I've been able to use it to stream massive 8k near-pixel-perfect MP4s for generative art (where quality is more important than file size), compared to WebCodecs, which always left me with a too-lossy result.

[1] https://github.com/mattdesl/mp4-h264
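
For readers who have not used it, the WebCodecs route mentioned above looks roughly like the sketch below. The codec string, bitrate and per-frame canvas drawing are illustrative assumptions, and the encoded chunks still need a separate muxer to become an MP4; this is not necessarily how Revideo's exporter works.

  // Rough sketch of encoding canvas frames in the browser with the WebCodecs
  // VideoEncoder API. Codec string, bitrate and the muxing step are
  // illustrative assumptions, not Revideo's actual exporter.
  async function encodeCanvasFrames(canvas: HTMLCanvasElement, fps: number, frameCount: number) {
    const chunks: EncodedVideoChunk[] = [];

    const encoder = new VideoEncoder({
      output: (chunk) => chunks.push(chunk), // hand these to an MP4/WebM muxer afterwards
      error: (e) => console.error(e),
    });

    encoder.configure({
      codec: 'avc1.42001f', // H.264 baseline; browser support varies
      width: canvas.width,
      height: canvas.height,
      bitrate: 5_000_000,
      framerate: fps,
    });

    for (let i = 0; i < frameCount; i++) {
      // ...draw frame i onto the canvas here...
      const frame = new VideoFrame(canvas, {timestamp: (i * 1_000_000) / fps});
      encoder.encode(frame, {keyFrame: i % (fps * 2) === 0});
      frame.close();
    }

    await encoder.flush();
    return chunks; // still has to be muxed into a container to get a playable file
  }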
mvoodarla, 11 months ago
Congrats on the launch! I work at Sieve (https://www.sievedata.com/). We do a bunch of stuff with AI and video. Excited to check this out :)
mike31fr, 11 months ago
> Remotion is pretty good, but we didn't want to rely on it as it is not FOSS (source-available only).

Noob question: how would you explain, in the simplest form, the difference between FOSS and source-available? In other words, what does Remotion not have that would make it FOSS?
pavi2410, 11 months ago
I see that Revideo uses generator functions, which seems intuitive to me as it linearizes frame sequences with respect to time as the function yields.

How does this compare to Remotion^, which uses the "React" mental model?

^: https://remotion.dev
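
For contrast, Remotion's mental model makes every frame a pure render of the current frame number, rather than a point the generator has yielded its way to. Below is a rough sketch using Remotion's documented useCurrentFrame and interpolate helpers; the component itself is hypothetical and not taken from either project's docs.

  // Hypothetical Remotion-style component, for contrast with the generator model.
  // useCurrentFrame() and interpolate() are Remotion's documented helpers; the
  // component itself is only an illustration.
  import React from 'react';
  import {useCurrentFrame, interpolate} from 'remotion';

  export const Title: React.FC<{text: string}> = ({text}) => {
    // Every frame is a pure function of the current frame number.
    const frame = useCurrentFrame();
    // Scale from 1 to 1.5 over the first 30 frames, then hold.
    const scale = interpolate(frame, [0, 30], [1, 1.5], {
      extrapolateRight: 'clamp',
    });
    return <h1 style={{transform: `scale(${scale})`, color: 'white'}}>{text}</h1>;
  };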
hubraumhugo, 11 months ago
Just curious, are you the founders of https://haven.run (YC S23)? I noticed that the LinkedIn company page now redirects to Revideo.

Would you mind sharing a bit about your pivot? I always find these stories interesting!
matsemann, 11 months ago
Is it possible to render and export the video in the browser, preferably faster than playback speed?

The use case is a service where people can upload certain data and I use that to generate a video. Let's say I gave you the option to make a speed gauge video that displays the values you input, one after another, for a second each. If you upload 60 values, that will be a one-minute video. But if you upload your speed each second for an hour, that will be an hour-long video, which should ideally not take an hour to render. Unfortunately, I've seen that most browser-based tools can't render faster than playback, so the user would have to watch the whole video to actually download it.
rikroots, 11 months ago
I love mucking around with canvases and videos, so I will certainly be checking this out!

On a selfish note, as a canvas library developer/maintainer, I do have questions about your choice of Motion Canvas: what attracted you to that library in particular (I'm assuming it's the Editor, but I could be wrong)?

On a broader note, my main interest in canvas+video centers around responsive, interactive and accessible video displays in web pages. Have you had any thoughts on how you'd like to develop Revideo to support these sorts of functionalities?
creativenolo, 11 months ago
This looks like lots of fun.

I've only skimmed the docs and nothing jumped out on this: would it be possible to use a 3D canvas context? For example, to integrate a dynamic three.js layer/asset into the video?
epiccoleman, 11 months ago
This is really cool, I love this sort of thing.
ashia, 11 months ago
Looks promising - I've been using Shotstack's visual editor to create video templates but keep running into limitations. It looks like Revideo has an "editor" that allows previews but not edits? Is editing through the GUI on the roadmap?
probson, 11 months ago
This looks very cool! I have built a project using Remotion to bake subtitles with some effects into a video from a .srt file, but this approach looks nicer, and FOSS is amazing, so I'll have a go at porting it. Thanks!
earlyriser, 11 months ago
I have used Revideo for a personal project and I really like what you're doing.
Loiro, 11 months ago
Looks like a cool tool. Will play around a bit, thanks for sharing!
rjeli, 11 months ago
Very cool! I assume it uses the WebCodecs VideoEncoder to encode in the browser, maybe with a wasm ffmpeg fallback? How reliable/easy to use have you found that?
albert_e, 11 months ago
Great idea.

When the text-to-code capabilities of LLMs become more mature, libraries like these are going to create a lot of novel use cases and opportunities.
andrewstuart, 11 months ago
Are there commercial use cases for this?
darepublic, 11 months ago
The thing I am dubious about with many of these AI tools is having fine control over the details.
popalchemist, 11 months ago
Does it work with Vue/Vite? I am really hoping someone will make such a solution some day.
simonbarker87, 11 months ago
How does this compare to MoviePy, beyond the JSX-like syntax and being JS?
fanfanfly, 11 months ago
Is Revideo equivalent to MoviePy, or does it have some advantages?
SquidJack, 11 months ago
How would one build a simple video editor with this, e.g. something that can split videos?
andrewstuart, 11 months ago
How is this different from / better than remotion.dev?
akio10, 11 months ago
Really nice - need to try it with a few hobby use cases.
LittleOtter, 11 months ago
That's so cool!!! Thanks for your wonderful work!
hobofan, 11 months ago
I can't be the only one who has assumed an affiliation with Retool based on the "Re-" prefix and similar logo, even though there doesn't seem to be any.
liampulles, 11 months ago
How does this compare to VapourSynth or AviSynth?
Jayakumark, 11 months ago
Does it support Lottie graphic templates?
fiehtle, 11 months ago
Is this like LangChain but for video?
bobosha, 11 months ago
Python support?