
Show HN: Hear what you see: Neural net to convert video to audio

1 point by muxamilian about 3 years ago
WYSIWYH is a neural network that transforms an input video into an audio sequence in real time. It does so by compressing each image with an autoencoder and interpreting the resulting code as a frequency range.

This could be useful for the visually impaired, for example as an aid to indoor navigation. It could also transform an infrared or ultraviolet video into sound, making otherwise invisible colors perceptible.

In a typical autoencoder, the elements of the code vector are independent of each other, so subtle differences between adjacent elements are hard for humans to perceive; the resulting audio would sound like indistinguishable white noise. WYSIWYH therefore uses an encoder that structures the code hierarchically, so that large differences in the input image produce large differences in the output code. This also makes adjacent elements of the code vector correlated, which makes the autoencoder's code friendlier to human perception.
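A minimal sketch of the image-code-to-audio idea described above, not the author's implementation: a fixed random projection stands in for the trained hierarchical encoder, compressing one frame to a short code vector, and each code element is then treated as the amplitude of one sine tone within an assumed frequency range. The sample rate, code length, frequency band, and frame size are all illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000        # audio sample rate (assumed)
FRAME_SEC = 0.05            # one 50 ms audio frame per video frame (assumed)
CODE_DIM = 32               # length of the autoencoder code (assumed)
F_LOW, F_HIGH = 200, 4000   # frequency range spanned by the code, in Hz (assumed)
FRAME_SHAPE = (64, 64)      # placeholder video frame resolution (assumed)

rng = np.random.default_rng(0)
# Fixed random projection standing in for the trained hierarchical encoder.
PROJ = rng.standard_normal((CODE_DIM, FRAME_SHAPE[0] * FRAME_SHAPE[1]))

def encode(frame: np.ndarray) -> np.ndarray:
    """Compress one video frame into a short, normalised code vector."""
    code = np.abs(PROJ @ frame.ravel().astype(np.float64))
    return code / (code.max() + 1e-8)

def code_to_audio(code: np.ndarray) -> np.ndarray:
    """Interpret each code element as the amplitude of one sine tone."""
    t = np.arange(int(SAMPLE_RATE * FRAME_SEC)) / SAMPLE_RATE
    freqs = np.linspace(F_LOW, F_HIGH, len(code))   # one frequency per code element
    tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
    audio = (code[:, None] * tones).sum(axis=0)     # mix the weighted tones
    return audio / (np.abs(audio).max() + 1e-8)

if __name__ == "__main__":
    fake_frame = rng.random(FRAME_SHAPE)            # placeholder for a real video frame
    audio = code_to_audio(encode(fake_frame))
    print(audio.shape)                              # (800,) samples: 50 ms at 16 kHz
```

In this toy version the code elements are independent, so, as the post notes, nearby frames would tend to sound like shapeless noise; the hierarchical encoder is what would make large image differences map to large, perceptible differences in the resulting sound.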

1 comment

basicplus2 about 3 years ago
Always wanted to hear the old silent movies..