A voice separation model that distinguishes multiple speakers simultaneously

78 points by venmul almost 5 years ago

9 comments

wenc almost 5 years ago
This is known as Blind Source Separation [1], and it's been a field of study for decades. The specific problem here seems to be the "cocktail party problem", where you want to isolate a single speaker (or in this case 5?) in a room full of conversations.

When I was in grad school, I knew an EE research group in the building next to mine working on this problem using ICA (independent component analysis) -- this was ca. 2004, before the resurgence of deep learning. Even with ICA, useful results could be obtained.

The results of the FB work [2] with RNNs are pretty impressive (audio samples).

[1] https://en.wikipedia.org/wiki/Signal_separation

[2] https://enk100.github.io/speaker_separation/
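For a concrete sense of what classical ICA does here, below is a minimal sketch using scikit-learn's FastICA on synthetic signals. The signals and the mixing matrix are illustrative stand-ins, not data or code from the FB work.

```python
# A minimal sketch of classical ICA-based source separation using
# scikit-learn's FastICA. Sources and mixing matrix are synthetic
# illustrations, not the FB model discussed above.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two "speakers": a sine and a square-wave-like signal.
s1 = np.sin(2 * np.pi * 1.0 * t)
s2 = np.sign(np.sin(2 * np.pi * 0.5 * t))
S = np.c_[s1, s2]                   # true sources, shape (n_samples, 2)

# Two "microphones", each hearing a different linear mix of the sources.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # mixing matrix
X = S @ A.T                         # observed mixtures, shape (n_samples, 2)

# FastICA recovers the sources up to permutation and scaling.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)        # estimated sources, shape (n_samples, 2)
```

Note that this classical route needs at least as many microphones as sources, which is exactly the limitation the FB model sidesteps (see yodon's comment below).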
boublepop almost 5 years ago
I feel that they are underplaying just how big a deal this would be in hearing aids. It's not just a case of "slightly better noise filtering"; for some it is the difference between being able to go to social events or not. For a large group of people using hearing aids, the cocktail party effect means they can't hear anything at all in social settings, so they avoid them completely because of the negative effects that come from everyone assuming you're able to follow group conversations when you're in fact sitting in your own little bubble, only able to pick up what's going on when someone semi-yells directly at you.

In any case, the box you'd be selling them this product in wouldn't say "better sound"; it would say: "Get back your ability to attend and enjoy parties, enjoy group conversations and socialize unencumbered". That's a huge quality-of-life improvement.

You still have the issue of how to figure out which voices to boost and which to reduce, but I'd expect that to be a simpler issue of using multiple receivers and directional detection.
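A sketch of what that multi-receiver directional detection could look like, assuming two microphones a fixed distance apart and 16 kHz audio (the parameters are mine, purely for illustration): a delay-and-sum beamformer, where delaying one channel before averaging reinforces sound arriving from the steered angle and partially cancels the rest. A toy, not a hearing-aid algorithm.

```python
# Toy delay-and-sum beamformer: with two microphones a known distance
# apart, delaying one channel steers the "listening" direction.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.15       # metres between the two microphones (assumed)
FS = 16_000              # sample rate in Hz (assumed)

def delay_and_sum(left: np.ndarray, right: np.ndarray, angle_deg: float) -> np.ndarray:
    """Steer toward angle_deg (0 = straight ahead) by delaying one channel."""
    # Extra path length to the far microphone for a source at this angle.
    delay_s = MIC_SPACING * np.sin(np.deg2rad(angle_deg)) / SPEED_OF_SOUND
    shift = int(round(delay_s * FS))  # integer-sample delay, for simplicity
    if shift >= 0:
        right = np.roll(right, shift)
    else:
        left = np.roll(left, -shift)
    # The in-phase (steered) source adds coherently; off-axis sound partially cancels.
    return 0.5 * (left + right)
```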
yodon almost 5 years ago
Facebook's work on separating multiple sources in an audio stream is fundamentally different from prior ICA-based methods of Blind Source Separation [0], in ways that are both interesting and seem to be part of a broader trend at FB Research.

ICA-based BSS requires at least n microphones to separate n sources of sound. This work does the separation with one microphone.

What makes this more broadly interesting is that FB Research has separately developed the capability to reconstruct full 3D models from single-image photos [1].

Both of these reconstruct-from-single-sensor problems are MUCH harder than their associated reconstruct-from-multiple-sensors variants (ICA in the case of audio; stereo separation or photogrammetry in the case of video), so they aren't efforts one undertakes casually.

The obvious motivation for this single-sensor approach is augmenting existing video and audio clips, most of which are single camera, single microphone (or very closely spaced stereo microphones with minimal separation), and massive numbers of which people have already uploaded to Facebook.

The more interesting motivation could be that FB (Oculus) is widely believed to be developing next-generation AR or VR glasses. Most of the discussion around AR/VR headsets focuses on the displays, but if you wanted to keep both your physical size and hardware parts cost to an absolute minimum, one of the things you'd want to minimize is your sensor count.

FB Research seems to have a strong interest in things that reduce the number of sensors required to provide high-grade AR/VR experiences, and that make it possible to explore pre-existing conventional media in spatialized 3D contexts.

[0] https://en.m.wikipedia.org/wiki/Independent_component_analysis

[1] https://ai.facebook.com/blog/facebook-research-at-cvpr-2020/
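For orientation, the single-microphone deep-learning recipe that much of this literature uses predicts one soft mask per speaker over a learned encoding of the mixture waveform. Below is a minimal, generic PyTorch sketch of that mask-based idea; the architecture and sizes are my assumptions for illustration, not the actual FB model.

```python
# Minimal sketch of single-channel, mask-based speaker separation:
# an encoder turns the waveform into a feature sequence, an RNN predicts
# one soft mask per speaker, and masked features are decoded back to
# audio. Layer sizes are illustrative, not those of the FB model.
import torch
import torch.nn as nn

class MaskSeparator(nn.Module):
    def __init__(self, n_speakers: int = 2, n_filters: int = 256, kernel: int = 16):
        super().__init__()
        self.n_speakers = n_speakers
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=kernel // 2)
        self.rnn = nn.LSTM(n_filters, n_filters, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.mask_head = nn.Linear(2 * n_filters, n_speakers * n_filters)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=kernel // 2)

    def forward(self, mix: torch.Tensor) -> torch.Tensor:
        # mix: (batch, 1, samples) -> features: (batch, n_filters, frames)
        feats = torch.relu(self.encoder(mix))
        h, _ = self.rnn(feats.transpose(1, 2))       # (batch, frames, 2*n_filters)
        masks = torch.sigmoid(self.mask_head(h))     # (batch, frames, spk*n_filters)
        masks = masks.view(*h.shape[:2], self.n_speakers, -1).permute(0, 2, 3, 1)
        # Apply one mask per speaker and decode each back to a waveform.
        out = [self.decoder(feats * masks[:, s]) for s in range(self.n_speakers)]
        return torch.stack(out, dim=1)               # (batch, n_speakers, 1, samples)

mix = torch.randn(1, 1, 16000)            # one second of fake 16 kHz audio
est = MaskSeparator(n_speakers=2)(mix)    # -> (1, 2, 1, 16000)
```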
thaumasiotes almost 5 years ago
This is a really interesting problem to work on. A couple of obvious points:

1. This is a task that humans must do *all the time*. It's very important in all kinds of different circumstances.

2. This is also a task that humans find very difficult. It's not like recognizing someone by their face, where humans do it effortlessly but struggle to describe how. We frequently fail at this.

Combining (1) and (2), and the assumption that this task has been just as important historically as it still is now, we might conclude that this is a *really hard problem* and AI is unlikely to reach the level of performance we might hope for.

And if AI quickly jumps to superhuman levels of performance, that too would have many interesting implications.
ComputerGuru almost 5 years ago
Not an expert in this domain, but I'm not sure this can be done (well) without a physical component.

Recent studies have shown that we can consciously and subconsciously physically manipulate the position and directionality of our outer ear and some of the mechanics in the inner ear to "zero in" on noises and affect the frequency response of the ear. Our ears move imperceptibly when we look from side to side, to synchronize what we hear with what we see. Try listening to what one person in a busy room is saying, then try doing the same while looking somewhere else.

There is hardware actively filtering out interfering sounds based on location and frequency, and then there's the wetware that further processes the incoming signals and attempts to strip unwanted noise. I don't believe the second can be done effectively without a feedback loop to the first.
Yhippa almost 5 years ago
The "Why it matters" section is interesting. Cynically, I'm trying to think of commercial uses of this for FB. I'm thinking if you built a device that you could put into public places, restaurants, or stores:

* People could order food from their table without summoning a server. I guess some restaurants have tablets or other devices at their table, but it seems to break immersion if you're enjoying your company.

* In a big-box store, someone could come help you where you are, without the store having to have workers roam around while you hope you run into one.

* Fingerprint people in public or private for targeted advertising.
iandanforth almost 5 years ago
The assumption that this is possible comes from our ability to isolate voices in a crowd by paying attention to one or more of them. However, our ability to do so rests on two important factors that don't exist in these datasets: 1. we have two ears to allow for sound localization, and 2. the sounds we distinguish are co-located in space, allowing us to use ambient information for disambiguation.

This means that the problem being solved here is *harder* than the natural problem we have evolved and learned to solve.

This is both impressive and possibly problematic. Some feature of training in a goal-directed fashion in naturalistic environments could be essential for higher-quality speaker isolation, or it might not matter at all. The multiplicity-of-models phenomenon tells us there are likely many solutions to this problem.
fredmonroe almost 5 years ago
I'm excited that FB develops and shares their research, and simultaneously terrified of what they will do with it given past behavior.

It's very disconcerting - I feel this way every time I use PyTorch - which I love.
atum47 almost 5 years ago
Nice, now Facebook can spy on several people at once.