
Meta AI announces Massive Multilingual Speech code, models for 1000+ languages

705 points by crakenzak, almost 2 years ago

38 comments

crakenzak, almost 2 years ago

Code: https://github.com/facebookresearch/fairseq/tree/main/examples/mms

Blog post: https://ai.facebook.com/blog/multilingual-model-speech-recognition/

Paper: https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/

Language coverage: https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html
qwertox, almost 2 years ago

I would like to use stuff like this as a side project: buy an Nvidia GeForce GPU, stick it into my 24/7 server, and play around with it in my free time to see what can be done.

The issue with all these AI models is that there's no information on which GPU is enough for which task. I'm absolutely clueless whether a single RTX 4000 SFF with its 20GB VRAM and only 70W of max power usage would be a waste of money, or really something great to do experiments on. Like do some ASR with Whisper, images with Stable Diffusion, or load an LLM onto it, or this project here from Facebook.

Renting a GPU in the cloud doesn't seem to be a solution for this use case, where you just want to let something run for a couple of days and see if it's useful for something.
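In the absence of official guidance, one crude sanity check for a card like that is weights-only arithmetic: parameter count times bytes per parameter gives a lower bound on VRAM. A minimal sketch (the parameter counts and dtypes below are illustrative assumptions, not measured figures; real inference adds activations and framework overhead on top):

```python
# Rough back-of-envelope VRAM estimate for inference. This is only the
# memory needed to hold the weights; actual usage is higher, so treat
# the results as lower bounds.

def inference_vram_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GB of VRAM needed just to hold the model weights."""
    return num_params * bytes_per_param / 1024**3

# MMS's largest checkpoint is ~1B parameters; in fp16 that is under 2 GB
# of weights, comfortably within a 20 GB card.
mms_1b = inference_vram_gb(1e9)

# A hypothetical 13B-parameter LLM in fp16 needs ~24 GB -- too big for
# 20 GB without quantization; in 8-bit it drops to ~12 GB.
llm_13b_fp16 = inference_vram_gb(13e9)
llm_13b_int8 = inference_vram_gb(13e9, bytes_per_param=1)

print(f"{mms_1b:.1f} {llm_13b_fp16:.1f} {llm_13b_int8:.1f}")  # 1.9 24.2 12.1
```

The same arithmetic explains why Whisper and Stable Diffusion (both well under 2B parameters) fit easily on such a card, while larger LLMs are the borderline case.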
archon1410, almost 2 years ago

ASR: "Automatic Speech Recognition"; also known as "Speech to Text" (STT)

TTS: "Text to Speech"

LID: "Language Identification"

In case anyone else was confused about what the acronyms mean.
armatav, almost 2 years ago

Imagine if we used these types of models for like 500 years and it locked their vocabulary in time, disallowing any further language blending; then somehow the servers turned off and nobody could communicate across language barriers anymore.

Someone should write that down in some sort of short story involving a really tall structure.
eigenvalue, almost 2 years ago

I just wanted to test out the TTS locally on a powerful Ubuntu 22.04 machine, but the process for setting it up seems pretty broken and poorly documented. After 20 minutes of trying I finally gave up, since I couldn't get the VITS dependency to build (despite having a fully updated machine with all required compilers). It seems like they never really bother to check whether the stuff works on a fresh machine starting from scratch. Somehow for my own projects I'm always able to start from a fresh git clone and then directly install everything using this block of code:

    python3 -m venv venv
    source venv/bin/activate
    python3 -m pip install --upgrade pip
    python3 -m pip install wheel
    pip install -r requirements.txt

But whenever I try using these complicated ML models, it's usually an exercise in futility and endless mucking around with conda and other nonsense. It ends up not being worth it and I just move on. But it does feel like it doesn't need to be like this.
m3kw9, almost 2 years ago

The problem with all these model releases is that they have no demos, or even a video of it working. It's all just "download it and run it", like it's an app.
omneity, almost 2 years ago

I was super excited at this, but digging through the release [0] one can see the following [1]. While using Bible translations is indeed better than nothing, I don't think the stylistic choices in the Bible are representative of how people actually speak the language, in any of the languages I can speak (i.e. that I am able to evaluate personally).

Religious recordings tend to be liturgical, so even the pronunciation might be different from the everyday language. They do address something related, although more from a vocabulary perspective to my understanding [2].

So one of their stated goals, to enable people to talk to AI in their preferred language [3], might be closer, but is certainly a stretch to achieve with their chosen dataset.

[0]: https://about.fb.com/news/2023/05/ai-massively-multilingual-speech-technology/amp/

[1]: > These translations have publicly available audio recordings of people reading these texts in different languages. As part of the MMS project, we created a dataset of readings of the New Testament in more than 1,100 languages, which provided on average 32 hours of data per language. By considering unlabeled recordings of various other Christian religious readings, we increased the number of languages available to more than 4,000. While this data is from a specific domain and is often read by male speakers, our analysis shows that our models perform equally well for male and female voices. And while the content of the audio recordings is religious, our analysis shows that this doesn't bias the model to produce more religious language.

[2]: > And while the content of the audio recordings is religious, our analysis shows that this doesn't bias the model to produce more religious language.

[3]: > This kind of technology could be used for VR and AR applications in a person's preferred language and that can understand everyone's voice.
pruthvishetty, almost 2 years ago

This looks huge. Anyone know how this compares with Whisper in terms of quality and speed?
echelon, almost 2 years ago

> The MMS code and model weights are released under the CC-BY-NC 4.0 license.

Huge bummer. Prevents almost everyone from using this and recouping their costs.

I suppose motivated teams could reproduce the paper in a clean room, but that might also be subject to patents.
rvz, almost 2 years ago

So many so-called overnight AI gurus are hyping their snake-oil products and screaming 'Meta is dying' [0] and 'It is over for Meta', but few of them actually do research in AI and drive the field forward. This once again shows that Meta has always been a consistent contributor to AI research, especially in vision systems.

All we can do is take, take, take the code. But this time, the code's license is CC-BY-NC 4.0. Which simply means:

Take it, but no grifting allowed.

[0] https://news.ycombinator.com/item?id=31832221
OkGoDoIt, almost 2 years ago

According to [1] on the accompanying blog post, this brings Whisper's 44.3 WER down to 18.7, although it's unclear to me how much better this is at primarily English speech recognition. I'd love to see a full comparison of accuracy improvements, as well as a proper writeup of how much more power it takes to run this in production or on mobile vs. something like Whisper.

[1]: https://scontent-sjc3-1.xx.fbcdn.net/v/t39.8562-6/346801894_261088246476193_7395499802717483754_n.png?_nc_cat=103&ccb=1-7&_nc_sid=6825c5&_nc_ohc=vHCf-i1COVcAX9ryJQX&_nc_ht=scontent-sjc3-1.xx&oh=00_AfAXj-l6r2rNadAc_0aMqQTpcUS_FrXzoO9Otxx_XglqXg&oe=6471A113
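For readers unfamiliar with the metric behind those 44.3 vs. 18.7 numbers: WER (word error rate) is word-level edit distance (substitutions + insertions + deletions) divided by the number of words in the reference transcript. A minimal sketch (the example sentences are invented, not model output):

```python
# Word Error Rate via classic Levenshtein dynamic programming over words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on the mat"))  # 0.0
print(wer("the cat sat on the mat", "the cat sat on a hat"))    # 2 errors / 6 words
```

Note that WER can exceed 1.0 on bad transcripts (more edits than reference words), which is why averaged scores across 1,000+ low-resource languages can look so high.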
freediver, almost 2 years ago

Come to think of it, Meta is a much better name for an AI company than a VR company.
reaperman, almost 2 years ago

I assume this "competes" directly with https://sites.research.google/usm/ -- would be cool to see side-by-side benchmarks sometime! Maybe I should make those. I requested access to USM but have not been granted any access yet.
PufPufPuf, almost 2 years ago

1,107 languages but no Czech or Slovak? Many languages with way fewer speakers made it to the list. I wonder what we did to Meta...
og_kalu, almost 2 years ago

Meta is on a roll. Any demo of how good the text to speech is?
cleverwebble, almost 2 years ago

Wow, I didn't even know there were 7,000 documented languages in the world!
sarabande, almost 2 years ago

I'm trying to use this on a 3M mp3 file to test ASR with language code deu, CPU only, and I keep getting this error -- are there limits to the MMS inference?

    File "fairseq/data/data_utils_fast.pyx", line 30, in fairseq.data.data_utils_fast.batch_by_size_vec
        assert max_tokens <= 0 or np.max(num_tokens_vec) <= max_tokens, (
    AssertionError: Sentences lengths should not exceed max_tokens=4000000
    Traceback (most recent call last):
      File "/home/xxx/fairseq/examples/mms/asr/infer/mms_infer.py", line 52, in <module>
        process(args)
      File "/home/xxx/fairseq/examples/mms/asr/infer/mms_infer.py", line 44, in process
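The assertion suggests the whole file is being batched as a single "sentence" longer than max_tokens audio samples. One possible workaround (a sketch, not something from the MMS docs: the 4,000,000-sample figure comes from the error above, and the splitting shown is generic sample arithmetic, not a fairseq API) is to cut the audio into chunks under the limit and run inference per chunk:

```python
# Split a long audio signal into chunks that each stay under the
# max_tokens limit from the assertion (4,000,000 samples). Actually
# decoding the mp3 and invoking mms_infer.py on each chunk is left out;
# this only computes the chunk boundaries.

MAX_TOKENS = 4_000_000

def chunk_bounds(num_samples: int, max_tokens: int = MAX_TOKENS):
    """Return (start, end) sample ranges, each at most max_tokens long."""
    return [(start, min(start + max_tokens, num_samples))
            for start in range(0, num_samples, max_tokens)]

# A 3-minute mono file at 16 kHz is 2,880,000 samples -> one chunk;
# a 10-minute file is 9,600,000 samples -> three chunks.
print(chunk_bounds(2_880_000))
print(chunk_bounds(9_600_000))
```

Splitting at fixed sample offsets can cut words in half; splitting on silence (e.g. with a VAD) would be gentler, at the cost of more code.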
rllearneratwork, almost 2 years ago

A real *Open* AI lab.
feim_2022, almost 2 years ago

Wonder how this compares with Deepgram's offering. Has anyone used/tried/compared them, or even read enough literature to compare? The WER rates shown by Deepgram are still better than the largest MMS model, and their use-case-specific fine-tuned models (Zoom meetings, financial calls, etc.) probably make a bigger difference. WDYT?
neycoda, almost 2 years ago

Great, now the Terminators will be barking orders at each other in languages I can't understand.
sacnoradhq, almost 2 years ago

FYI: Another round of massive layoffs at Meta this Wednesday. Stay-at-home lockdown ensues.
dsrtslnd23, almost 2 years ago

I tried the English TTS example and the result is quite underwhelming (compared to Bark or Polly/Azure TTS). It sounds like the TTS systems of one or two decades ago. Would those language-specific TTS models need to be fine-tuned?
EvgeniyZh, almost 2 years ago

Most of the languages support only the LID (language identification) task. Still impressive.
ripvanwinkle, almost 2 years ago

Anyone know what hardware it takes to run this? Asking as an enthusiastic newbie.
vlugorilla, almost 2 years ago

Would it be possible to have this in something "a la" whisper.cpp?
Simon_O_Rourke, almost 2 years ago

Maybe one of those models has figured out a way to tell Zuck that the whole Metaverse concept is nonsense. Hopefully it'll be graceful about letting him down.
lairv, almost 2 years ago

I don't have much knowledge about TTS models. Is it possible/affordable to fine-tune those models on your own voice?
gagabity, almost 2 years ago

I just want to translate a movie's audio from one language to another. What's the easiest way to do this at home?
2Gkashmiri, almost 2 years ago

I checked the language coverage for "kashmiri":

https://www.ethnologue.com/language/kas/

"Himachal Pradesh state:"

This is obviously wrong, so I don't know what else is wrong.
leke, almost 2 years ago

I was kind of hoping for Interlingue, but was surprised to not even see Esperanto on the list.
alienlid, almost 2 years ago

If I have an extra MBP 16" 2020 hanging around (16GB RAM, quad-core i7)... can I run this? I'd like to try the TTS capabilities! LMK if you've got any guides or instructions online I can check out :)
oars, almost 2 years ago

Some interesting links and tools in this thread, e.g. datasette.io
sigstoat, almost 2 years ago

Can any of these models be coerced into just doing straight phonetic transcription? Like spitting out IPA?
richard___, almost 2 years ago

Meta is doing more for open AI than OpenAI.
throwme_123, almost 2 years ago

fairseq is a fairly missed naming opportunity. C-3PO would have been better.
stanislavb, almost 2 years ago

I still hate the name "Meta". You?
kaycey2022, almost 2 years ago

Why is Meta open-sourcing their AI work like this? Is it because they don't have a great reputation, even among tech companies?
egberts1, almost 2 years ago

Does it do American Sign Language, the fifth largest language in the US?

I didn't think so.