科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global technology news and discussion.


Voder Speech Synthesizer

253 points | by CyborgCabbage | almost 2 years ago

18 comments

bradrn · almost 2 years ago
A short explanation of how this works:

The voice can be modeled using two main components. The vocal cords are a periodic source of sound, which is then filtered by the mouth and tongue to produce vowel sounds [0]. The filter can be modeled as a set of band-pass filters, each of which lets through a specific band of frequencies — these are called ‘formants’ in acoustic phonetics. Different vowel sounds are produced by combining formants at different pitches in a systematic way [1]. You can hear this yourself by very slowly moving your mouth from saying an ‘eeeee’ sound to an ‘ooooo’ sound: if you listen carefully, you can hear one formant changing pitch while the others stay the same. (I like [2] as an intro to this kind of stuff.)

The ‘voder’ works by having one key for each band-pass filter, each covering a different frequency band. Pressing multiple keys adds the resulting sounds, producing an output sound with distinct formants. If you use the right formants, the resulting sound is very similar to that produced by a human mouth saying a specific vowel! Software such as the vowel editor in Praat [3] takes it further, by allowing selection of formants from a standard vowel chart.

[0] Consonantal sounds are a bit more complicated, since they tend to involve various different noise sources and transient disturbances of the sound. For instance, /ʃ/ (the ‘sh’ sound) is noise of a lower frequency than /s/. I can’t work out how Harper produced the difference between those two sounds in the video — it seems to be impossible to do this with the live demo. In fact, any sort of pitch control seems to be impossible in the demo.

[1] This is how overtone singing and throat singing work! Selectively amplifying one formant gives the impression that you’re singing that note at the same time as the ‘base’ pitch. In fact, if you do that, your vocal cords are producing a pitch plus all its overtones, while your mouth is enhancing one overtone while filtering out all the others.

[2] https://newt.phys.unsw.edu.au/jw/voice.html

[3] https://www.fon.hum.uva.nl/praat/ — probably also available from your favourite Linux distro!
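The source-plus-band-pass model described above can be sketched in a few lines of plain JavaScript. This is not the Voder's or the linked demo's actual code: the formant centers (roughly those of the vowel /ɑ/), the fixed Q of 5, the 110 Hz source pitch, and the 16 kHz sample rate are all illustrative assumptions, and the filter is the standard RBJ-cookbook band-pass biquad.

```javascript
// Band-pass biquad (RBJ cookbook, constant 0 dB peak gain), direct form I.
function bandpass(input, centerHz, q, sampleRate) {
  const w0 = (2 * Math.PI * centerHz) / sampleRate;
  const alpha = Math.sin(w0) / (2 * q);
  const a0 = 1 + alpha;
  const b0 = alpha / a0;          // b1 is 0 for this filter shape
  const b2 = -alpha / a0;
  const a1 = (-2 * Math.cos(w0)) / a0;
  const a2 = (1 - alpha) / a0;
  let x1 = 0, x2 = 0, y1 = 0, y2 = 0;
  return input.map((x) => {
    const y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2;
    x2 = x1; x1 = x; y2 = y1; y1 = y;
    return y;
  });
}

// Periodic "vocal cord" source: a sawtooth, rich in overtones.
function sawtooth(freqHz, seconds, sampleRate) {
  const n = Math.floor(seconds * sampleRate);
  return Array.from({ length: n }, (_, i) => {
    const phase = ((i * freqHz) / sampleRate) % 1;
    return 2 * phase - 1;
  });
}

// A vowel is the same source shaped by several formant band-pass filters,
// then summed — like pressing several Voder keys at once.
function vowel(formantsHz, sampleRate = 16000) {
  const src = sawtooth(110, 0.25, sampleRate);
  const bands = formantsHz.map((f) => bandpass(src, f, 5, sampleRate));
  return src.map((_, i) => bands.reduce((sum, b) => sum + b[i], 0));
}

const ah = vowel([730, 1090, 2440]); // rough textbook formants for /ɑ/
```

Writing the result to a WAV file (or a Web Audio buffer) and swapping in different formant triples is enough to hear recognizably different vowel qualities.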
jvm___ · almost 2 years ago
Someone was selling a vocoder on eBay, so they made a video of the vocoder describing its own selling features.

https://www.youtube.com/watch?v=5kc-bhOOLxE
kypro · almost 2 years ago
This is quite off topic, but it reminded me of something I have been thinking about recently – perhaps at the limit all highly capable narrow AI systems must become generally intelligent.

I was thinking about the complexity of expression in TTS voice synthesizers recently and it struck me just how difficult a problem that is.

To be as expressive as a human, the AI model would need to fully "understand" the context of what is being said. Consider how a phrase like "I hate you" can be said in a loving way between friends sharing a joke at each other's expense, vs being said with anger or in sadness.

It got me wondering if all sufficiently complex problems require models to be generally intelligent – at least in the sense that they have deep, nuanced models of the world.

For example, perhaps for a self-driving car to be as "good" as a human it actually needs to be generally intelligent, in that it needs to understand that it's appropriate to drive differently in an emergency situation vs a leisurely weekend drive through a scenic part of town. When driving through my city after 8PM on the weekend I tend to drive slower and more cautiously because I know drunk people often walk out in front of my car – would a good self-driving car not need to understand these nuances of the world too?

This is interesting because it highlights just how important human understanding is to accurately conveying expression in a voice synthesizer. While I'd argue modern voice synthesizers have been more intelligible than this for some time, the expressiveness of this machine has probably only recently been rivalled by state-of-the-art AI models.
JKCalhoun · almost 2 years ago
I was skeptical that you could even *type* an intelligible phonetic "She saw me" with only two phonemes, let alone give it the rise and fall demonstrated.

I've played with the SP0256 speech synthesis IC and found constructing intelligible words challenging even with all the phonemes available on that silicon.

This extended video has me thinking it probably was legit, though:

https://youtu.be/TsdOej_nC1M
bsza · almost 2 years ago
Wolfgang von Kempelen (creator of the fake chess automaton known as the Turk) made a similar thing in the 18th century [0]. It had multiple reeds tuned to the same frequency - conceptually similar to the Voder. It might not be a coincidence that Bell Labs developed this, given that Bell himself had also made attempts to improve the design, which is how he ended up inventing the telephone.

[0] https://en.wikipedia.org/wiki/Wolfgang_von_Kempelen%27s_speaking_machine
joezydeco · almost 2 years ago
The Voder was part of a much larger Bell Labs project, one that eventually developed into one of the first unbreakable encrypted telephony systems used in World War II.

https://99percentinvisible.org/episode/vox-ex-machina/
slmnsmk · almost 2 years ago
Re: the author of that -

Hey, I know the person who made this!

Thanks for sharing, it really was a labor of love. I remember Griffin being super excited about how it turned out. They are really passionate about the world's fair!
lacrimacida · almost 2 years ago
The intonation is very good in a way that modern speech synthesizers don’t get quite right.
userbinator · almost 2 years ago
Another vocal-tract-model synth that showed up on HN a while ago: https://news.ycombinator.com/item?id=18912628
jcpst · almost 2 years ago
I have wanted to try one of these. The playable soft-synth is great.
lbriner · almost 2 years ago
Something somebody told me: it seems really amazing, but without the host prompting the listener with the phrase "she saw me", most of the time you wouldn't know what it was saying.

I heard a sample of "Say, good afternoon radio audience", then the Voder produces something very similar, but listen to it without the prompt and you would have to guess what it meant.

A Derren Brown kind of trick :-)
mwcampbell · almost 2 years ago
I first heard the Voder as the first sample on the Klatt Record [1]. Unfortunately, there it's credited solely to Homer Dudley; neither Bell Telephone Laboratories nor the women like Helen Harper who operated the machine are mentioned.

[1]: http://www.festvox.org/history/klatt.html
JoeDaDude · almost 2 years ago
I've been interested in how these were actually played. If anyone has access to the material used to train the operators, I'd love to hear about it.

BTW, there was one fellow who built one, something I'd like to try someday. See his recreation here:

https://www.youtube.com/watch?v=gv9m0Z7mhXY
colanderman · almost 2 years ago
The recently released Soma Terra synthesizer contains a key-per-formant synthesis mode which operates like the Voder: https://somasynths.com/terra/ (Ctrl+F "Voder" in the manual)
Minor49er · almost 2 years ago
This is a cool page. I like the interactive synthesizer, but the unvoiced noise is too sharp. It sounds like white noise rather than pink noise or similar, which would be more accurate to how humans sound.
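For context on that point: white noise has equal energy per hertz, while pink noise rolls off at higher frequencies (equal energy per octave), which is closer to breathy human "hiss". A cheap, widely used way to approximate pink noise is Paul Kellet's cascade of leaky integrators; the coefficients below are from that well-known recipe, and the 0.11 output scale is an assumption to keep samples roughly in range, not a value from this page's code.

```javascript
// White noise: uniform samples in [-1, 1], flat spectrum.
function whiteNoise(n) {
  return Array.from({ length: n }, () => Math.random() * 2 - 1);
}

// Pink-ish noise via Paul Kellet's filter: several one-pole low-pass
// stages with different time constants, summed, fed by white noise.
function pinkNoise(n) {
  let b0 = 0, b1 = 0, b2 = 0, b3 = 0, b4 = 0, b5 = 0, b6 = 0;
  return whiteNoise(n).map((white) => {
    b0 = 0.99886 * b0 + white * 0.0555179;
    b1 = 0.99332 * b1 + white * 0.0750759;
    b2 = 0.969 * b2 + white * 0.153852;
    b3 = 0.8665 * b3 + white * 0.3104856;
    b4 = 0.55 * b4 + white * 0.5329522;
    b5 = -0.7616 * b5 - white * 0.016898;
    const pink = b0 + b1 + b2 + b3 + b4 + b5 + b6 + white * 0.5362;
    b6 = white * 0.115926; // one-sample delay term
    return pink * 0.11;    // rough normalization (assumed scale)
  });
}

const hiss = pinkNoise(10000);
```

Swapping a source like this in for a raw white-noise buffer is one way to soften the "sh"/"ss" keys the commenter is describing.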
fbdab103 · almost 2 years ago
What is the best available open source option today for TTS?
chaosprint · almost 2 years ago
Interesting! Thanks for sharing.

I have added this to my feature list for https://glicol.org

The source code looks fairly straightforward. Very cool.

```js
function makeFormantNode(ctx, f1, f2) {
  const sinOsc = ctx.createOscillator();
  sinOsc.type = 'sawtooth';
  sinOsc.frequency.value = 110;
  sinOsc.start();

  const bandPass = ctx.createBiquadFilter();
  bandPass.type = 'bandpass';
  bandPass.frequency.value = (f1 + f2) / 2;
  bandPass.Q.value = ((f1 + f2) / 2) / (f2 - f1);

  const gainNode = ctx.createGain();
  gainNode.gain.value = 0.0;

  sinOsc.connect(bandPass);
  bandPass.connect(gainNode);
  gainNode.connect(ctx.destination);

  return {
    start() { gainNode.gain.setTargetAtTime(0.75, ctx.currentTime, 0.015); },
    stop() { gainNode.gain.setTargetAtTime(0.0, ctx.currentTime, 0.015); },
    panic() {
      gainNode.gain.cancelScheduledValues(ctx.currentTime);
      gainNode.gain.setTargetAtTime(0, ctx.currentTime, 0.015);
    },
  };
}

function makeSibilanceNode(ctx) {
  const buffer = ctx.createBuffer(1, NOISE_BUFFER_SIZE, ctx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < NOISE_BUFFER_SIZE; ++i) {
    data[i] = Math.random();
  }

  const noise = ctx.createBufferSource();
  noise.buffer = buffer;
  noise.loop = true;

  const noiseFilter = ctx.createBiquadFilter();
  noiseFilter.type = 'bandpass';
  noiseFilter.frequency.value = 5000;
  noiseFilter.Q.value = 0.5;

  const noiseGain = ctx.createGain();
  noiseGain.gain.value = 0.0;

  noise.connect(noiseFilter);
  noiseFilter.connect(noiseGain);
  noiseGain.connect(ctx.destination);
  noise.start();

  return {
    start() { noiseGain.gain.setTargetAtTime(0.75, ctx.currentTime, 0.015); },
    stop() { noiseGain.gain.setTargetAtTime(0.0, ctx.currentTime, 0.015); },
    panic() {
      noiseGain.gain.cancelScheduledValues(ctx.currentTime);
      noiseGain.gain.setTargetAtTime(0, ctx.currentTime, 0.015);
    },
  };
}

function initialize() {
  audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  audioNodes['a'] = makeFormantNode(audioCtx, 0, 225);
  audioNodes['s'] = makeFormantNode(audioCtx, 225, 450);
  audioNodes['d'] = makeFormantNode(audioCtx, 450, 700);
  audioNodes['f'] = makeFormantNode(audioCtx, 700, 1000);
  audioNodes['v'] = makeFormantNode(audioCtx, 1000, 1400);
  audioNodes['b'] = makeFormantNode(audioCtx, 1400, 2000);
  audioNodes['h'] = makeFormantNode(audioCtx, 2000, 2700);
  audioNodes['j'] = makeFormantNode(audioCtx, 2700, 3800);
  audioNodes['k'] = makeFormantNode(audioCtx, 3800, 5400);
  audioNodes['l'] = makeFormantNode(audioCtx, 5400, 7500);
  audioNodes[' '] = makeSibilanceNode(audioCtx);
}
```
zzzeek · almost 2 years ago
Only a woman could operate the machine, yet it was built to create a man's voice.