
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Analyzing latent embedding capacity in Tacotron, Google's seq2seq TTS model

6 points | by daisystanton | almost 6 years ago
Research paper: https://arxiv.org/abs/1906.03402

Audio examples: https://google.github.io/tacotron/publications/capacitron/

Capacitron is the Tacotron team's most recent contribution to the world of expressive end-to-end speech synthesis (e.g., transfer and control of prosody and speaking style). Our previous Style Tokens and prosody transfer work implicitly controls reference embedding capacity by modifying the encoder architecture, thereby targeting a trade-off between text-specific transfer fidelity and text-agnostic style generality. Capacitron treats embedding capacity as a first-class citizen by targeting a specific value for the representational mutual information via a variational information bottleneck.

We also show that by modifying the stochastic reference encoder to match the form of the true latent posterior, we can achieve high-fidelity prosody transfer, text-agnostic style transfer, and natural-sounding prior samples in the same model. The modified encoder also addresses the pitch-range preservation problems we observed during inter-speaker transfer in our past work.

Lastly, we show that the capacity of the embedding can be decomposed hierarchically, allowing us to control the amount of sample-to-sample variation for transfer use cases.

To appreciate the results fully, we recommend listening to the audio examples in conjunction with reading the paper.
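As a rough illustration of the idea (not the paper's actual implementation, and all names below are hypothetical), targeting a specific capacity with a variational information bottleneck can be sketched as a Lagrangian objective: the KL term of a VAE is pushed toward a chosen capacity target (in nats) rather than simply minimized, with the multiplier adjusted by dual ascent:

```python
import math

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, logvar))

def capacity_constrained_loss(recon_loss, mu, logvar, lam, capacity,
                              lam_lr=0.01):
    """One step of a capacity-targeted bottleneck objective (a sketch).

    Minimizes recon_loss + lam * (KL - capacity); the multiplier lam is
    raised by dual ascent whenever KL exceeds the target capacity and
    lowered (but clipped at zero) when KL falls below it.
    """
    kl = gaussian_kl(mu, logvar)
    loss = recon_loss + lam * (kl - capacity)
    # Dual ascent on the Lagrange multiplier, kept non-negative.
    new_lam = max(0.0, lam + lam_lr * (kl - capacity))
    return loss, kl, new_lam
```

The point of the construction is that the bound on the KL term upper-bounds the mutual information between the input and the latent embedding, so choosing `capacity` directly trades off transfer fidelity against style generality.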

1 comment

PaulHoule | almost 6 years ago
When you post it this way we can't click on the link.

Just post a link to the paper and then we can discuss it.