
Federated finetuning of Whisper on Raspberry Pi 5

90 points by danieljanes, over 1 year ago

4 comments

filterfiber, over 1 year ago
I don't think the article mentions it: how well do the RPi 4 and 5 do for inference with Whisper, especially v3?
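For a rough baseline, one way to time on-device Whisper inference is with the open-source openai-whisper Python package. This is a minimal, hypothetical sketch, not from the article; the "tiny" model and "sample.wav" file are placeholders.

# Rough, hypothetical timing check for Whisper inference on-device (e.g. an RPi).
# "tiny" and "sample.wav" are placeholders; large-v3 will be far slower on a Pi.
import time

import whisper  # pip install openai-whisper

model = whisper.load_model("tiny")       # load weights onto the CPU
start = time.time()
result = model.transcribe("sample.wav")  # run the full transcription pipeline
elapsed = time.time() - start

print(result["text"])
print(f"transcription took {elapsed:.1f}s")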
ulnarkressty, over 1 year ago
How would this actually work in practice? Do I ask the user to utter specific words then train on that? How is it different from the traditional speech recognition that I need to 'train' to work better on my voice?

The Holy Grail would be to train the model while using it, without any friction. I don't think these methods support that though.
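For context, federated fine-tuning frameworks such as Flower expose a client API that runs local training steps on-device and only sends weight updates to an aggregation server, so raw audio never leaves the device. Below is a minimal, hypothetical client sketch under that assumption; the local training step and dataset are placeholders, and none of it is taken from the article.

# Hypothetical sketch of a federated client for local Whisper fine-tuning (e.g. with Flower).
# The local training step is a placeholder; nothing here is taken from the article.
import flwr as fl
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

def get_weights(m):
    # Export the model's state_dict as a list of NumPy arrays for the server.
    return [v.cpu().numpy() for v in m.state_dict().values()]

def set_weights(m, weights):
    # Load the globally aggregated weights back into the local model.
    keys = m.state_dict().keys()
    m.load_state_dict({k: torch.tensor(w) for k, w in zip(keys, weights)})

class WhisperClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return get_weights(model)

    def fit(self, parameters, config):
        set_weights(model, parameters)
        # Placeholder: run a few gradient steps on audio recorded on this device.
        num_examples = 1
        return get_weights(model), num_examples, {}

    def evaluate(self, parameters, config):
        set_weights(model, parameters)
        # Placeholder: report loss / word error rate on held-out local audio.
        return 0.0, 1, {}

if __name__ == "__main__":
    # Server address is an assumption; the aggregation server runs on another machine.
    fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=WhisperClient())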
saqadri, over 1 year ago
This is cool. This might be a silly question, but what are the scenarios where it's useful to fine-tune on the edge with small devices? I get inference on the edge, and I'm curious about metrics on that for Whisper, but isn't it better to fine-tune on beefier infrastructure and then deploy it for inference on the edge?
Havoc, over 1 year ago
I’m guessing this will also help with thick accents?