
Fast inference of OpenAI's Whisper on Rockchip processors

2 points by keveman over 1 year ago

2 comments

smpanaro over 1 year ago
What's an example use case for something like this? "At the edge" makes me think offline, but are you generating audio at anything faster than real time in that case?

Would be curious to see an even lower-cost / lower-power option. Seems this one is $120-170.
keveman over 1 year ago
The tiny.en Whisper model transcribes speech at 30x real-time speeds on an Orange Pi 5.
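
For context on what a "30x real-time" figure means, here is a minimal sketch of how one might measure it using the reference openai-whisper Python package on a CPU. This is not the Rockchip NPU build discussed in the submission (which presumably uses its own runtime), and the file name sample.wav is a placeholder.

    import time

    import whisper  # pip install openai-whisper

    # Same model size quoted in the thread.
    model = whisper.load_model("tiny.en")

    # load_audio resamples to 16 kHz mono, so length / SAMPLE_RATE gives seconds of speech.
    audio = whisper.load_audio("sample.wav")  # placeholder input file
    audio_seconds = len(audio) / whisper.audio.SAMPLE_RATE

    start = time.perf_counter()
    result = model.transcribe(audio)
    elapsed = time.perf_counter() - start

    # Real-time factor: seconds of speech transcribed per second of compute.
    print(f"Transcript: {result['text'][:80]}...")
    print(f"Real-time factor: {audio_seconds / elapsed:.1f}x")

A real-time factor of 30x would mean one minute of speech is transcribed in about two seconds of compute.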