TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

How to label audio datasets automatically with Edge Impulse and Hugging Face [video]

2 points by furtiman, 10 months ago

1 comment

furtiman, 10 months ago

In this video I talk about a plugin I made in Edge Impulse (a platform for building edge AI).

The general approach is standalone and can be applied anywhere, though: use a foundational audio classifier to look at your unlabeled audio dataset. The approach is two-fold. First, we give the model a few samples of the event or sound type we are looking for and determine which of the classes the model already knows they belong to. This can be thought of as encoding our sound events in the model's "language".

After that, we give it larger samples that may or may not contain these events, and tell the model to react only when the classes identified earlier show up.

This way we can explore large audio datasets quickly. There is a good chance that some classifications will not be exactly right, but it gives you a subset of your audio to actually take a look at, instead of having to listen through all of it!