TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


Turning petabytes of raw video data into a high-quality ML dataset

3 points by mvoodarla over 3 years ago

1 comment

bitcoinmaxima over 3 years ago
I really love the article, but there is a bit of a mistake in it. (Still reading, updating comment as I go)

> You have 10 cameras recording footage 24/7 at 30 FPS, 1920x1080 resolution. They're all configured to send video clips back to the cloud in 10-minute intervals. A standard compression setup results in ~1 TB of video stored per day or 27 million individual frames.

Incorrect: I generate about 130 MB files for 10-minute segments on a camera of exactly those specs. Given that there are 1440 minutes in a day, that results in about 19 GB of data per day. The author assumes that every frame is unique, but that's not true.

It depends on how much movement there is in the frame and on the GOP interval.

H.264 compression will not generate new frames if nothing has changed. There is an interval called the GOP interval which enforces that, say, every 2 seconds a keyframe is sent, meaning a complete raw frame. This ensures that new viewers can start decoding after waiting at most the GOP interval.
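The arithmetic in the comment can be checked with a few lines. A minimal sketch, using the figures from the thread (the 130 MB per 10-minute segment is the commenter's own measurement, not a general constant):

```python
# Figures from the thread: 10 cameras, 30 FPS, 1920x1080,
# ~130 MB per 10-minute H.264 segment (commenter's measurement).

FPS = 30
CAMERAS = 10
SECONDS_PER_DAY = 24 * 60 * 60

# Article's framing: count every frame as an individual frame.
frames_per_day = FPS * SECONDS_PER_DAY * CAMERAS  # ~26 million frames

# Commenter's estimate: real H.264 segment sizes instead of raw frames.
MB_PER_SEGMENT = 130
segments_per_day = 24 * 60 // 10  # 144 ten-minute segments per camera
gb_per_camera_per_day = MB_PER_SEGMENT * segments_per_day / 1000  # ~18.7 GB

print(f"{frames_per_day:,} frames/day across {CAMERAS} cameras")
print(f"~{gb_per_camera_per_day:.1f} GB/day per camera")
```

So the "27 million frames" figure is roughly right as a frame count (30 FPS × 86,400 s × 10 cameras ≈ 26 million), but inter-frame compression means those frames occupy far less than 1 TB when the scene is mostly static.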
[Comment #29753401 not loaded]