
科技回声 (Tech Echo)

A tech-news platform built with Next.js, serving global tech news and discussion.


© 2025 科技回声. All rights reserved.

AVA: A Finely Labeled Video Dataset for Human Action Understanding

44 points | by hurrycane | over 7 years ago

2 comments

SloopJon, over 7 years ago

From the download page:

> The AVA dataset contains 192 videos split into 154 training and 38 test videos. Each video has 15 minutes annotated in 3 second intervals, resulting in 300 annotated segments.

So basically this is a couple of CSV files annotating 192 videos, which are hosted on YouTube. ava_train_v1.0.csv is about 7 MB.
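To make the comment concrete, here is a minimal sketch of parsing such an annotation CSV and the arithmetic behind the 300 segments per video. The column layout (video ID, timestamp, bounding-box coordinates, action ID) and the sample video ID are illustrative assumptions, not taken from the actual file.

```python
import csv
import io

# Hypothetical sample rows in a flat annotation-CSV layout
# (assumed columns: video_id, timestamp, x1, y1, x2, y2, action_id).
# The video ID and values here are made up for illustration.
sample = """\
-5KQ66BBWC4,0902,0.077,0.151,0.283,0.811,80
-5KQ66BBWC4,0905,0.077,0.151,0.283,0.811,12
"""

rows = list(csv.reader(io.StringIO(sample)))
videos = {row[0] for row in rows}
print(len(rows), len(videos))  # 2 annotation rows covering 1 video

# Each video has 15 minutes annotated at 3-second intervals:
segments_per_video = (15 * 60) // 3
print(segments_per_video)  # 300, matching the download page
```

Since each row is just a video ID plus a time and a label, the whole dataset really is a thin index over externally hosted YouTube videos, which is the commenter's point.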
Comment #15522619 not loaded
lifeisstillgood, over 7 years ago

The most interesting thing I found was "We use movies as the source of AVA".

While the datasets will only grow, movies are not realistic: they are by design faked, acted, well lit, etc. While that is probably the best thing to do with a starting set, I am waiting for the CNN/RNN to start saying (much like the early case of the Black female Stanford researcher who was not identified as a human face) "that person is not walking - I know walking, it's just like John Cleese."
Comment #15518845 not loaded