
科技回声

A technology news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

IG65M-PyTorch: video models pre-trained on over 65M Instagram videos

10 points, by danieljh, over 5 years ago
Think: ResNet+ImageNet, but for videos: https://github.com/moabitcoin/ig65m-pytorch

We ported the R(2+1)D model (from CVPR 2018, see https://arxiv.org/abs/1711.11248) and the weights pre-trained by Facebook Research on over 65 million Instagram videos (from CVPR 2019, https://arxiv.org/abs/1905.00561) to PyTorch, and released the architecture, the weights, tools for conversion, and a feature-extraction example.

The official Facebook Research codebase can be found at https://github.com/facebookresearch/vmz. These models and pre-trained weights are immensely powerful, e.g. for fine-tuning on action-recognition tasks or for extracting features from 3D data such as videos.

We hope the PyTorch models and weights are useful for folks out there, and easier to use and work with than the goal-driven, Caffe2-based, research-oriented official codebase.
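The core idea of the R(2+1)D architecture the post refers to is factorizing each 3D convolution into a 2D spatial convolution followed by a 1D temporal convolution, with a nonlinearity in between. A minimal sketch of one such block in plain PyTorch (the class name `R2Plus1dConv` and the intermediate width `mid_ch` are illustrative choices, not names from the repo; the paper picks `mid_ch` so the parameter count matches the original 3D conv):

```python
import torch
import torch.nn as nn


class R2Plus1dConv(nn.Module):
    """Factorized (2+1)D convolution: a 3x3x3 conv split into
    a 1x3x3 spatial conv and a 3x1x1 temporal conv."""

    def __init__(self, in_ch: int, out_ch: int, mid_ch: int):
        super().__init__()
        # Spatial part: acts on H and W only (kernel 1x3x3), padding keeps H, W.
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        self.bn = nn.BatchNorm3d(mid_ch)
        self.relu = nn.ReLU(inplace=True)
        # Temporal part: acts on the frame axis only (kernel 3x1x1).
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.temporal(self.relu(self.bn(self.spatial(x))))


# A dummy clip: (batch, channels, frames, height, width).
clip = torch.randn(1, 3, 8, 112, 112)
block = R2Plus1dConv(in_ch=3, out_ch=64, mid_ch=45)
out = block(clip)
# Padding preserves the spatio-temporal resolution: (1, 64, 8, 112, 112).
```

The extra ReLU between the spatial and temporal convolutions is the point of the factorization: it doubles the number of nonlinearities per block at roughly the same parameter budget. For actually using the pre-trained IG-65M weights, the linked repo exposes its models through `torch.hub`, which is the easier route than rebuilding blocks by hand.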

No comments yet.