
Deep Dive into the Vision Transformers Paper

40 points, by gschoeni, over 1 year ago

2 comments

gschoeni, over 1 year ago
We have a reading club every Friday where we go over the fundamentals of a lot of the state-of-the-art techniques used in Machine Learning today. Last week we dove into the "Vision Transformers" paper from 2021, where the Google Brain team benchmarked training large-scale transformers against ResNets.

Though it is not groundbreaking research as of this week, I think with the pace of AI it is important to dive deep into past work and what others have tried! It's nice to take a step back and learn the fundamentals as well as keep up with the latest and greatest.

Posted the notes and recap here if anyone finds it helpful:

https://blog.oxen.ai/arxiv-dives-vision-transformers-vit/

Also would love to have anyone join us live on Fridays! We've got a pretty consistent and fun group of 300+ engineers and researchers showing up.
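For context on the technique itself: the paper's core step is to split each image into fixed-size 16x16 patches and linearly project every patch into a token for a standard Transformer encoder. Below is a minimal PyTorch sketch of that patch-embedding step; the module name, image size, and embedding width are illustrative assumptions, not taken from the post or from the paper's code.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """ViT-style patch embedding: carve the image into non-overlapping
    16x16 patches and linearly project each one to an embedding vector.
    A Conv2d with kernel_size == stride == patch_size is equivalent to
    flattening each patch and applying a shared linear layer."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                     # x: (B, 3, 224, 224)
        x = self.proj(x)                      # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)   # (B, 196, 768): one token per patch

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```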
vlovich123, over 1 year ago
I wonder if overlapping the patches would improve accuracy further, as a way to kind of anti-alias the data learned / inferred. In other words, if position 0 is 0,0–16,16 and position 1 is 16,0–32,16, instead we use 12,0–28,16 for position 1, where it overlaps 4 pixels of the previous position. You'd have more patches / it would be more expensive compute-wise, but it might de-alias any artificial aliasing that the patches create during both training and inference.
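One quick way to prototype the idea in that comment is to keep the 16x16 patches but slide them with a stride of 12, so neighbouring patches share a 4-pixel band. A sketch under those assumptions (sizes and variable names are illustrative, not from the paper or any actual experiment):

```python
import torch
import torch.nn as nn

def patches_per_side(img=224, patch=16, stride=16):
    """How many patch positions fit along one image dimension."""
    return (img - patch) // stride + 1

# Non-overlapping patches (the paper's setup): stride == patch size.
print(patches_per_side(stride=16) ** 2)  # 196 patches (14 x 14)

# Overlapping patches as suggested: stride 12, so a 4-pixel overlap.
print(patches_per_side(stride=12) ** 2)  # 324 patches (18 x 18)

# Only the stride of the patch-embedding conv needs to change; the
# Transformer just sees a longer sequence of tokens.
overlap_embed = nn.Conv2d(3, 768, kernel_size=16, stride=12)
tokens = overlap_embed(torch.randn(1, 3, 224, 224)).flatten(2).transpose(1, 2)
print(tokens.shape)  # torch.Size([1, 324, 768])
```

For a 224x224 input this grows the sequence from 196 to 324 tokens, so the quadratic self-attention term alone costs roughly (324/196)² ≈ 2.7x more, which lines up with the "more expensive compute-wise" caveat.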