
Deep Dive into the Vision Transformers Paper

40 points by gschoeni over 1 year ago

2 comments

gschoeni over 1 year ago
We have a reading club every Friday where we go over the fundamentals of many of the state-of-the-art techniques used in machine learning today. Last week we dove into the "Vision Transformers" paper from 2021, in which the Google Brain team benchmarked training large-scale transformers against ResNets.

Though it is not groundbreaking research as of this week, I think that with the pace of AI it is important to dive deep into past work and what others have tried! It's nice to take a step back and learn the fundamentals as well as keep up with the latest and greatest.

Posted the notes and recap here if anyone finds it helpful:

https://blog.oxen.ai/arxiv-dives-vision-transformers-vit/

Also would love to have anyone join us live on Fridays! We've got a pretty consistent and fun group of 300+ engineers and researchers showing up.
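
For anyone following along, here is a minimal PyTorch sketch of the patch-embedding step at the heart of the paper, assuming ViT-Base hyperparameters (224x224 input, 16x16 patches, 768-dim tokens). The conv-as-linear-projection trick is a common equivalent formulation, not code from the paper itself:

    # Minimal sketch of the ViT patch-embedding step, assuming ViT-Base
    # hyperparameters (224x224 input, 16x16 patches, 768-dim tokens).
    import torch
    import torch.nn as nn

    patch_size, embed_dim = 16, 768

    # A conv whose kernel equals its stride cuts the image into a
    # non-overlapping grid and linearly projects each patch to a token.
    patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

    x = torch.randn(1, 3, 224, 224)             # dummy image batch
    tokens = patch_embed(x)                     # (1, 768, 14, 14)
    tokens = tokens.flatten(2).transpose(1, 2)  # (1, 196, 768): one token per patch
    print(tokens.shape)                         # torch.Size([1, 196, 768])

The resulting sequence of 196 patch tokens is what the transformer encoder then attends over, in place of the pixel grid a ResNet would convolve.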
vlovich123 over 1 year ago
I wonder if overlapping the patches would improve accuracy further, as a way to anti-alias the data learned / inferred. In other words, if position 0 covers (0,0) to (16,16) and position 1 covers (16,0) to (32,16), instead we use (12,0) to (28,16) for position 1, so it overlaps 4 pixels of the previous position. You'd have more patches, so it would be more expensive compute-wise, but it might de-alias any artificial aliasing that the patch boundaries create during both training and inference.
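
Not from the paper, but a quick sketch of what that overlap could look like in PyTorch, using the 4-pixel overlap from the example above (16x16 kernel, stride 12); the conv-based projection and dimensions are illustrative assumptions carried over from the ViT-Base sketch:

    # Sketch of the overlapping-patch idea: keep 16x16 patches but
    # slide by 12 pixels so adjacent patches share a 4-pixel strip.
    import torch
    import torch.nn as nn

    embed_dim = 768  # illustrative; matches the ViT-Base sketch above
    overlap_embed = nn.Conv2d(3, embed_dim, kernel_size=16, stride=12)

    x = torch.randn(1, 3, 224, 224)
    tokens = overlap_embed(x)                   # (1, 768, 18, 18)
    tokens = tokens.flatten(2).transpose(1, 2)  # (1, 324, 768)
    print(tokens.shape)                         # 324 tokens vs. 196 without overlap

With a stride of 12 the token count grows from 14x14 = 196 to 18x18 = 324, which is exactly the extra compute cost the comment mentions.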