
Contrastive Self-Supervised Learning

97 points by ankeshanand, over 5 years ago

5 comments

fxtentacle, over 5 years ago
They kind of slip it under the rug that for the PASCAL VOC tests, unsupervised learning was only used as pre-training and then followed by supervised training before evaluation. That's the difference between "this course will teach you Spanish" and "this course is good preparation to do before you start your actual Spanish course".

Also, while it is laudable that they attempt to learn slow higher-level features, the result of contrastive loss functions is still very much detail-focused; it just is so in a translationally invariant way.

A common problem for image classification is that the AI will learn to recognize high-level fur patterns, as opposed to learning the shape of the animal. Using contrastive loss terms like in their example will drive the network towards having the same feature vector for adjacent pixels, meaning that the fur pattern detector needs to become translation-invariant. But the contrastive loss term will NOT prevent the network from recognizing the fur, rather than the shape, as is claimed in this article.
Reply #22215554 not loaded
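A minimal sketch of the contrastive objective under discussion, assuming an InfoNCE-style formulation in PyTorch (the function name, temperature, and pairing scheme are illustrative assumptions, not taken from the article). It makes the comment's point concrete: the loss only pulls paired feature vectors together and pushes unpaired ones apart, so nothing in it prefers shape features over translation-invariant texture features.

```python
# Illustrative InfoNCE-style contrastive loss (hypothetical names/values).
import torch
import torch.nn.functional as F

def info_nce_loss(anchors, positives, temperature=0.1):
    """anchors, positives: (N, D) features; row i of each forms a positive pair."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    # Similarity of every anchor to every candidate; the diagonal holds the
    # positives, and every off-diagonal entry serves as a negative.
    logits = a @ p.t() / temperature          # (N, N)
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```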
jph00, over 5 years ago
There's a lot to like in this article, but I don't quite agree with the setup. I think it's better to think of "contrastive" approaches as being orthogonal to basic self-supervised learning methods - they represent an additional piece you can add to your loss function that results in very significant improvements. This approach can be combined with existing self-supervised pretext tasks.

I've discussed these ideas here, for those that are interested in learning more: https://www.fast.ai/2020/01/13/self_supervised/
Reply #22214641 not loaded
Reply #22213802 not loaded
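For a concrete reading of "an additional piece you can add to your loss function", here is a rough sketch combining a pretext-task loss (rotation prediction, one common choice) with a contrastive term; the module names, the rotation task, and the 0.5 weighting are hypothetical, not from the linked post.

```python
# Hypothetical combination of a pretext-task loss with a contrastive term.
import torch
import torch.nn.functional as F

def combined_loss(encoder, rotation_head, x_rotated, rotation_labels,
                  x_view1, x_view2, weight=0.5, temperature=0.1):
    # Pretext task: predict which of four rotations was applied to the input.
    rot_logits = rotation_head(encoder(x_rotated))
    pretext = F.cross_entropy(rot_logits, rotation_labels)

    # Contrastive term: two augmented views of the same image should map
    # to similar embeddings (InfoNCE over the batch, as sketched above).
    z1 = F.normalize(encoder(x_view1), dim=1)
    z2 = F.normalize(encoder(x_view2), dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    contrastive = F.cross_entropy(logits, targets)

    return pretext + weight * contrastive
```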
allovernow, over 5 years ago
Great post. For an ML engineer, HN can be a goldmine sometimes! I've gotten a bunch of ideas for work from submissions. The pace at which ML is expanding is phenomenal, no doubt in part thanks to the open nature of arxiv. As the sum of so many centuries of achievement, it really makes me proud to be human...and I'm excited to watch as it changes the world.
bobosha, over 5 years ago
Great write-up. I especially liked the section on Contrastive Predictive Coding; I think that's going to be the next iteration of ML.
Reply #22214174 not loaded
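For readers unfamiliar with Contrastive Predictive Coding, the core move is to predict future latent representations from an autoregressive context and score the prediction against in-batch negatives with InfoNCE. A hypothetical minimal sketch follows (the module names, sizes, and single-offset prediction are illustrative assumptions, not the article's implementation):

```python
# Hypothetical CPC-style step: predict a future latent from context,
# then score it against in-batch negatives via InfoNCE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCStep(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)   # autoregressive context model
        self.predict = nn.Linear(dim, dim)              # W_k for one fixed offset k

    def forward(self, z):                    # z: (N, T, dim) encoded patches/frames
        context, _ = self.gru(z[:, :-1])     # summarize z_1 .. z_{T-1}
        pred = self.predict(context[:, -1])  # predict z_T from the last context, (N, dim)
        target = z[:, -1]                    # true future latent, (N, dim)
        logits = pred @ target.t()           # other rows in the batch act as negatives
        labels = torch.arange(z.size(0), device=z.device)
        return F.cross_entropy(logits, labels)
```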
pequalsnp, over 5 years ago
I hadn’t heard of this before. Cool. Going to share this with my team on Monday.