
“Less than one”-shot learning

94 points · by monsieurpng · over 4 years ago

10 comments

alquemist · over 4 years ago
The title is misleading. The core technique still uses all 60,000 images from MNIST, but "distills" them into 10 images that contain the information from the original 60,000. The 10 "distilled" images look nothing like digits. Learning a complex model from 10 (later reduced to 2) "distilled" number arrays *is* an interesting research idea, but it has little to do with reducing the size of the input dataset. Arguably, the heavy lifting of the learning process has moved from training the model to generating the distilled dataset. There is also some unconvincing discussion of synthetic datasets, and it remains entirely unclear how those synthetic datasets relate to real-world scenarios.

> In a previous paper, MIT researchers had introduced a technique to “distill” giant data sets into tiny ones, and as a proof of concept, they had compressed MNIST down to only 10 images.
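For intuition about what "distilling" a dataset means, here is a toy sketch (my own illustration, not the paper's method): 1,000 points from a 1-D linear-regression problem are compressed into 2 synthetic points, chosen so that a model trained *only* on those 2 points fits the full dataset. The distilled points need not resemble the originals, which is exactly the point being made above.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" dataset: 1,000 noisy samples of y = 2x (a stand-in for MNIST's 60k images).
x_real = rng.uniform(-1.0, 1.0, 1000)
y_real = 2.0 * x_real + rng.normal(0.0, 0.1, 1000)

def train_on(syn):
    """Inner loop: fit a 1-parameter linear model on the synthetic set only
    (closed-form least squares, with a tiny ridge term for stability)."""
    xs, ys = syn[:, 0], syn[:, 1]
    return np.sum(xs * ys) / (np.sum(xs * xs) + 1e-6)

def real_loss(syn):
    """Outer objective: how well the distilled-trained model fits the real data."""
    w = train_on(syn)
    return np.mean((w * x_real - y_real) ** 2)

# Distill 1,000 points into 2 synthetic (x, y) pairs by descending the
# outer loss, here with finite-difference gradients for simplicity.
syn = np.array([[1.0, 0.0], [-1.0, 0.0]])
eps, lr = 1e-5, 0.1
for _ in range(300):
    grad = np.zeros_like(syn)
    base = real_loss(syn)
    for i in range(2):
        for j in range(2):
            bumped = syn.copy()
            bumped[i, j] += eps
            grad[i, j] = (real_loss(bumped) - base) / eps
    syn -= lr * grad

# A model trained on just the 2 distilled points recovers the true slope.
print(round(train_on(syn), 1))  # close to 2.0
```

The heavy lifting has indeed moved into the outer optimization: in the original work the inner training is gradient descent and the outer gradients are (roughly) obtained by backpropagating through it, rather than by finite differences.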
nharada · over 4 years ago
Direct link to paper: https://arxiv.org/pdf/2009.08449.pdf

Interesting paper, although the headline is of course sensational. The crux of the paper is that by using "soft labels" (for example, a probability distribution rather than a one-hot vector), it's possible to create a decision boundary that encodes more classes than you have examples. In fact, only two examples can be used to encode any finite number of classes.

This is interesting because it means that, in theory, ML models should be able to learn decision spaces that are far more complex than the input data has traditionally been thought to encode. Maybe one day we can create complex, generalizable models using a small amount of data.

As written, though, this paper does not provide much actionable information. The problem is a toy problem, far removed from "modern" AI techniques (especially things like deep learning or boosted trees). The paper is also impractical in the sense that in real life you don't know what your decision boundary should look like (that's what you learn, after all), and there's no obvious way to know which data to collect to get the decision boundary you want.

In other words, this paper has said "this representation is mathematically possible" and is hoping that future work can actually make it useful in practice.
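The two-examples-for-many-classes trick can be sketched concretely. In this illustrative 1-D example (the prototype positions and label values are mine, not from the paper), a distance-weighted soft-label kNN rule with just two prototypes separates three classes: points near either prototype take its dominant class, while around the midpoint, where the two distributions blend equally, the shared middle class wins.

```python
import numpy as np

# Two prototypes on a line, each carrying a *soft* label over 3 classes.
prototypes = np.array([0.0, 1.0])
soft_labels = np.array([
    [0.65, 0.35, 0.00],   # prototype at x=0 leans toward class 0
    [0.00, 0.35, 0.65],   # prototype at x=1 leans toward class 2
])

def predict(x, eps=1e-9):
    """Blend the prototypes' label distributions, weighted by inverse
    distance, and return the argmax class."""
    d = np.abs(prototypes - x) + eps   # eps avoids division by zero
    w = 1.0 / d
    w /= w.sum()
    return int(np.argmax(w @ soft_labels))

print([predict(x) for x in (0.05, 0.5, 0.95)])  # [0, 1, 2]
```

At x = 0.5 the blended distribution is [0.325, 0.35, 0.325], so class 1 wins even though neither prototype is "a class-1 example"; that is the sense in which two examples encode three (or, with more care, arbitrarily many) classes.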
codelord · over 4 years ago
The title is click-bait. This has been known for several years [1], the technique has little practical value, and the assertion that you can learn from no data is completely false and misleading. The training data was compressed into a few examples. To the journalist: it's OK not to maximize for click-bait when you write an article.

[1]: https://www.ttic.edu/dl/dark14.pdf
iandanforth · over 4 years ago
"Carefully engineered their soft labels" is the same thing as training the network. Just because you encode information outside of the weights doesn't mean you're not encoding information from the training data.

It's like saying: here's the ideal partitioning scheme, memorize this.
echelon · over 4 years ago
I've started to view *Technology Review* as a PR puff piece for MIT. They often overstate claims or leave out critical details.

As an example, the Media Lab is still citing innovation in deep fakes, claiming entirely novel results that people are shocked to see. They hype their own researchers even though there are kids on YouTube who had been making similar content up to a year before Technology Review's publication.

I suspect they do the same with fields I'm less familiar with.
ummonk · over 4 years ago
Am I correct that they don't use a train-test split for generating these distilled images? Until you test on new images outside of what is input to the distiller, it seems to be a way to just overfit specific images, probably by combining unique elements of each into a single composite image. There are plenty of classical signal-processing ways to do this (including just building a composite patchwork quilt).
etaioinshrdlu · over 4 years ago
I like to think of this as adversarial training data. Adversarial inputs in general trick a NN into producing a specific output; adversarial training data tricks the NN into learning specific weights.

Note that the distilled data is no longer even from the same "domain" as the input data. They're basically adversarial inputs.
m101010 · over 4 years ago
If I understand correctly, the key benefit would be that models could be trained on smaller datasets, therefore reducing the time spent computing the models?

I am not convinced that this time saving exceeds the time spent engineering the combined and synthesized data.
supernova87a · over 4 years ago
Well, I'm sure nothing will be societally objectionable about this!
breck · over 4 years ago
> ...very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life.

I see this a lot. It's completely wrong. I'm not trying to pick on the author here; I think 95%+ of people share this misunderstanding of deep learning.

If you see "only one" horse, even for just a second, you are really seeing a huge number of horses: from various angles, with various shades of lighting. The motions of the horse, the motions of your head (even if slight), and the undulations of the light generate a much larger amount of effectively augmented training data. If you look at a horse for a minute, it could be the equivalent of training on a million images of a horse. I'm not sure of the exact order of magnitude, but it's certainly orders of magnitude more than "one" horse.

(Relatedly: some people say there is an experiment you can conduct at home to see the actual images your brain is training on.)
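The "one glimpse is really many views" argument is essentially data augmentation. A toy sketch (the stand-in image and jitter parameters are arbitrary, not from any cited work): a single 32×32 glimpse expands into a thousand shifted, flipped, and relit training views.

```python
import numpy as np

rng = np.random.default_rng(0)
glimpse = rng.random((32, 32))  # stand-in for one view of the horse

def jitter(img):
    """One 'new' view: a small translation, an optional mirror flip,
    and a lighting (brightness) change."""
    dy, dx = rng.integers(-2, 3, size=2)
    out = np.roll(img, (dy, dx), axis=(0, 1))
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)

# One second of looking becomes a thousand slightly different training images.
views = np.stack([jitter(glimpse) for _ in range(1000)])
print(views.shape)  # (1000, 32, 32)
```

This is only a crude stand-in for what vision actually provides (no 3-D viewpoint changes, no occlusion), but it illustrates the multiplier: "one" example is really a distribution of correlated examples.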