
Adversarial Reprogramming of Neural Networks

98 points by paufernandez almost 7 years ago

4 comments

cs702 almost 7 years ago
A trained deep neural net can be viewed as -- indeed, it is -- a program that accepts some input data and produces some output.

The idea here, at a very high level, is to use other neural nets that (a) accept *different* input data and learn to 'adversarially embed it' into the input data accepted by the first neural net, and (b) extract from the output of the first neural net the actual output desired by the attacker... without ever touching the first neural net.

The authors demonstrate adversarial neural nets that target a deep convnet trained to classify ImageNet data. They are able to alter this convnet's function from ImageNet classification to counting squares in an image, classifying MNIST digits, and classifying CIFAR-10 images... without ever touching the convnet.

Great work.
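Roughly, the recipe described above is: freeze the target network, learn a perturbation (the "adversarial program") that surrounds the embedded input, and fix a many-to-one mapping from the target's output labels to the labels of the new task. Below is a minimal PyTorch sketch of that idea, assuming a frozen torchvision ResNet-50 as the target and MNIST as the hijacking task; the names `program`, `embed`, and `label_map`, and the particular label assignment, are illustrative placeholders, not taken from the paper's code.

```python
# Sketch of adversarial reprogramming: only `program` is trained;
# the pretrained ImageNet classifier is frozen and merely queried.
import torch
import torch.nn.functional as F
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen target network: never modified, only queried.
target = torchvision.models.resnet50(weights="IMAGENET1K_V1").to(device).eval()
for p in target.parameters():
    p.requires_grad_(False)

# Learned adversarial "program": a full-frame perturbation.
program = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)

# Mask that reserves the centre 28x28 patch for the embedded MNIST digit.
mask = torch.zeros(1, 3, 224, 224, device=device)
mask[:, :, 98:126, 98:126] = 1.0

def embed(x_mnist):
    """Place a (b, 1, 28, 28) MNIST batch in the centre of an ImageNet-sized
    frame and add the learned program in the surrounding region."""
    b = x_mnist.size(0)
    digit = torch.zeros(b, 3, 224, 224, device=device)
    digit[:, :, 98:126, 98:126] = x_mnist.repeat(1, 3, 1, 1)  # grey -> 3 channels
    return torch.tanh(digit + (1.0 - mask) * program)  # keep pixels bounded

# Fixed many-to-one mapping from the 1000 ImageNet labels to 10 MNIST labels;
# assigning ImageNet class i to MNIST label i % 10 is an arbitrary choice.
label_map = torch.arange(1000, device=device) % 10

def reprogrammed_logits(x_mnist):
    logits_1000 = target(embed(x_mnist))
    logits_10 = torch.zeros(x_mnist.size(0), 10, device=device)
    logits_10.index_add_(1, label_map, logits_1000)  # sum logits per mapped label
    return logits_10

# Training step: gradients flow through the frozen convnet into `program` only.
opt = torch.optim.Adam([program], lr=0.05)

def train_step(x_mnist, y):
    opt.zero_grad()
    loss = F.cross_entropy(reprogrammed_logits(x_mnist.to(device)), y.to(device))
    loss.backward()
    opt.step()
    return loss.item()
```

The only trainable tensor is `program`; the pretrained convnet is only ever queried, which is the "without ever touching the first neural net" part of the description above.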
Erlich_Bachman almost 7 years ago
What are some of the deep/philosophical reasons why it is so easy to construct an adversarial example for a contemporary vanilla deep CNN? Why do individual pixels get so much power over what the network decides its output should be?

During training, each input sample is basically driving each weight to be adjusted so that that sample is classified correctly, while other inputs are still classified correctly too. The way the network should thus see its world is that inputs can only look like those samples; the world cannot be anything else. So the computationally easiest thing for the network to learn should be the features that differ most between the classes. In an adversarial example, those few altered pixels cannot possibly be what differs most, mathematically, between that class and the others.

How does this happen? It is easy to understand why it would be easy to fool a network that looks for a leopard-print couch with an image of a real leopard, because leopard colors and texture are exactly what the network was looking for during training: the patterns of the fooling picture were in the input. Given that such a network is only a gross simplification of a real brain, it is easy to see that it can be fooled that way. But just a few pixels? The network was not looking for those pixels during training; it was not optimized to look for them. Why would it ever treat them as carrying that much information? Does it optimize for random things more strongly than for the actual classification signal that drives its weights? Does it have so many pixels that it assigns random importance to them, so that out of millions there are always one or two that happen to decide so much about the overall result?

Is it because the network looks for the combination of certain parameters, and treats the exactness of a combination as the most important factor, more important than its global context? So that the adversarially modified pixels look like they have the most exact ratios between each other, even though their ratios relative to the rest of the pixels are not on par at all, and the network decides that the most exact combination carries the most information? And then, why isn't this easily combated by regularization and things like dropout?
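One way to make the "power of individual pixels" concrete is the fast gradient sign method (FGSM) of Goodfellow et al.: because the loss is differentiable with respect to the input, every pixel can be nudged by a tiny amount in whichever direction increases the loss, and millions of tiny coordinated nudges add up to a flipped prediction. A minimal PyTorch sketch, assuming only a generic differentiable classifier `model` and a labelled input `(x, y)` (placeholders, not anything from the paper):

```python
# FGSM sketch: perturb every pixel by +/- epsilon in the direction that
# increases the loss, using the gradient of the loss w.r.t. the input.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.01):
    """Return x perturbed by epsilon * sign(dL/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Each pixel moves only a little, but all moves are chosen to raise the
    # loss, so their combined effect on the logits can be large.
    return (x + epsilon * x.grad.sign()).detach()
```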
cozzyd almost 7 years ago
Can't wait to adversarially modify pedestrian crossing signs to stop signs.
squidbot almost 7 years ago
Is this effectively brainwashing?