
Attacking machine learning with adversarial examples

308 points by dwaxe, over 8 years ago

16 comments

Dowwie, over 8 years ago
"attackers could target autonomous vehicles by using stickers or paint to create an adversarial stop sign that the vehicle would interpret as a 'yield' or other sign"

yeah, this article needs to go to the top of HN and stay there for a while
pakl, over 8 years ago
Adversarial examples are just one way to prove that deep learning (deep convolutional nets) fails at generalizable vision. It's not a security problem, it's a fundamental problem.

Instead, ask yourselves why these deep nets fail after being trained on huge datasets -- and why even more data doesn't seem to help.

The short answer is that mapping directly from static pixel images to human labels is the wrong problem to be solving.

Edit: fixed autocorrect typo
scythe, over 8 years ago
I'm actually wondering how much the no-free-lunch theorem for data compression affects adversarial examples. A neural network can be conceptualized as an extremely efficient compression technique with a very high decoding cost[1]; the NFLT implies that such efficiency must have a cost. If we follow this heuristic intuitively, we're led to the hypothesis that an ANN needs to expand its storage space significantly in order to prevent adversarial examples from existing.

[1] -- consider the following encoding/decoding scheme: train a NN to recognize someone's face, and decode by generating random images until one of them is recognized as said face. If this works, then the Kolmogorov complexity of the network must exceed the sum of the complexities of all "stored" faces.
danbruc, over 8 years ago
So what features are those networks actually learning? What are they looking for? They cannot be much like the features used by humans, because the features used by humans are robust against such adversarial noise. I am also somewhat tempted to say that they cannot be too different from the features used by humans either, because otherwise, it seems, they would not generalize well. If they just learned some random accidental details in the training set, they would probably fail spectacularly in the validation phase, but they don't. And we would of course have a contradiction with the former statement.

So it seems that there are features quite different from the features used by humans that are still similarly robust unless you specifically target them. And they also correlate well with features used by humans unless you specifically target them. Real world images are very unusual images in the sense that almost all possible images are random noise while real world images are [almost] never random noise. And here I get a bit stuck: I have this diffuse idea in my head that most possible images do not occur in the real world, and that there are way more degrees of freedom in directions that just don't occur in the real world, but this idea is too diffuse for me to pin down and write out.
zitterbewegung, over 8 years ago
There was a presentation at DEF CON 2016 about another software package that attacked other deep learning models. See:

https://www.youtube.com/watch?v=JAGDpJFFM2A

https://github.com/cchio/deep-pwning
spott, over 8 years ago
Are there any examples of these kinds of adversarial patterns that don't look like noise?

While it is pretty easy to add noise to another image, it isn't exactly easy to do it to a real object. The noise wouldn't remain the same as you change perspective with respect to the sign, which would likely change its effectiveness.
jseip, over 8 years ago
I can't see that image without thinking of Snow Crash. This is almost literally Snow Crash for neural nets.
L_226, over 8 years ago
Anyone want to make a mobile app that emits 'noisy' light, so that when you use your phone in public, CCTV facial recognition fails?*

I'd be interested to know if this is a viable concealment strategy. It might only be effective at night or in low-light situations, so sunlight doesn't wash out the noise. It would be pretty subtle to use as well; how many people do you see walking around with their noses stuck to a screen?

* For research purposes only, of course.
Terribledactyl, over 8 years ago
I've also been interested in using adversarial examples to extract sensitive info from models: either extracting unique info from the training set (doesn't seem feasible, but I can't prove it), or doing a "forced knowledge transfer" when a competitor has a well-trained model and you don't.
a_c, over 8 years ago
I wonder if adversarial examples can be deliberately used as a kind of steganography? Kind of like a hidden QR code. On the surface, the product looks like a panda, with a deliberately added signal. Under the hood, it is classified as a gibbon. It could be used to verify the authenticity of a particular product.
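A minimal sketch of what that "deliberately added signal" could look like: a single targeted, FGSM-style step in PyTorch. The pretrained model, the epsilon, and the assumption that the input is an unnormalized (1, 3, 224, 224) tensor in [0, 1] are all illustrative choices, and in practice the step usually has to be iterated before the hidden label sticks.

```python
import torch
import torchvision.models as models

# Illustrative setup: any pretrained ImageNet classifier (requires a recent torchvision)
# and an input tensor `x` of shape (1, 3, 224, 224) with values in [0, 1].
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def embed_target_class(x: torch.Tensor, target: int, eps: float = 0.02) -> torch.Tensor:
    """One signed-gradient step that nudges x toward class `target`."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), torch.tensor([target]))
    loss.backward()
    # Move *against* the gradient of the target-class loss, then keep pixels valid.
    return (x - eps * x.grad.sign()).detach().clamp(0.0, 1.0)

# x_marked = embed_target_class(x, target=368)  # 368 = "gibbon" in the usual ImageNet-1k ordering
# model(x_marked).argmax(dim=1) should read back the target class while x_marked
# still looks like the original image to a human.
```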
oh_sigh, over 8 years ago
As a defensive measure, why can't random noise just be added to the image prior to the classification attempt?
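The noise-before-classification idea above is easy to prototype. A rough sketch, assuming a generic PyTorch classifier `model` and batched inputs in [0, 1] (the noise level and sample count are placeholders):

```python
import torch

def noisy_predict(model: torch.nn.Module, x: torch.Tensor,
                  sigma: float = 0.05, n_samples: int = 10) -> torch.Tensor:
    """Average class probabilities over several randomly perturbed copies of x."""
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model((x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)), dim=1)
            for _ in range(n_samples)
        ])
    return probs.mean(dim=0)  # shape: [batch, n_classes]
```

One common objection is that the adversarial perturbation can simply be made larger than the added noise, and an attacker who expects the randomization can optimize against it in expectation, so this tends to buy robustness only against small or oblivious perturbations.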
Florin_Andrei, over 8 years ago
It's like 'fake news', but for computers.
aidenn0, over 8 years ago
Without knowing much about ML, it seems to me that using two (or more) very different methods could be a reasonable defense: if the methods are sufficiently different, it gets exponentially harder to find a gradient that fools all of them. What to do when the outputs strongly disagree is a good question, but switching to a failsafe mode seems better than what we have now.
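A rough illustration of the "disagree, then failsafe" idea above. `models` is assumed to be a list of independently trained PyTorch classifiers over the same label set; the unanimity threshold and the -1 sentinel are placeholders:

```python
import torch

def predict_or_failsafe(models: list, x: torch.Tensor,
                        min_agreement: float = 1.0) -> torch.Tensor:
    """Majority label per example, or -1 (defer to a failsafe) when the models disagree."""
    with torch.no_grad():
        preds = torch.stack([m(x).argmax(dim=1) for m in models])  # [n_models, batch]
    majority, _ = preds.mode(dim=0)                                # most common label per example
    agreement = (preds == majority).float().mean(dim=0)            # fraction of models agreeing
    return torch.where(agreement >= min_agreement, majority,
                       torch.full_like(majority, -1))
```

One caveat: adversarial examples often transfer between independently trained models, so the disagreement signal tends to be weaker than the intuition suggests.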
huula, over 8 years ago
Every example provided in the article is model- and training-data-specific. It only tells you one thing: your data is not telling the truth. So why not get better data?
n3x10e8, over 8 years ago
This is going to be the SQL injection of the AI age.
jmcminis, over 8 years ago
Adding high-frequency noise "fools" ML but not the human eye. It feels like this is a general failure of regularization schemes.

Why not try training multiple models on different levels of coarse-grained data? Evaluate the image on all of them and plot the class probability as a function of coarse graining. Ideally it's some smooth function. If it's not, there may be something adversarial (or bad training) going on.
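A sketch of the coarse-graining check described above, assuming any PyTorch image classifier; the blur-by-resampling scheme, the scale list, and the smoothness heuristic are illustrative choices:

```python
import torch
import torch.nn.functional as F

def coarse_grain_profile(model: torch.nn.Module, x: torch.Tensor,
                         scales=(1.0, 0.75, 0.5, 0.25)) -> torch.Tensor:
    """Class-probability rows for x evaluated at several coarse-graining levels."""
    size = x.shape[-2:]
    rows = []
    with torch.no_grad():
        for s in scales:
            # Downsample then upsample back, discarding high-frequency detail.
            coarse = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            coarse = F.interpolate(coarse, size=size, mode="bilinear", align_corners=False)
            rows.append(torch.softmax(model(coarse), dim=1)[0])
    return torch.stack(rows)  # shape: [len(scales), n_classes]

# Heuristic: if the top class's probability swings wildly across scales instead of
# varying smoothly, treat the input as suspicious (adversarial, or badly trained on).
```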