
Robust Adversarial Examples

166 points by eroo · almost 8 years ago

8 comments

hyperion2010 · almost 8 years ago
My own view of this, having spent some time in visual neuroscience, is that if you really want vision that is robust to these kinds of issues, you have to build a geometric representation of the world first and then learn/map categories from that. Trying to jump from a matrix to a label without an intervening topological/geometric model of the world (having 2 eyes and/or the ability to move helps with this) is asking for trouble, because we think we are recapitulating biology when in fact we are doing nothing of the sort (as these adversarial examples reveal beautifully).
arnioxux · almost 8 years ago
There are plenty of adversarial examples for humans too: http://i.imgur.com/mOTHgnf.jpg
tachyonbeam · almost 8 years ago
IMO, what these adversarial examples give us is a way to boost training data. We should augment training datasets with adversarial examples, or use adversarial training methods. The resulting networks would only become more robust.

As for self-driving cars, this is a good argument for having multiple sensing modalities in addition to visual: radar/lidar/sonar, multiple cameras, and infrared in addition to visible light.
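As a rough illustration of the adversarial-training idea in the comment above, here is a minimal PyTorch-style sketch using FGSM-crafted examples to augment each batch. The `model`, `optimizer`, and epsilon value are illustrative assumptions, not details from the thread or the article.

```python
# Minimal sketch: augment each training batch with FGSM adversarial
# examples (Goodfellow et al.). Assumes a PyTorch image classifier
# `model` and inputs `x` in [0, 1] with integer labels `y`.
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, epsilon=0.03):
    """Craft FGSM adversarial examples for one batch (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimizer step on an even mix of clean and adversarial inputs."""
    x_adv = fgsm_examples(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```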
bsder · almost 8 years ago
I can paint a road to a tunnel on a mountainside and fool some number of people. Meep. Meep.

The problem isn't that there are adversarial inputs. The problem is that the adversarial inputs aren't *also* adversarial (or detectable) to the human visual system.
std_throwaway · almost 8 years ago
Does this effect carry over to classifiers which were trained with different training data?
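One way this transferability question could be checked empirically is to craft examples against one model and measure how often they also fool an independently trained model. The sketch below assumes the `fgsm_examples` helper from the earlier sketch; `model_a` and `model_b` are illustrative names, not anything from the thread.

```python
# Rough transferability check: do examples crafted against model A
# also get misclassified by an independently trained model B?
import torch

@torch.no_grad()
def transfer_rate(model_b, x_adv, y):
    """Fraction of adversarial inputs (crafted on model A) that model B
    also misclassifies."""
    preds = model_b(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# e.g.: x_adv = fgsm_examples(model_a, x, y)
#       rate = transfer_rate(model_b, x_adv, y)
```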
pvillano · almost 8 years ago
I don't know how you guys think this is an adversarial example. I see a picture of a desktop computer.
sharemywin · almost 8 years ago
To me it's an image of a picture, regardless of the contents of the picture.
therajiv · almost 8 years ago
It's not clear to me how malicious actors could exploit this observation to confuse self-driving cars. That said, I don't think this discredits the point of the article; it's important to note how easily deep learning models can be fooled if you understand the math behind them. I just think the example of tricking self-driving cars is difficult to relate to / understand.