Robust Adversarial Examples

166 points by eroo almost 8 years ago

8 comments

hyperion2010 almost 8 years ago
My own view, having spent some time in visual neuroscience, is that if you really want vision that is robust to these kinds of issues, you have to build a geometric representation of the world first, and then learn/map categories from that. Trying to jump from a matrix to a label without an intervening topological/geometric model of the world (having two eyes and/or the ability to move would help with this) is asking for trouble, because we think we are recapitulating biology when in fact we are doing nothing of the sort (as these adversarial examples reveal beautifully).
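A rough sketch of the two-stage pipeline this comment gestures at: map pixels to an explicit geometric representation first (here a per-pixel depth map stands in for the "geometric model of the world"), then learn categories from that representation rather than from raw pixels. Everything below (module names, shapes, the choice of depth as the geometry) is a hypothetical illustration, not anything from the article or the comment:

```python
# Hypothetical two-stage sketch (PyTorch): pixels -> geometry -> label.
# Module names and shapes are illustrative assumptions only.
import torch
import torch.nn as nn

class GeometryFirstClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stage 1: pixels -> per-pixel depth, a crude geometric model.
        self.depth_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # 1-channel depth map
        )
        # Stage 2: categories are learned from the geometry, not raw RGB.
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        depth = self.depth_net(x)        # intervening geometric representation
        return self.classifier(depth)    # label mapped from geometry
```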
arnioxux almost 8 years ago
There are plenty of adversarial examples for humans too: http://i.imgur.com/mOTHgnf.jpg
tachyonbeam almost 8 years ago
IMO, what these adversarial examples give us is a way to boost training data. We should augment training datasets with adversarial examples, or use adversarial training methods; the resulting networks can only become more robust.

As for self-driving cars, this is a good argument for having multiple sensing modalities in addition to vision: radar/lidar/sonar, multiple cameras, and infrared in addition to visible light.
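A minimal sketch of the augmentation idea, using the fast gradient sign method (FGSM, Goodfellow et al., 2015) to produce adversarial copies of a training batch. `model`, `loss_fn`, and `eps` are placeholder assumptions, and a real training loop would also zero the parameter gradients afterwards:

```python
# FGSM-style adversarial data augmentation sketch (PyTorch).
import torch

def fgsm_augment(model, loss_fn, x, y, eps=0.03):
    """Return adversarial copies of batch x via the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()  # populates x_adv.grad (parameter grads too; zero them later)
    # Step in the direction that increases the loss; keep pixels in [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Usage: train on the union of clean and adversarial batches.
# x_aug = torch.cat([x, fgsm_augment(model, loss_fn, x, y)])
# y_aug = torch.cat([y, y])
```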
bsder almost 8 years ago
I can paint a road to a tunnel on a mountainside and fool some number of people. Meep. Meep.

The problem isn't that there are adversarial inputs. The problem is that the adversarial inputs aren't *also* adversarial (or detectable) to the human visual system.
std_throwaway almost 8 years ago
Does this effect carry over to classifiers which were trained with different training data?
pvillano almost 8 years ago
I don't know how you guys think this is an adversarial example. I see a picture of a desktop computer.
sharemywin almost 8 years ago
To me it&#x27;s an image of a picture regardless of the contents of the picture.
评论 #14796631 未加载
therajiv almost 8 years ago
It's not clear to me how malicious actors could exploit this observation to confuse self-driving cars. That said, I don't think this discredits the point of the article; it's important to note how easily deep learning models can be fooled once you understand the math behind them. I just find the example of tricking self-driving cars difficult to relate to / understand.
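For background on "the math behind them": the standard one-line summary is the fast gradient sign perturbation of Goodfellow et al. (2015), a tiny step in input space along the sign of the loss gradient, which is often enough to flip the predicted label. Added here as context, not taken from the thread:

```latex
% FGSM perturbation (Goodfellow et al., 2015):
% J is the training loss, \theta the weights, (x, y) an input/label pair.
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\!\bigl(\nabla_x J(\theta, x, y)\bigr)
```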