Attacking Natural Language Processing Systems with Adversarial Examples

37 points by tequila_shot · over 3 years ago

3 comments

orange3xchicken · over 3 years ago
A new subfield of adversarial ML that considers similar challenges to adversarial NLP: topological attacks on graphs for attacking graph/node classifiers.

Both problems (NLP & graph robustness) are made much more challenging compared to adversarial robustness/attacks on image classifiers due to their combinatorial nature.

For graphs, canonical notions of robustness w.r.t. classes of perturbations defined via lp norms aren't so great (e.g. consider perturbing a barbell graph by removing a bridge edge: a huge topological perturbation, but a tiny lp perturbation!).

I think investigating robustness for graph classifiers should also help robustness for practical NLP systems and vice versa. For example, is there any work that investigates robustness of NLP systems but considers classes of perturbations defined on the space of ASTs?
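The barbell example above can be made concrete with a minimal sketch, assuming networkx and numpy are available (the clique size of 10 is illustrative, not from the comment): deleting the single bridge edge changes only two entries of the adjacency matrix, yet disconnects the graph.

```python
import numpy as np
import networkx as nx

# Barbell graph: two K_10 cliques joined by a single bridge edge.
G = nx.barbell_graph(10, 0)
A = nx.to_numpy_array(G)

# Find the bridge and remove it: a one-edge perturbation.
bridge = next(nx.bridges(G))
H = G.copy()
H.remove_edge(*bridge)
B = nx.to_numpy_array(H)

# lp-norm view: the perturbation is tiny (two symmetric entries flip 1 -> 0).
delta = A - B
print("||A - B||_1 =", np.abs(delta).sum())    # 2.0
print("||A - B||_F =", np.linalg.norm(delta))  # sqrt(2) ~ 1.41

# Topological view: the graph falls apart into two components.
print("components before:", nx.number_connected_components(G))  # 1
print("components after: ", nx.number_connected_components(H))  # 2
```

A one-edge edit is about as small as an lp-bounded adversary gets, yet any classifier relying on global connectivity sees a completely different object, which is the mismatch the comment points at.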
13415 · over 3 years ago
I'm not going to fill out a Captcha just to see your website.
dsign · over 3 years ago
Is that what taxpayer research money is being used for? Oh gods. And I bet they bitch about not being able to get grants.