科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声 (Tech Echo). All rights reserved.

First Compliance Deadline of EU's AI Act Has Arrived

4 points | by azernik | 3 months ago

1 comment

blackeyeblitzar | 3 months ago
From the article:

> Under the bloc’s approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and (4) unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.

> Some of the unacceptable activities include:

> - AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
> - AI that manipulates a person’s decisions subliminally or deceptively.
> - AI that exploits vulnerabilities like age, disability, or socioeconomic status.
> - AI that attempts to predict people committing crimes based on their appearance.
> - AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
> - AI that collects “real time” biometric data in public places for the purposes of law enforcement.
> - AI that tries to infer people’s emotions at work or school.
> - AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.

I’m glad they marked things like email spam filters as low risk. But the list of unacceptable risks is still very large. For example, even product recommendations on an online store could be considered “manipulating a person’s decisions subliminally” — even a non-AI ad is doing just that.

I also don’t think it is reasonable to prevent people from inferring or predicting things based on learned factors. It’s one thing if an AI is directed to be discriminatory. But if it learns that one particular trait or another is predictive of something else, should we really be banning that? I can see that having bad consequences. For example, younger males are more likely to drive recklessly - if they are scored as higher risk, is that really a bad thing?

Stepping back, I think this is part of a long trend of the EU over-regulating itself into stagnation. I think this political culture is going to severely hurt them in the long term. Anyone who wants to build a great business will just go do it elsewhere.