It's sad that the comments here bashing this haven't understood the regulation, or haven't even read it, and are just slamming it because in their mind EU REGULATION = BAD, AI = GOOD INNOVATION.

EU AI regulation isn't there to stop AI innovation; it's only there to restrict where and when AI can be used in decisions that affect people. For example, you can't deny someone healthcare, a bank account, a rental, unemployment payments, or a job just because "computer says NO" [1].

I don't understand how people can be against this kind of regulation, especially knowing how biased and discriminatory AI can be made to be, while also serving as a convenient scapegoat for poor policies implemented by lazy people in charge: "you see, your honor, it wasn't our policies and implementation that were discriminatory and ruined lives, it was the AI's fault, not ours".

[1] https://www.youtube.com/watch?v=x0YGZPycMEU
Until reading this article I hadn't realized that emotion detection is banned (edit: but confirmed only in workplaces and educational institutions).

I've had it on my list to try integrating Hume.ai (https://www.hume.ai/) into a prototype educational environment I've been playing with. The entirety of their product is emotion detection, so this must be concerning for them.

My own desire is to experiment with something that is entirely complementary to the learner, not coercive: guided by the learner and not providing any external assessment. In this context I feel some ethical confidence in using a wide array of inputs, including emotional assessment. But obviously I see how this could also be misused, or even how what I am experimenting with could be redirected in small ways to break ethical boundaries.

While Hume is a separate stack dedicated to emotional perception, this technology is also embedded elsewhere. GPT's vision capabilities are quite good at interpreting expressions, and if LLMs gain native audio abilities they might be even better at perceiving emotion. I don't think you can really separate audio input from emotional perception, and it's not clear whether those emotional markers are intentional or unintentional cues.
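To illustrate how low the barrier is, here is a minimal sketch of the kind of expression-reading described above, using the OpenAI chat completions API with a vision-capable model. The model name, prompt, and label set are illustrative assumptions, not the commenter's actual prototype or Hume's API; under the AI Act this sort of inference would be exactly what's prohibited in a workplace or classroom setting.

    # Minimal sketch: asking a vision-capable LLM to guess a facial expression.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY env var; the prompt,
    # model name, and label set are illustrative, not a real product integration.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def guess_expression(image_path: str) -> str:
        # Encode a webcam frame (or any image) as base64 for the API.
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()

        response = client.chat.completions.create(
            model="gpt-4o",  # any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "In one word, which best describes this person's "
                             "expression: neutral, happy, confused, frustrated, "
                             "or bored?"},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content.strip()

    # print(guess_expression("learner_webcam_frame.jpg"))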
> everything is now AI, even things that are very clearly not AI

That links to a 2019 article; it would probably be good to get some more recent numbers. I think even a ChatGPT wrapper "uses" AI, although such companies did not develop the underlying model and have no moat.
Marc Andreessen said that industries that stand to gain from AI may be shielded from it by existing licensing and regulation, e.g. education, law, and medicine. This AI Act adds a whole other layer of shielding.
EU tech legislation is comical at this point: a bunch of rules that almost nobody follows, and at best regulators fine FAANG companies a few hours' worth of revenue.
I'm tentatively a fan of the high-risk portion of this legislation, but I'm disappointed that the EU seems to be taking a "training on copyrighted data is a copyright violation" stance. This basically kills open models. Only the biggest companies will be able to strike licensing deals on the scale necessary to produce a model familiar with modern human culture. Any model trained only on public-domain data will have surprising knowledge gaps, like a person who has never read a book or watched a movie, only reviews of them.