For anyone who is jumping to the comments to complain about how more rules from the EU are going to make innovation difficult, I highly recommend reading the summary presentation: <a href="https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentation-CEPS-Webinar-L.-Sioli-23.4.21.pdf" rel="nofollow">https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentati...</a><p>Basically, as I understand it, it divides AI systems (in the broadest Machine Learning sense) into risk categories: unacceptable risk (prohibited), high risk, medium/other risk, and low risk.<p>Applications in the high risk category include medical devices, law enforcement, recruiting/employment and others. AI systems in this category will be subject to the requirements mentioned by most people here (oversight, clean and correct training data, etc.).<p>Medium risk applications seem to revolve around the risk of tricking people, for example via chatbots, deepfakes, etc. In this case providers are required to “notify” people that they are interacting with an AI or that the content was AI-generated. How this can be enforced in practice remains to be seen.<p>And the low risk category is basically everything else, from marketing applications to ChatGPT (as I understand it). Applications in this category would have no mandatory obligations.<p>If you ask me, that’s quite a sensible approach.
There was a great article on this recently that cuts through the EU’s window dressing:<p>EU AI Act To Target US Open Source Software<p><a href="https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/" rel="nofollow">https://technomancers.ai/eu-ai-act-to-target-us-open-source-...</a><p>TL;DR: it imposes ridiculous constraints on GitHub and open source developers.
I wonder if all those who called ChatGPT and the like “AI” (when it’s nothing of the sort) regret doing so now. AI is a scary word for certain groups, while machine learning (which is what this is) isn’t. Now you have a bunch of Luddites with pitchforks looking for a witch to burn.<p>What this act will do is severely stunt the European economy compared to the rest of the world, which will be racing ahead (as long as countries like the US don’t pass similar laws). By the time Europe realizes its mistake, it will be too late to catch up.
Article 10 requires that<p>> all training data be "relevant, representative, free of errors and complete."<p>This is especially interesting to me with regard to something like ChatGPT. As we know, ChatGPT occasionally gives factually incorrect information. Does this mean that, in its current form, it would be illegal in the EU? We know that Google is currently blocking access to Bard in the EU. Will ChatGPT be forced to follow suit?<p>ChatGPT is great and I love it. It would be a shame if I'm not even allowed to use it _at my own risk_ just because it might be wrong about some things. That may be an oversimplification, but it sounds like the EU is allowing Perfect to be the enemy of Good.
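As a thought experiment, here is a rough sketch of the only sense in which "free of errors and complete" can even be checked mechanically, on a tiny tabular dataset (the columns and checks are my own invention, not anything from the Act):

    # A hedged sketch: naive "completeness" and duplicate checks on a toy
    # tabular dataset. Column names and checks are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "age": [34, None, 29],                 # missing value -> not "complete"
        "label": ["approve", "deny", "approve"],
    })

    print("complete:", not df.isna().any().any())       # False for this data
    print("no duplicates:", not df.duplicated().any())  # True for this data

There is no comparable mechanical test for "free of errors" on a web-scale text corpus like the one behind ChatGPT, which is why the requirement reads as hard to satisfy literally.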
Why is everyone in a hurry to regulate AI?<p>I can't think of one example where someone was harmed by an LLM.<p>Besides, "AI" is largely a marketing term; most software has had "AI" elements for a while now. This thing has "unintended consequences" written all over it.
Note that this is not an official EU website; it is run by a non-profit organization, <a href="https://futureoflife.org/" rel="nofollow">https://futureoflife.org/</a>
At first glance, this looks like official information, but it's in fact a campaign site from <a href="https://futureoflife.org" rel="nofollow">https://futureoflife.org</a> and should be clearly marked as such.
The act is dated 21 April 2021, so it's more than two years old.<p>1.) Being part of the team working on this has to be among the most exciting legal jobs in Brussels.<p>2.) I did not have time to read the entire act, and I'm not even sure I'd understand it, but I'd be curious how much of it is still relevant given the leaps in both the tech and especially its popularity over the last two years.
The Annexes ( <a href="https://artificialintelligenceact.eu/annexes/" rel="nofollow">https://artificialintelligenceact.eu/annexes/</a> ) contain a definition of "High-risk AI systems" at Annex III.<p>--<p>Incidentally, for the many who have claimed on these pages that we do not "have a definition of AI" (actually we have several), this legislative text provides one:<p><i>software with the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, employing techniques including (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods</i>
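To get a feel for how broad clause (a) is, here is a minimal sketch (toy data, hypothetical hiring scenario, my invention): a few lines of scikit-learn already yield "predictions ... which influence the environment" the moment anyone acts on the output:

    # A minimal supervised-learning model; under clause (a) of the
    # definition, even this would plausibly count as an "AI system".
    # The data and scenario are invented for illustration.
    from sklearn.linear_model import LogisticRegression

    # Toy features: [years_experience, num_certifications] -> hired (1) or not (0)
    X = [[1, 0], [3, 1], [5, 2], [7, 3]]
    y = [0, 0, 1, 1]

    model = LogisticRegression().fit(X, y)
    print(model.predict([[4, 1]]))  # a prediction someone might act on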
I was trying to understand more about the AIA today after it was mentioned a few times in the oversight committee hearing. Found this talk, and it's pretty good; I thought it was going to be lame content marketing, but the guest is a real lawyer who seems to have a real understanding of AI and what is going on:<p><a href="https://www.youtube.com/watch?v=yoIC5EPPfn4">https://www.youtube.com/watch?v=yoIC5EPPfn4</a><p>(I feel like all my HN posts reveal the embarrassing amount of YouTube I watch)
This is a very misleading website. It has an ".eu" domain name but has nothing to do with the EU; it's actually from the Future of Life Institute.<p>This is bad because those people (the FLI) have weird political motivations that do not automatically align with the EU human rights principles the new AI regulation tries to protect. Whatever interpretation the FLI places on the EU act should be treated with suspicion for that reason.
Interesting, from the site:<p>“applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirement”<p>This is a continuation of EU logic first seen in the GDPR around what that law calls “automated decision making”.<p>All I can say is that the GDPR hasn’t had a good effect, partly because it’s not well written from a technical perspective.<p>The GDPR demands explainable and auditable automation. Non-deterministic AI systems make this difficult or impossible with current tech. So to be “compliant”, vendors dumb down their software to use explainable methods, and often inferior hiring decisions are made because users have to operate on untenable amounts of data using basic sorts. The Talent Acquisition team ends up structuring the hiring process around “disqualifiers” such as resume gaps, education requirements, pre-interview qualification tests, etc.<p>It reminds me of an old recruiting joke:<p>“Recruiter: You said you only wanted to interview the 5 best applicants, but we are getting so many applicants we don’t know where to start.<p>Hiring Manager: OK, first, I only hire lucky people. Print out all of the resumes and throw away every other one.”<p>Interestingly, if this process is done randomly without reviewing the resume, it’s considered legal.
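To make the “disqualifier” point concrete, here is a hedged sketch of that kind of screen (field names and thresholds are hypothetical); note that it is trivially explainable and auditable, which is precisely why vendors fall back on it:

    # A hypothetical "disqualifier" screen of the kind described above.
    # Every rejection comes with a human-readable reason, so it is
    # explainable and auditable -- and also crude.
    def passes_screen(candidate: dict) -> tuple[bool, str]:
        if candidate["resume_gap_months"] > 12:
            return False, "resume gap over 12 months"
        if not candidate["has_degree"]:
            return False, "no degree listed"
        if candidate["test_score"] < 70:
            return False, "pre-interview test score below 70"
        return True, "passed all disqualifiers"

    applicants = [
        {"name": "A", "resume_gap_months": 3,  "has_degree": True, "test_score": 85},
        {"name": "B", "resume_gap_months": 18, "has_degree": True, "test_score": 90},
    ]
    for a in applicants:
        ok, reason = passes_screen(a)
        print(a["name"], ok, reason)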
“Unacceptable risk”, “high risk”, “force for good”. Terms as vague and broad as an interstellar gas cloud. It makes me wonder if this is a strawman argument against regulation.
The US Congress may also be trying to do something: <a href="https://finance.yahoo.com/news/congress-took-on-ai-regulation--and-raised-a-lot-more-questions-than-answers-185553310.html" rel="nofollow">https://finance.yahoo.com/news/congress-took-on-ai-regulatio...</a>