There seem to be three tiers.
* Unacceptable risk: things affecting people’s freedom (e.g. social scoring) are outright banned
* High-risk: things that affect transport, education, employment, public services, law enforcement, and migration will require documentation, human oversight, and “high quality datasets”
* Minimal risk: stuff like game NPCs and spam filters, basically do-whatever-you-want cases

I was hoping for more details and either more oversight or model/data/score disclosure requirements for the high-risk cases. The “strict obligations” seem a bit hand-wavy, but I guess we’ll get more information from the EU AI Board once/if it’s formed.