The headline is misleading. The bill allows AI and algorithms to be used, as long as they don't supplant a licensed medical professional's decision (K.1.D) or violate civil rights, along with a few other conditions; the use of AI is not outright prohibited, as the headline could be read to suggest.

Section K.1 of SB 1120:

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1120

(The old title was something like "New California law prohibits using AI as basis to deny health insurance claims.")
None of this would matter if there were real competition in the insurance market, instead of people having to change jobs to change insurance, and not getting a direct say even then.

As it is, this is a dumb law, and prejudiced against decisions made in silico rather than in vivo.
>"For purposes of this subdivision, 'artificial intelligence' means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."<p>This definition is overly broad and potentially problematic for multiple reasons:<p>The definition could encompass simple rule-based systems or basic statistical models.
Even basic automated decision trees could potentially fall under this definition.
There's no clear distinction between AI and traditional software algorithms.
The bill groups "artificial intelligence, algorithm, or other software tool" together in its requirements.
This makes it unclear whether different rules apply to different types of automation.
Basic automation tools might unexpectedly fall under the AI regulations. The definition focuses on "autonomy" and "inference" without defining these terms.
It doesn't distinguish between machine learning, deep learning, or simpler automated systems.
The phrase "varies in its level of autonomy" is particularly vague and could apply to almost any software.<p>This is legislation that may sound effective and mean well, but the unintended consequences of increased costs and delayed decisions based upon a naive definition of AI seems inevitable.
I'm really surprised this isn't set at the federal level yet.

I worked with background checks in the US, and for a while the rule was that every rejection had to go through a real person.
Don't tell people how to do stuff. Tell them what outcomes they are responsible for; they will figure it out from there.

If they reject a claim that was valid, that should open them up to liability for the results of rejecting it.
How about we just ban "AI" from deciding anything? It's kind of dumb that people put so much faith in something that spits out wrong answers nearly every time.
Even if this headline were true, they'd find another rationale to deny claims if that's their strategy.

If AI didn't let them deny claims, they'd avoid using it. See?