> "For purposes of this subdivision, 'artificial intelligence' means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."

This definition is overly broad and potentially problematic for several reasons.

The definition could encompass simple rule-based systems or basic statistical models.
Even basic automated decision trees could fall under this definition.
There's no clear distinction between AI and traditional software algorithms.
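To see how little is excluded, consider a deliberately trivial sketch (my own illustration, not anything from the bill): a fixed-rule thermostat controller. It takes input, applies hard-coded rules to decide an output, and that output influences a physical environment, arguably satisfying every clause of the statutory definition despite containing no machine learning at all.

```python
def thermostat(current_temp_f: float, setpoint_f: float = 68.0) -> str:
    """A plain if/else rule: no learning, no model, no 'inference' in
    any ML sense. Yet it 'infers from the input it receives how to
    generate outputs that can influence physical environments'."""
    if current_temp_f < setpoint_f - 1.0:
        return "HEAT_ON"   # output that changes a physical environment
    return "HEAT_OFF"

print(thermostat(60.0))
print(thermostat(72.0))
```

Under a literal reading, this twenty-year-old pattern of embedded control code is "artificial intelligence."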
The bill groups "artificial intelligence, algorithm, or other software tool" together in its requirements.
This makes it unclear whether different rules apply to different types of automation.
Basic automation tools might unexpectedly fall under the AI regulations.

The definition focuses on "autonomy" and "inference" without defining these terms.
It doesn't distinguish between machine learning, deep learning, or simpler automated systems.
The phrase "varies in its level of autonomy" is particularly vague and could apply to almost any software.

This legislation may sound effective and mean well, but unintended consequences, in the form of increased compliance costs and delayed decisions built on a naive definition of AI, seem inevitable.