Great work. In my experience, all these things are frequently true:<p>> the AI label lends itself to inflated claims of functionality that the systems cannot meet<p>> despite the "intelligent" label, many deployed AI systems used by public agencies [and businesses] involve simple models defined by manually crafted heuristics<p>> AI makes claims to generality while modeling behaviour that is determined by highly constrained and context-specific data<p>> [decision-makers] often define AI with respect to how human-like a system is, and concluded that this could lead to deprioritizing issues more grounded in reality<p>> even critics of technology often hype the very technologies that they critique, as a way of inflating the perception of their dangers ["criti-hype"]<p>Go read the whole thing.