Insurance tech guy here. This is not the revolutionary new type of insurance that it might look like at first glance. It's an adaptation of already-commonplace insurance products that are limited in their market size. If you're curious about this topic, I've written about it at length: <a href="https://loeber.substack.com/p/24-insurance-for-ai-easier-said-than" rel="nofollow">https://loeber.substack.com/p/24-insurance-for-ai-easier-sai...</a>
Man I wish I could get insurance like that. "Accountability insurance"<p>You were responsible for something, say, child care, and you just decided to go for beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.
At best, this screams, “you’re doing it wrong.”<p>We know this stuff isn’t ready, is easily hacked, is undesirable to consumers… and will fail. Somehow, it’s still more efficient to cover losses and degrade service than to approach the problem differently.
No mercy. Had to deal with one when looking for apartments and it made up whatever it thought I wanted to hear. Good thing they still had humans around in person when I went for a tour.
Can consumers get AI insurance that covers eating a pizza with glue on it, or eating a rock?<p><a href="https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-pizza-viral-blunders/" rel="nofollow">https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...</a><p>How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?
Oooh, the foundation-model developers could offer to take first losses up to X if the developers building on their models follow a rule set. This would reduce premiums and thus increase uptake among users of their models.
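For a rough sense of why a first-loss layer would cut premiums, here's a toy expected-loss calculation. The loss numbers, the retention amount, and the loading factor are all made up for illustration, and the pricing formula is the naive one (premium = expected insured loss times a load), not anything from the article:

    # Toy premium math for a first-loss layer absorbed by the model developer.
    # Assumes naive expected-loss pricing: premium = expected insured loss * loading.
    losses = [0, 0, 500, 2_000, 10_000, 50_000]  # hypothetical per-claim losses
    first_loss = 5_000                            # amount the model developer eats per claim
    loading = 1.3                                 # insurer expense/profit load

    def premium(claims, retention, load):
        # The insurer only pays the portion of each claim above the retention.
        expected = sum(max(c - retention, 0) for c in claims) / len(claims)
        return expected * load

    print(premium(losses, 0, loading))           # no first-loss layer
    print(premium(losses, first_loss, loading))  # developer takes first losses up to X

With these made-up numbers the insurer's expected payout per claim drops from about 10,400 to about 8,300, so the loaded premium falls accordingly; the real effect obviously depends on the actual loss distribution.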
Reading the actual article, this seems odd. It only covers cases where the models degrade, but there hasn't been evidence of an LLM pinned to a checkpoint degrading yet.