If the folks at OpenAI value the company's future, they should get on top of this and stop Axon. Axon's representatives claim to have "turned off the creativity" for GPT-4 Turbo. Full quote here:<p><pre><code> > Axon senior principal AI product manager Noah Spitzer-Williams told Forbes that to counter racial or other biases, the company has configured its AI, based on OpenAI’s GPT-4 Turbo model, so it sticks to the facts of what’s being recorded. “The simplest way to think about it is that we have turned off the creativity,” he said. “That dramatically reduces the number of hallucinations and mistakes… Everything that it's produced is just based on that transcript and that transcript alone.”
</code></pre>
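Axon hasn't published its actual configuration, but "turned off the creativity" most plausibly means sampling at temperature 0 (greedy decoding) plus a system prompt restricting the model to the transcript. A minimal sketch of what such a request might look like against the OpenAI chat API — the prompt wording, parameter values, and function name here are my assumptions, not Axon's:

```python
# Hypothetical reconstruction of a "creativity off" request.
# Nothing here is Axon's actual configuration; it only illustrates the
# standard knobs: temperature=0 (greedy decoding) and a restrictive prompt.

def build_report_request(transcript: str) -> dict:
    """Build chat-completion parameters that try to pin the model to the transcript."""
    return {
        "model": "gpt-4-turbo",
        "temperature": 0,  # always pick the highest-probability next token
        "top_p": 1,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Draft a police report using ONLY facts stated in the "
                    "transcript below. If a detail is not in the transcript, "
                    "omit it. Do not infer or embellish."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    }

params = build_report_request("Dispatch: unit 12 responding to a noise complaint...")
# These parameters would then be passed to something like:
#   openai.OpenAI().chat.completions.create(**params)
```

Note that temperature 0 mostly reduces run-to-run variance; it does not stop the model from confidently emitting next-token predictions unsupported by the transcript, which is exactly the failure mode at issue.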
For an entity that was founded to safeguard us against AI risk, it is striking that no one at OpenAI thought about the risk of people being imprisoned over the outputs of their next-token prediction models.<p>Perhaps it is my personal bias rearing its head, but it is striking to me that the entity currently lobbying Congress for AI regulation over "AI risk" — including regulation that would forbid others from training models — apparently had no one capable of making the observation: "if our LLM leads to innocent people being jailed, that will make us look very bad."