
Police are using ChatGPT-4-powered body cams that turn audio into reports

15 points by giuliomagnifico about 1 year ago

7 comments

danpalmer about 1 year ago
The main thing that keeps me from using AI for things like document summaries is that it regularly trips up on the nuances in logic: mistaking positives for negatives when multiple points are being discussed, misattributing motivation, and so on. I can't think of a much worse use for it than police reports.

You could say that because the bodycam footage still exists it won't be a problem, but the point of these reports is for decisions to be made on them without reviewing the full footage. People are going to get dragged through court based on this and, if they're lucky, come out with thousands spent in legal bills just to get someone to review the bodycam footage where they said the opposite of what the report claims.
Comment #40144161 not loaded
areoform about 1 year ago
If the folks at OpenAI value the entity's future, then they should get on top of this and stop it. Axon's representatives are claiming that they have "turned off the creativity" for GPT-4 Turbo. Full quote here:

> Axon senior principal AI product manager Noah Spitzer-Williams told Forbes that to counter racial or other biases, the company has configured its AI, based on OpenAI's GPT-4 Turbo model, so it sticks to the facts of what's being recorded. "The simplest way to think about it is that we have turned off the creativity," he said. "That dramatically reduces the number of hallucinations and mistakes… Everything that it's produced is just based on that transcript and that transcript alone."

For an entity that was founded to safeguard us against AI risk, it is striking that no one at OpenAI thought about the risk of people being imprisoned over the outputs of their next-token prediction models.

Perhaps it is my personal bias rearing its head, but it is striking to me that the entity currently lobbying Congress for AI regulation over "AI risk" (including regulation that would forbid others from training models) apparently had no one capable of making the observation: "if our LLM leads to innocent people being jailed, that will make us look very bad."
Comment #40144332 not loaded
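Spitzer-Williams's "turned off the creativity" claim most plausibly refers to deterministic sampling plus a transcript-only prompt. Below is a minimal sketch of what such a configuration might look like against the standard OpenAI chat-completions API; the model name, system-prompt wording, and placeholder transcript are illustrative assumptions, not Axon's actual setup.

```python
# Hypothetical sketch of a "creativity turned off", transcript-only report draft.
# The model name, system prompt, and placeholder transcript are assumptions for
# illustration; this is not Axon's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: the full body-cam audio transcript would go here.
transcript = "...body-cam audio transcript text..."

response = client.chat.completions.create(
    model="gpt-4-turbo",
    temperature=0,  # deterministic sampling: the likely meaning of "turned off the creativity"
    messages=[
        {
            "role": "system",
            "content": (
                "Draft an incident report using only facts explicitly stated in the "
                "transcript provided. Do not infer motive, add details, or speculate."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

Note that a temperature of 0 only removes sampling randomness; it does not guarantee the output stays faithful to the transcript, which is exactly the failure mode the surrounding comments worry about.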
bpiche about 1 year ago
I interviewed with these guys. At the time I could not fathom why you would want to pair LLMs with body cams and thought it was a case of hype and sticking LLMs in everything. What's next, toasters? Guess this is a lesson to think a little more outside the box. But the certainty that hallucinations will happen, and the potential to use models/prompts that prefer a certain version of the truth, makes me very worried.

Also, they make Tasers. What could go wrong?
flandish about 1 year ago
Axon has a history of doing (imho) shady crap like claiming Tasers won't kill people or pumping up "excited delirium" claims.
underseacables about 1 year ago
Let's hope this never gets accepted in the courts, because we all know AI never makes mistakes.
Comment #40144171 not loaded
flandish about 1 year ago
Great. Police reports can be leaked through GPT leaks without needing FOIA hassles now. ;)
gaogao about 1 year ago
My dad's a doctor, and the amount of paperwork you need to write these days has gone up a whole bunch, even higher for him than the 40% of time mentioned in the article. There's a big temptation to fudge the reports too, since it's mostly checkmarking to CYA for lawsuits. Thus, I think this is actually one area where, because the people writing the report are somewhat misaligned with the purpose of the report, ML-powered report writing is a better solution.
Comment #40144133 not loaded