科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


Using symbolic logic to mitigate nondeterminism and hallucinations of LLMs

1 point | by dhoelzgen | over 1 year ago

1 comment

dhoelzgen · over 1 year ago
For a medical & caretaking project, I experimented with combining symbolic logic with LLMs to mitigate their tendency toward nondeterministic behavior and hallucinations. A lot of work remains, but it's a promising approach for situations requiring higher reliability.

The core idea is to use LLMs to extract logic predicates from human input. These are then used to reliably derive additional information via answer-set programming based on expert knowledge and rules. In addition, inserting only known facts back into the prompt removes the need to provide the conversation history, mitigating hallucinations during longer conversations.
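The pipeline the commenter describes can be sketched roughly as follows. This is a minimal, illustrative approximation: the LLM extraction step is stubbed with a keyword lookup, and a tiny forward-chaining rule engine stands in for a real answer-set solver (in practice one would use a system like clingo). All function names, predicates, and rules here are hypothetical examples, not from the original project.

```python
# Sketch of the described loop: extract predicates from free text,
# derive new facts from expert rules, and feed only verified facts
# back into the next prompt instead of the raw conversation history.

def extract_predicates(utterance):
    """Stub for the LLM extraction step: map phrases to logic atoms.
    A real system would prompt an LLM to emit these predicates."""
    mapping = {
        "dizzy": ("symptom", "dizziness"),
        "blood thinner": ("medication", "anticoagulant"),
    }
    return {atom for phrase, atom in mapping.items()
            if phrase in utterance.lower()}

# Hypothetical expert rules as (premises, conclusion) pairs:
# a rule fires when all of its premises are among the known facts.
RULES = [
    ({("symptom", "dizziness"), ("medication", "anticoagulant")},
     ("alert", "check_blood_pressure")),
]

def derive(facts):
    """Forward-chain to a fixpoint (a stand-in for ASP solving)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def build_prompt(facts):
    """Insert only the verified facts back into the prompt."""
    lines = sorted(f"{pred}({arg})" for pred, arg in facts)
    return "Known facts:\n" + "\n".join(lines)

facts = derive(extract_predicates(
    "Patient says they feel dizzy and take a blood thinner."))
print(build_prompt(facts))
```

Because the prompt is rebuilt from the fact base on every turn, the model never sees its own earlier free-form output, which is what the comment credits for reducing hallucinations in longer conversations.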