I find it hard to believe that anything like this will be feasible or effective beyond a certain level of complexity. It seems like a willful denial of the complexity and ambiguity of natural language, and I am not looking forward to some poor developer trying to reason their way out of a two-hundred-step paradox that was accidentally created.<p>And for a use case simple enough for this system to work (e.g. regurgitating a policy), the LLM seems unnecessary. After all, if your system can perfectly interpret the question and the answer and determine whether the rule set applies, then you can likely just use the rule set to generate the answer rather than wasting resources on a giant language model.
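Concretely, something like this toy sketch is what I have in mind (the rules and names are made up; the hard part is the "perfectly interpret the question" step that would have to produce `parsed_question` in the first place):

```python
# A toy policy rule set: once a question has been mapped onto structured
# fields, the answer can be generated straight from the rules, no LLM needed.
POLICY_RULES = [
    # (condition on the parsed question, canned answer)
    (lambda q: q["topic"] == "refund" and q["days_since_purchase"] <= 30,
     "Refunds are available within 30 days of purchase."),
    (lambda q: q["topic"] == "refund" and q["days_since_purchase"] > 30,
     "Purchases older than 30 days are not eligible for a refund."),
]

def answer(parsed_question: dict) -> str:
    for condition, text in POLICY_RULES:
        if condition(parsed_question):
            return text
    return "No applicable policy found."

print(answer({"topic": "refund", "days_since_purchase": 12}))
# Refunds are available within 30 days of purchase.
```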
This amuses me tremendously. I began programming in the early 1980s and quickly developed an interest in Artificial Intelligence. At the time there was great interest in advancing AI through "Expert Systems" (which would later play a part in the ‘Second AI Winter’).<p>What Amazon appears to have done here is use a transformer-based neural network (a.k.a. an LLM) to translate natural language into symbolic logic rules, which are then used collectively in what could fairly be called an Expert System (a toy sketch of that half is below).<p>Full Circle. Hilarious.<p>For reference to those on the younger side:
The Computer Chronicles (1984) <a href="https://www.youtube.com/watch?v=_S3m0V_ZF_Q" rel="nofollow">https://www.youtube.com/watch?v=_S3m0V_ZF_Q</a>
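To make the full-circle point concrete, this is roughly the kind of thing an early expert-system shell did: forward-chain IF-THEN rules over a working memory of facts (a toy sketch with made-up rules; the new twist is having an LLM write the rules from natural language instead of a knowledge engineer):

```python
# A tiny 1980s-style forward-chaining "expert system": IF-THEN rules are fired
# against a set of known facts until nothing new can be derived.
RULES = [
    # (IF all of these facts hold, THEN conclude this fact)
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(facts: set[str]) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_feathers", "cannot_fly", "swims"}))
# {'has_feathers', 'cannot_fly', 'swims', 'is_bird', 'is_penguin'}
```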
I hadn't heard of Amazon Bedrock Guardrails before, but after reading about it, it seems similar to Nvidia NeMo Guardrails, which I have heard of: <a href="https://docs.nvidia.com/nemo/guardrails/introduction.html" rel="nofollow">https://docs.nvidia.com/nemo/guardrails/introduction.html</a><p>The approaches seem very different, though. I'm curious whether anyone here has used either or both and can share feedback.
This is an interesting approach.<p>By constraining the domain it is trying to solve for, it makes grounding the natural-language question in a knowledge graph tractable.<p>An analogy is type inference in a computer language: it can't solve every problem, but it's very useful much of the time (actually this is a lot more than an analogy, because in some circumstances you can view a knowledge graph as an actual type system).
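A toy illustration of the knowledge-graph-as-type-system view (entities, types, and relations here are all made up): each relation has a domain and a range, and a triple only "type-checks" if its subject and object have the right types.

```python
# Treat a knowledge-graph schema as a type system: relations have a domain and
# a range, and a triple is well-typed only if subject and object match them.
ENTITY_TYPES = {
    "acme_corp": "Company",
    "alice": "Person",
    "widget_v2": "Product",
}

RELATION_SIGNATURES = {
    # relation: (domain type, range type)
    "employs": ("Company", "Person"),
    "manufactures": ("Company", "Product"),
}

def type_checks(subject: str, relation: str, obj: str) -> bool:
    domain, range_ = RELATION_SIGNATURES[relation]
    return ENTITY_TYPES[subject] == domain and ENTITY_TYPES[obj] == range_

print(type_checks("acme_corp", "employs", "alice"))       # True
print(type_checks("alice", "manufactures", "acme_corp"))  # False: ill-typed triple
```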
If this is necessary, LLMs have officially jumped the shark. And I do wonder how much of this "necessary logic" has already been added to ChatGPT and other platforms, where they've offloaded the creation of logic-based heuristics to Mechanical Turk participants and, like the old meme, AI unmasked is a bit of LLM and a tonne of IF-THEN statements.<p>I get the vibe that VC money is being burned on promises of an AGI that may never eventuate and to which there is no clear path.
Post title: Automated reasoning to remove LLM hallucinations<p>---<p>and yet, the paper that went around in March:<p>Paper Link: <a href="https://arxiv.org/pdf/2401.11817" rel="nofollow">https://arxiv.org/pdf/2401.11817</a><p>Paper Title: Hallucination is Inevitable: An Innate Limitation of Large Language Models<p>---<p>Instead of trying to trick a bunch of people into thinking we can somehow ignore the flaws of post-LLM "AI" by also using the still-flawed pre-LLM "AI", why don't we cut the salesman BS and just tell people not to use "AI" for the range of tasks it's not suited for.
How does automated reasoning actually check a response against the set of rules without using ML? Wouldn't it still need a language model to compare the response to the rules?
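For what it's worth, my rough mental model (not what Amazon actually does, just a sketch using the z3-solver package with made-up variables) is that the symbolic check itself needs no ML once structured facts have been pulled out of the response; it's that extraction step that still seems to need a language model:

```python
# Hypothetical check: an upstream (ML) step extracts structured facts from the
# LLM response; the symbolic step then asks a solver whether those facts are
# consistent with the policy rules. Only the extraction needs a language model.
from z3 import Int, Bool, Solver, Implies, And, Not, unsat

employee_tenure_months = Int("employee_tenure_months")
eligible_for_leave = Bool("eligible_for_leave")

# Policy rule: 12+ months of tenure implies leave eligibility.
policy = Implies(employee_tenure_months >= 12, eligible_for_leave)

# Facts (hypothetically) extracted from the model's response.
facts = And(employee_tenure_months == 18, Not(eligible_for_leave))

s = Solver()
s.add(policy, facts)
if s.check() == unsat:
    print("Response contradicts the policy rules")  # this case: a hallucinated denial
else:
    print("Response is consistent with the policy rules")
```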