
Can LLMs accurately recall the Bible?

223 points · by benkaiser · 5 months ago

31 comments

szvsw · 5 months ago
It seems like LLMs would be a fun way to study/manufacture syncretism, notions of the oracular, etc.; turn up the temperature, and let godhead appear!

If there's some platonic notion of divinity or immanence that all faith is just a downward projection from, it seems like its statistical representation in tokenized embedding vectors is about as close as you could get to understanding it holistically across theological boundaries.

All kidding aside, whether you are looking at Markov-chain n-gram babble or high-temperature LLM inference, the strange things that emerge are a wonderful form of glossolalia, in my opinion, one that speaks to some strange essence embedded in the collective space created by the sum of their corpora. The Delphic oracle is real, and you can subscribe for a low fee of $20/month!

gwd · 5 months ago
I'm learning New Testament Greek on my own*, and sometimes I paste a snippet into Claude Sonnet and ask questions about the language (or occasionally the interpretation); I usually say it's from the New Testament but don't bother with the reference. Probably around half the time, the opening line of the response is, "This verse is <reference>, and...". The reference is almost always accurate.

* Using a system I developed myself, currently in open development: https://www.laleolanguage.com

nickpsecurity · 5 months ago
I tested this back when GPT-4 was new. I found ChatGPT could quote the verses well. If I asked it to summarize something, it would sometimes hallucinate material that had nothing to do with what was in the text. If I prompted it carefully, it could do a proper exegesis of many passages using the historical-grammatical method.

I believe this happens because the verses and verse-specific commentary are abundant in the pre-training sources they used, whereas, if one asks a highly interpretive question, it starts rehashing other patterns in its training data which are un-Biblical. Asking about intelligent design, it got super hostile, trying to beat me into submission to its materialistic worldview in every paragraph.

So, they have their uses. I've often pushed for a large model trained on Project Gutenberg, to have a 100% legal model for research and personal use. A side benefit of such a scheme is that Gutenberg has both Bibles and good commentaries which trainers could repeat for memorization. One could add licensed Christian works on a variety of topics to a derived model to make a Christian assistant AI.

cowmix · 5 months ago
When I test new LLMs (whether SaaS or local), I have them create a fake post to r/AmItheAsshole from the POV of the older brother in the parable of the Prodigal Son.

It's a great, fun test.

danpalmer · 5 months ago
LLMs are bad databases, so for something like the Bible, which is so easily and precisely referenced, why not just... look it up?

This is playing against their strengths. By all means ask them for a summary, some analysis, or a textual comparison, but please, please stop treating LLMs as databases.

asimpleusecase · 5 months ago
This is nice work. The safest approach is using the lookup (which his data shows to be very good) combined with a database of verses. That way textual accuracy is retained, and the LLM can still carry out very useful lookups. The same approach can be used for other texts where accurate rendering is critical. For example, say you built a tool to cite federal regulations in an app. The text is public domain and likely in the training data of large LLMs, but in most use cases hallucinating the text of a federal regulation could expose the user to significant liability. Better to keep that canonical text in a database to ensure accuracy.

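A minimal sketch of the split described above, assuming the model is trusted only to produce a reference while the verse text comes from a local store; the VERSES table and the ask_model_for_reference helper are hypothetical stand-ins, not any particular API:

    # Sketch: the model finds the reference, a trusted store supplies the words.
    VERSES = {
        "Genesis 1:1": "In the beginning God created the heaven and the earth.",
        "John 3:16": "For God so loved the world, that he gave his only begotten Son, "
                     "that whosoever believeth in him should not perish, but have everlasting life.",
    }

    def ask_model_for_reference(question: str) -> str:
        # Placeholder: in a real system this would be an LLM call that returns
        # only a reference such as "John 3:16", never the verse text itself.
        return "John 3:16"

    def quote_verse(question: str) -> str:
        ref = ask_model_for_reference(question)   # the model supplies the pointer...
        text = VERSES.get(ref)                    # ...the database supplies the canonical text
        if text is None:
            return f"Reference {ref!r} not found in the canonical store."
        return f"{ref}: {text}"

    print(quote_verse("Which verse says that God so loved the world?"))
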
ks2048 · 5 months ago
This is interesting. I'm curious about how much (and what) these LLMs memorize verbatim.

Does anyone know of more thorough papers on this topic? For example, this could be tested on every verse in the Bible and on lots of other text that is certainly in the training data: books in Project Gutenberg, Wikipedia articles, etc.

Edit: this (and its references) looks like a good place to start: https://arxiv.org/abs/2407.17817v1

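The test the commenter describes can be run mechanically: prompt the model with the opening of a known passage and score how closely its continuation matches the reference text. A sketch, with complete_text standing in for whatever model API is under test:

    # Memorization probe: prompt with the start of a passage, score the continuation.
    from difflib import SequenceMatcher

    def complete_text(prompt: str) -> str:
        # Placeholder for a call to the LLM being tested.
        return ""

    def memorization_score(passage: str, prompt_words: int = 10) -> float:
        words = passage.split()
        prompt = " ".join(words[:prompt_words])
        expected = " ".join(words[prompt_words:])
        completion = complete_text(f"Continue this passage exactly: {prompt}")
        # 1.0 means a verbatim continuation; lower values indicate paraphrase or hallucination.
        return SequenceMatcher(None, expected, completion.strip()).ratio()

    # Loop this over every verse, or over Gutenberg/Wikipedia samples, and aggregate the scores.
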
jsenn · 5 months ago
Has there been any serious study of exactly how LLMs store and retrieve memorized sequences? There are so many interesting basic questions here.

Does verbatim completion of a Bible passage look different from generation of a novel sequence in interesting ways? How many sequences of this length do they memorize? Do the memorized ones roughly correspond to things humans would find important enough to memorize, or do LLMs memorize just as much SEO garbage as they do Bible passages?

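One concrete way to probe the first question with an open-weights model is to compare the average per-token log-likelihood it assigns to a heavily quoted passage versus a novel sentence. A sketch using Hugging Face transformers, with gpt2 as a small stand-in model:

    # Compare how "expected" a memorized passage is to the model vs. a novel sentence.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def avg_logprob(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels == input_ids, the returned loss is the mean negative log-likelihood.
            loss = model(ids, labels=ids).loss
        return -loss.item()

    print(avg_logprob("In the beginning God created the heaven and the earth."))
    print(avg_logprob("The purple lawnmower recited tax law to a confused heron."))
    # Widely quoted text typically scores much closer to zero than the novel sentence.
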
waynecochran · 5 months ago
I find LLMs good for asking certain kinds of Biblical questions. For example, you can ask one to list the occurrences of some event, or something like "list all the Levitical sacrifices," "what sins required a sin offering in the OT," or "where in the Old Testament is God referred to as 'The Name'?" When asking LLMs to provide actual interpretations, you should know that you are on shaky ground.

asim · 5 months ago
I had similar thoughts about using it for the Quran. I think this highlights that you have to be very specific in your use cases, especially when expecting an exact response on static text that shouldn't change. This is why I'm trying something a bit different. I've generated embeddings for the Quran and use chromem-go for this. So I'll ask the index the question first, based on a similarity search, and then feed the results in as context to an LLM. But in the response I'll still cite the references so I can see what they were. It's not perfect, but it's a first step towards something. I think they call this RAG.

What I'm working on: https://reminder.dev

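The commenter does this in Go with chromem-go; a rough Python sketch of the same retrieve-then-generate flow, where the sample passages, the embedding model choice, and the ask_llm helper are all illustrative stand-ins:

    # Retrieve-then-generate (RAG): similarity search first, then the LLM answers from that context.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    passages = [
        ("Reference 1", "First passage of the indexed text..."),
        ("Reference 2", "Second passage of the indexed text..."),
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    passage_vecs = embedder.encode([text for _, text in passages], normalize_embeddings=True)

    def retrieve(question: str, k: int = 3):
        q = embedder.encode([question], normalize_embeddings=True)[0]
        scores = passage_vecs @ q                 # cosine similarity, since vectors are normalized
        return [passages[i] for i in np.argsort(scores)[::-1][:k]]

    def ask_llm(prompt: str) -> str:
        # Placeholder for whichever LLM the system actually calls.
        return prompt

    def answer(question: str) -> str:
        context = "\n".join(f"{ref}: {text}" for ref, text in retrieve(question))
        prompt = (f"Answer using only these passages and cite them:\n{context}\n\n"
                  f"Question: {question}")
        return ask_llm(prompt)
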
avree · 5 months ago
I wonder if the author knows that "slurpees" is misspelled in his bio on the post.

kittikitti · 5 months ago
I tried something similar with my favorite artist, Ariana Grande. Unfortunately, not even the most advanced AI could beat my knowledge of her lyrical work.

evanjrowley · 5 months ago
Approximately one year ago, there was an HN submission [0] for Biblos [1], an LLM trained on Bible scriptures.

[0] https://news.ycombinator.com/item?id=38040591

[1] http://www.biblos.app/

jccalhoun · 5 months ago
It is fun and frustrating to see what LLMs can and can't do. Last week I was trying to find the name of a movie, so I typed a description of a scene into ChatGPT and said, "I think it was from the late 70s or early 80s, and even though it is set in the USA, I'm pretty sure it is European," and it correctly told me it was The House by the Cemetery.

Then last night I saw a video about the Parker Solar Probe and how, at 350,000 mph, it was the fastest-moving man-made object. So I asked ChatGPT how long at that speed it would take to get to Alpha Centauri, which is 4.37 light years away. It said it would take 59.8 million years. I knew that was way too long, so I had it convert mph to miles per year, and then it was able to give me the correct answer of 6,817 years.

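For anyone who wants to redo that last conversion by hand, a quick sanity-check sketch (the 4.37 light-year distance and the 350,000 mph figure are taken from the comment; the miles-per-light-year constant is standard):

    # Travel time to Alpha Centauri from a speed given in mph.
    MILES_PER_LIGHT_YEAR = 5.879e12          # about 5.88 trillion miles
    HOURS_PER_YEAR = 365.25 * 24

    def years_to_alpha_centauri(speed_mph: float, distance_ly: float = 4.37) -> float:
        return distance_ly * MILES_PER_LIGHT_YEAR / speed_mph / HOURS_PER_YEAR

    print(round(years_to_alpha_centauri(350_000)))   # ~8,400 years at the speed quoted above
    print(round(years_to_alpha_centauri(430_000)))   # ~6,800 years; the 6,817 figure lines up with
                                                     # the probe's actual peak speed of roughly 430,000 mph
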
efitz · 5 months ago
Interesting result, but probably predictable, since you're trying to use the LLM as a database. But I think you're onto something, in that your experiments can provide data to inform (and hopefully dissuade) the creation of applications that similarly try to use LLMs for exact lookups.

I think the experiment of using the LLM to recall described verses, e.g. "what's the verse where Jesus did X", is a much more interesting use. I also think the LLM could be handy as, or to construct, a concordance. But I'd just use a document or database if I wanted to look up specific verses.

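A concordance in the sense mentioned here is just an index from each word to the references in which it occurs, which is easy to build deterministically once the text is in a plain data structure; the two sample verses below are only for illustration:

    # Minimal concordance builder: map each word to the references containing it.
    import re
    from collections import defaultdict

    verses = {
        "Genesis 1:1": "In the beginning God created the heaven and the earth.",
        "John 1:1": "In the beginning was the Word, and the Word was with God, and the Word was God.",
    }

    concordance = defaultdict(set)
    for ref, text in verses.items():
        for word in re.findall(r"[A-Za-z']+", text.lower()):
            concordance[word].add(ref)

    print(sorted(concordance["beginning"]))   # ['Genesis 1:1', 'John 1:1']
    print(sorted(concordance["word"]))        # ['John 1:1']
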
ChuckMcM · 5 months ago
Interesting that it takes an LLM with 405 BILLION parameters to accurately recall text from a document with slightly less than 728 THOUSAND words (nearly six decimal orders of magnitude smaller, but still).

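For reference, the ratio being gestured at works out as follows (pure arithmetic on the two numbers quoted above):

    import math
    params = 405e9   # the 405-billion-parameter count quoted above
    words = 728e3    # approximate word count of the Bible, as quoted above
    print(math.log10(params / words))   # ~5.75, i.e. nearly six decimal orders of magnitude
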
michaelsbradley · 5 months ago
I've been pretty impressed with ChatGPT's promising capabilities as a research assistant/springboard for complex inquiries into the Bible and patristics. Just one example:

    Can you provide short excerpts from works in Latin and Greek written between
    600 and 1300 that demonstrate the evolution over those centuries specifically
    of literary references to Jesus' miracle of the loaves and fishes?

https://chatgpt.com/share/675858d5-e584-8011-a4e9-2c9d2df78325

orionblastar · 5 months ago
There is a robot that reads the Bible: https://futurism.com/religious-robots-scripture-nursing-homes

cbg0 · 5 months ago
While this is catered slightly more towards a technical audience, I think articles on relatable subjects like this one could prove valuable in getting non-technical people to understand the limitations of LLMs, or of what companies are calling "AI" these days. A version of this article focused on real-world examples, showing exactly how the models can make mistakes and present wrong or incomplete information, with less technical detail, would probably serve a non-technical audience better.

pwinkeler · 5 months ago
I love that people are finally comfortable adding the word "artificial" into their analysis of the Bible. About time. Because make no mistake, LLMs are at best artificial intelligence. More likely, they are very good regurgitating machines, telling us what we have been telling ourselves in an even better form, thus goading us along in our fallacies.

graemep · 5 months ago
The Bible is a very tricky thing to recall word for word because of differences between canons and translations. Different wording might be taken from a different translation than the one asked for, rather than being wrong.

gerdesj · 5 months ago
Why?

Why do you put a weird computer model between you and a computer and, errr, your Faith? Do bear in mind that hallucinations might correspond to something demonic (just saying).

I'm a bit of a rubbish Christian, but I know a synoptic gospel when I see it and can quote quite a lot of scripture. I am also an IT consultant.

What exactly is the point of Faith if you start typing questions into a... computational model... and trusting the outputs? Surely you should have a decent handle on the literature: it's just one big physical book these days, the Bible. Two Testaments, and a slack handful of books for each. I'm not sure exactly, but it looks about the same size as The Lord of the Rings.

I've just checked: Bible: ~600k words; LotR: ~480k. So not too far off.

I get that you might want to ask "what if" types of questions about the scriptures, but why would you ask a computer? Faith is not embedded in an Intel Core i7 or an Nvidia A100.

Faith is Faith. ChatGPT is odd.

killermouse0 · 5 months ago
I believe I saw or read somewhere that, in the case of the brain, memories are not so much stored as reconstructed when recalled. If that's true, I feel like we are witnessing something similar with LLMs, as well as with Stable Diffusion-type models. Are there any studies looking into this in the AI world? Also, if anyone knows what I'm referring to (i.e. "reconstructing memories"), I would love some pointers, because I can't remember for the life of me where I heard or read of this idea!

Animats · 5 months ago
It's discouraging that an LLM can accurately recall a book. That is, in a sense, overfitting. The LLM is supposed to be much smaller than the training set, having in some sense abstracted the training inputs.

Did they try this on obscure Bible excerpts, or just ones likely to be well known and quoted elsewhere? Well-known quotes would be reinforced by all the copies.

seanhunter · 5 months ago
By Betteridge's law of headlines, the answer is clearly "no". [1]

But also, LLMs in general build a lossy compression of their training data, so they are not the right tool if you want completely accurate recall.

Will the recall be accurate enough for a particular task? Well, I'm not a religious person, so I have no framework to help decide that question in the context of the Bible. If you want a system to answer scripture questions, I would expect a far better approach than just an LLM would be to build a RAG system and train the RAG embedding and search at the same time you train the model.

[1] https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines

weMadeThat · 5 months ago
They totally can.

I got exiled into an isolated copy of an AI-populated internet once, and they put perfectly accurate Bible quotes into dictionaries!

ddtaylor · 5 months ago
I'm heavily biased here because I personally don't find much value in the Bible. Some of the stories are interesting and some interpretations seem useful, but as a whole I find it arbitrary.

I never tell other people what to believe or how they should do that, in any capacity.

With that said, I find the hallucination component here fascinating. From my perspective, everyone who interprets various religious texts does so differently, and usually that involves varying levels of fabrication, or something that looks a lot like it. I'm speaking about "talking in tongues" and other methods here. I'm not trying to lump all religions into the same bag, but I have seen that many have different ways of "receiving" communication or directives. To me this seems pretty consistent with the colloquial idea of a hallucination.

dudeinjapan · 5 months ago
In the beginning was the Vector, and the Vector was with God, and the Vector was God.

eddiewithzato · 5 months ago
Why, then, does it have a hard time being a judge for MTG rule interactions?

sneak · 5 months ago
> *While they can provide insightful discussions about faith, their tendency to hallucinate responses raises concerns when dealing with scripture*

I experience the exact same problem with human beings.

> *, which we regard as the inspired Word of God.*

QED

MrQuincle · 5 months ago
"I've often found myself uneasy when LLMs (Large Language Models) are asked to quote the Bible. While they can provide insightful discussions about faith, their tendency to hallucinate responses raises concerns when dealing with scripture, which we regard as the inspired Word of God."

Interesting. In my very religious upbringing I wasn't allowed to read fairy tales, the danger being not being able to classify which stories truly happened and which ones didn't.

Might be an interesting variant on the Turing test: can you make the AI believe in your religion? There's probably a sci-fi book written about it.
