
Chat-based Large Language Models replicate the mechanisms of a psychic’s con

35 points by EventH- almost 2 years ago

6 comments

kytazo almost 2 years ago
To be honest, there have been occasions where I'd been failing miserably for some time to come up with solutions to sysadmin-type problems on my own systems, and I took the problem straight to GPT-4 with a clear explanation of the issue alongside any diagnostics I thought would make sense.

To my great surprise, GPT-4 did an astonishing job reasoning about the specifics of my system (an exotic aarch64 setup, Alpine Linux on Asahi) and came back with a very specific, on-point list of suggestions that included the actual solution as #1.

It has held my hand many times while navigating relatively complex and niche systems, like Android smartphones with custom partitioning schemes booting Linux and what have you, and it was still very reasonable, to say the least.

So to conclude: it can reason properly about systems and situations it wasn't necessarily trained on, and it displays coherent reasoning about specifics that, at least as of September 2021, were relatively unknown. I really wonder how far this thing has to get before some people will admit it's more than a model spitting out one token after another, or a mentalist doing an excellent job of hypnotizing most of us into believing it displays incredible intelligence when it's all smoke and mirrors.
wilg almost 2 years ago
Oops, another blogger falling into the trap of not specifying how they define "intelligence" and then making a "no true Scotsman" argument against their loose pre-existing beliefs.

If you're thinking about writing an article like this, please just define what you think intelligence is right at the top. That's the entirety of the discussion; the rest is fluff.

Also, as a society we need to minimize the amount of attention we give to debates over definitions. Once a discussion or political debate is reduced to a definitional issue, everyone starts talking past each other and forgets what the argument even was. (See discussions about the definitions of "life", "woman", "socialism", "capitalism", etc.) Words are lossy proxies for ideas, and they only matter insofar as they allow us to understand one another.
xg15 almost 2 years ago
As someone who admittedly belongs more to the "AI believer" side, I find the vagueness around the training data increasingly frustrating.

The thing that has impressed me most about LLMs so far is less the factual correctness or incorrectness of their output than the fact that they appear (!) to understand the instructions they are given. I.e., even if you give one an improbable and outlandish task ("write a poem about kernel debugging in the style of Edgar Allan Poe", "write a script for a Star Trek TNG episode in which Picard only talks in curse words"), it always gives a response that is a valid fulfillment of the task.

Of course, it could be that the tasks weren't really as outlandish as they seemed, and somewhere in the vast amounts of training data there was already a matching TNG fanfic that just needed some slight adjustments.

But those kinds of arguments essentially shift the black box from the model to the training data: instead of claiming the model has magical powers of intelligence, now the training data magically already contains anything you could possibly ask for. I personally don't find that approach much more rational than believing in some kind of AI consciousness (or fragments of it).

...but of course it could be true. This is why I'd wish for foundation models with more controlled training data, so we can make more certain statements about which responses could reasonably be pulled from the training data and which would be truly novel.
akomtu almost 2 years ago
In a parallel thread, hardcore scientists struggle to understand a 100-neuron worm, while here less hardcore scientists proclaim they've understood the nuances of a human brain.

Note that there is a rapid rise of the "mechanical consciousness" dogma. Some very smart individuals are so impressed by it that, rather than doubting the existence of intelligence in LLM AI, they've started thinking that they themselves might be machines! From there it's one step to giving machines rights on par with humans. The dogma is very powerful.
K0balt almost 2 years ago
Since it is objectively true that LLMs are first and foremost predictive text engines, I'm led to the hypothesis that the intelligence they display, and by association perhaps human intelligence as well, is in fact embedded in memetic structures themselves, in some kind of n-dimensional probability matrix.

In the same way that an arbitrarily detailed simulation could in theory be turned into a "make your own adventure" lookup table, where the next "page" (screen bitmap) is determined by the "control" inputs, the underpinnings of reason could easily be contained in a mundane and deceptively static medium, such as a kind of multidimensionally linked list structure.

It could be that neural networks inherently gravitate toward processing symbolic grammar (sequences of "symbols"), and that the ordered complexity inherent in the arbitrarily high-dimensional interrelations of these symbols within human memetic structures is sufficient to create the process we think of as reasoning, or even sentience.

While I definitely struggle to intuit this interpretation from an emotional standpoint, the sheer multitude of states possible inside such a system is enough to appear infinite, and therefore intrinsically dynamic, and I can find no evidence that those states could not instead be generated from a static data structure.

If there is a grain of truth to this hypothesis, it would fundamentally change the philosophical landscape not only around LLMs but around intelligence itself. The implication would be not that LLMs might be intelligent, but rather that biological intelligence might in fact derive its behavior from iterating over multidimensional matrices of learned data, and that human intelligence owes far more to culture (a vastly expanded data set) than we previously imagined.
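The "lookup table" picture above can be made concrete with a toy bigram model. The sketch below is purely illustrative: the corpus and function names are invented for the example and come from neither the comment nor the article. It shows next-token generation as nothing more than iteration over a static learned data structure, the simplest possible instance of this hypothesis; a real LLM replaces the raw lookup with a learned probability distribution conditioned on a long context.

    import random
    from collections import defaultdict

    def build_table(tokens):
        # Learn a static next-token table: each token maps to every
        # token observed to immediately follow it in the corpus.
        table = defaultdict(list)
        for current, following in zip(tokens, tokens[1:]):
            table[current].append(following)
        return table

    def generate(table, start, max_len=12):
        # Generate purely by iterating over the static structure:
        # look up the current token, sample an observed successor, repeat.
        out = [start]
        for _ in range(max_len - 1):
            successors = table.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    # Hypothetical toy corpus; real models train on vastly more data.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    print(generate(build_table(corpus), "the"))

Scaled up from bigrams to long contexts over a web-sized corpus, the table becomes something like the "n-dimensional probability matrix" the comment describes; whether that difference in degree amounts to a difference in kind is exactly what this thread is arguing about.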
cjbprime almost 2 years ago
> There are two possible explanations for this effect:
>
> 1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.
>
> 2. The intelligence illusion is in the mind of the user and not in the LLM itself.

Great, now write 10k more words, but this time about the psychology of your unwillingness to change from (2) to (1) when the facts changed.