AI instructed brainwashing effectively nullifies conspiracy beliefs

16 points by 13years · about 1 year ago

9 comments

_ache_ · about 1 year ago

It needs to explain the method, for reproducibility. Nothing is clear in the blog post, but everything is well explained in the research paper. Kudos for that.

It seems solid, but I would like to see the same study with a human conversation instead of an AI one.
breadbreadbread · about 1 year ago

The term "brainwashing" is so misleading and fear-baity here. Humans are gullible and can be convinced of falsehoods. You have just as much, if not more, "brainwashing" power than any chatbot. The only difference is that a chatbot can reach more people faster, but we can also inoculate ourselves against the effectiveness of AI with things like "media literacy" and "skepticism". If you know that an AI can be programmed to promote falsehoods (or otherwise fed falsehoods), you can perhaps double-check the sources the AI uses to support its claims. It's not brain control, it's just media, baby.
boxed · about 1 year ago

The article seems a bit all over the place when it comes to moral clarity and reality alignment. I do agree with its premise, though, that widespread influence over basic beliefs shouldn't be exercised by AI systems that a few people control. It's a bit naive to think this isn't done already, and also done manually by troll farms, etc.

We do need better schools so that our children understand the deep interconnectedness of reality. That is the only defense against conspiracy theorists, MLMs, cults, etc.
wildrhythms · about 1 year ago

The actual research is here: https://osf.io/preprints/psyarxiv/xcwdn

I personally found this blog article quite insulting to my intelligence, peppering in pseudo-intellectual clichés like 'paradox of tyranny'.
keybored · about 1 year ago

It's astonishing how ideological authoritarianism can be smuggled in just by saying "conspiracy theory". So, agreed with the article.

> The paper does mention that these capabilities could also be used for nefarious purposes, such as convincing people to believe in conspiracies. However, it continues to finish the thought with the idea that we simply need to ensure that such AI is only used "responsibly."

> Thus, we have arrived at the same fallacy of all authoritarian power beloved by those morally superior to the rest of us. They make all the rules. They will define what is responsible. They will define what is a conspiracy theory. They will define who is targeted by such methods.

See also the mythological "benevolent dictator": dictatorship is good because it would be efficient under a hypothetical benevolent dictator. Never mind how you would find one and avoid the malevolent or incompetent ones.
rurban · about 1 year ago

They already have the press (plus TV, radio, film, and social media censorship) to do exactly that. AIs would be cheaper, but how do you get that into people, other than through TV, radio, and print? Chomsky would have a field day.

https://en.wikipedia.org/wiki/Manufacturing_Consent
im3w1l · about 1 year ago

I've heard many people say "you cannot reason a person out of a position he did not reason himself into in the first place," with the implication being that conspiracy theorists cannot be engaged in good-faith debate and must be manipulated at best and coerced at worst.

If simply discussing matters calmly with an AI can lessen belief in conspiracy theories, then it seems to disprove that notion, which should come as a relief to believers and non-believers alike.
friend_and_foe · about 1 year ago

Is it unethical? Maybe. Gaslighting people because they believe the Wrong Thing™, and the end justifies the means.

People have conversations with others attempting to convince them of things. Is that unethical? Often, people do this from a disingenuous perspective. Is that more or less unethical than an LLM doing it?

What about scale? You can deploy LLMs to do this without feeding them. But we already have TV, one person can control hundreds of bots on the internet, and "the algorithm" in search engines and social feeds; it would appear to me that propaganda is already industrialized to the point of diminishing returns, and taking Howard Beale out of the loop is not going to increase the efficiency by that much.

Ethical or not, people will always try to increase their power by convincing others to behave in ways beneficial to themselves. It's the world we live in: if you don't want to be a tool you must remain vigilant, and the tools being used don't change that at all.
Eddy_Viscosity2 · about 1 year ago

Wouldn't the opposite also be true: that AI-instructed brainwashing can effectively CREATE conspiracy beliefs?