
Inverse Mengele – A Thought Experiment

11 points by Ftuuky, almost 3 years ago

7 comments

Barrin92, almost 3 years ago

I don't think you need to go to sci-fi scenarios, because that is already reality for millions of lab animals on the planet, who are probably very sentient. Just a reminder:

> In the complaint, PCRM said Neuralink used a substance known as "BioGlue" that destroyed parts of the monkeys' brains. It described animals exhibiting substantial psychological effects from the experiments, including anxiety, vomiting, poor appetite, hair loss and self-mutilating behavior including removing their own fingers.

> Neuralink called the data cited in the complaint "misleading", saying in a blogpost it "did and continues to meet federally mandated standards". After the UC Davis partnership came to an end, Neuralink moved its work to an in-house facility.

> It responded directly to allegations that more than a dozen monkeys died after Neuralink procedures, stating that some of these were "terminal procedures" – where live test subjects are euthanized "humanely" following surgery.

https://www.theguardian.com/world/2022/feb/15/elon-musk-neuralink-animal-cruelty-allegations

Given that we're torturing the absolute crap out of intelligent, living beings not only 'for science' but for erection pills, hair-loss shampoos and bogus gall bladder medicine, I think these AI debates are needlessly abstract, and we already know the answer to the question of whether we'd prevent harm.
satisfice, almost 3 years ago

There is no point at which artificial life assumes the mantle and status of human life, because if there were such a thing it would immediately lead to the destruction of our society. Imagine if I could press a button and generate 50,000 instances of something that has the same rights as me. This is the equivalent of getting three wishes and making one of them be "50,000 more wishes." If it were allowed, the whole system implodes.

Either my 50,000 are slaves who outvote you and change all the laws because you didn't act fast enough to make your own slave army, or each of my 50,000 uses their rights to generate 50,000 more of their own slaves, ad infinitum.

This experiment is a reductio ad absurdum of the notion that artificial life, essentially a long string of carefully chosen bits, can have rights.

But aren't real humans just information too, you ask? Yes. The reason we insist on human rights is simply that the alternative is endless war.
petercooper, almost 3 years ago

Black Mirror had an episode covering this very issue, called *Hang the DJ*. (Spoiler alert!) Essentially, a giant computer was simulating billions of human-to-human relationships using virtual (but sentient) people to see which would work and which wouldn't, in order to make a dating app more accurate, before culling them off. I thought it was absolutely terrifying, yet most viewers overlooked the ethical issues of mass simulation of life for trivial purposes and thought it was cute(!) (I should stress that in the episode, the virtual people caught up in this are portrayed as being sentient and aware; they're not just a bundle of numbers and formulas like the "people" in a game like SimCity.)
umvi, almost 3 years ago

For religious folks this dilemma is easy: a material body needs a "spirit" to be alive (this is what is mainly responsible for consciousness and the ability to experience qualia), and automatons have no spirit, therefore they are not (and can never be) alive.

For the non-religious I can see the dilemma. Human minds are just biological computers and nothing more, so there's a point at which electronic computers will match or exceed the biological ones.

At any rate, both the religious and the non-religious are forced to assume consciousness is some form of magic or another beyond human control. There's no way to prove that anything or anyone is conscious beyond yourself.
cellis, almost 3 years ago

As someone currently building GPT-* applications for fun and profit, it has dawned on me: essentially it's an ethics problem, and an unsolved one. I already feel really uncomfortable asking puerile questions of any LLM I experiment with, but at the same time, that's just my human projection of what the LLM "feels". Since it has limited memory and, further, doesn't update its nodes in response to my queries, there's not a high likelihood that it has feelings. But once it does...
clord, almost 3 years ago

Isn't this just a version of Roko's basilisk?

https://rationalwiki.org/wiki/Roko%27s_basilisk

Edit: I mean it's similar in that, at some point, you're punishable for knowing and still actively preventing the AI, as opposed to merely not bringing it about. I feel about the same with both scenarios.
schoen, almost 3 years ago

Cf. https://arxiv.org/abs/1410.8233 ("Do Artificial Reinforcement-Learning Agents Matter Morally?").