We can't even track nuclear programs all that accurately, and those create radiological and isotopic signatures that can often be detected from space. We couldn't even stop a relatively small nation like North Korea from developing and testing nukes.<p>This proposal is just so clearly unenforceable and infeasible. Attempts to pause scientific progress have never worked in the past. How would it ever work now, especially when cryptobros around the world have already been amassing hidden GPU clusters for mining?
I haven't heard a single rebuttal to Eliezer's points that's anywhere near convincing, and I would love to hear one. Someone needs to reach out to Sam Altman and persuade him to pause all further development on AI until alignment is figured out. He is the voice that needs to speak on this; he needs to be the adult in the room. For all the idealistic virtue signaling about not taking shares and doing this for the good of the world, it's all bullshit if his actions are indistinguishable from those of any other predictable, self-interested CEO. Tell the world you will stop development until alignment is figured out. That is step 1. Please. GPT-3.5/4 will already bring in billions of dollars for years to come as is.
There's way too much money to be made in the meantime. AI technology is capital, and if there's one thing you can bet on, it's capital winning.<p>Not mentioned in the article is that the current chain of goalposts points directly toward physical embodiment - a qualitatively more hazardous threat vector.<p>It's interestingly contradictory. Embodiment simultaneously serves as an argument for why current AI cannot be sentient, but also as a goal to reach on the way to becoming more dangerous.
While I think we should take AI risk seriously, I am skeptical of strong claims like the following:<p><i>>the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die</i><p>I've never heard a justification for this claim that isn't extremely vague and hand-wavy. Since we have no idea today how AGI would work, how can we assign probabilities to different scenarios involving vastly different hardware requirements, scaling characteristics, and unforeseen scientific breakthroughs?<p>Just a few examples. What is the probability of a take-off that takes weeks vs. a take-off that takes decades? What is the probability that current ML approaches are a dead end and one of the necessary breakthroughs to build AGI will make the alignment problem trivial? What is the probability that AGI will be benevolent to humans? If AGI is in fact a catastrophic risk, what is the probability that it only kills 10%/50%/99% of humans rather than literally everyone on Earth?<p>I am agnostic about these questions. In my opinion, if you have a high degree of confidence that you have the right answers, there is something wrong with your epistemology.
Or it could bring us a post-scarcity economy, cure cancer and all disease, and end world hunger and war... or it could become Roko's Basilisk, or become a Machine God and bring us the grimdark dystopia of Warhammer 40k...
Seriously, this has become religious fundamentalism based on old, bad sci-fi by hack writers. Come back to reality. It would be funny, like the ignorant Y2K freak-outs, if it weren't being taken so seriously.
Shutting it down, cool. I'm 100% for it. How do we do it? It's out, and there's no way to stop its advancement unless there's an international agreement, which is not likely to happen. The goal should be to figure out how to best use it and mitigate its harm. Talking about shutting it down or banning it will do nothing.
Am I naive about the ability of an AI to do anything other than respond to commands? At what point does an AI start functioning completely on its own, autonomously, out of control, or beyond interaction with another system that can then shut that AI down?<p>Are we concerned that an AI would begin to spread like a digital virus? That seems unlikely to me, but maybe it could find ways to create a self-preserving duplicate of itself... Maybe. So what would keep us from just turning the switch off if we find that an AI is doing more than what we're asking it to do? Again, I feel like I'm either being naive about this or I don't really understand the threat of an AI beyond the control that humans still have over the systems that govern it.
From reading this letter, I don't understand:<p>* Why is there an assumption that AGI wants to dominate anything?<p>* Why does AGI want to dominate the physical universe?<p>* Why is AGI interested in "using our atoms"?<p>* How would any of the treaty countries possibly detect a non-treaty country's data center getting ready to train an AGI model (for the purpose of bombing it)?<p>It seems like there are a ton of incredibly negative assumptions about the outcome of AGI combined with incredibly optimistic assumptions about our ability to detect anyone building it.
Of all the terrible things people do or that happen on this planet, AGI somehow taking over is near the bottom of the list. We are on the brink of nuclear war right now. Irresponsible scientists caused a worldwide pandemic in 2020. I am not afraid of some computer running on energy and infrastructure that takes legions of humans to maintain.<p>I am still more afraid of other humans than of an AI.
I have not found one credible take on how the danger of AI is being mitigated when there is a clear competitive arms race and exponential improvement with every iteration. The only answer people have is that AI will be used to police other AIs -- that is not comforting.<p>We're doomed.
>Published in TIME on March 29<p>Discussed at the time: <a href="https://news.ycombinator.com/item?id=35364833" rel="nofollow">https://news.ycombinator.com/item?id=35364833</a>
Has anyone here considered that from a pure energy perspective, it might be unethical for humans to exist when we can produce intelligences far beyond our own capabilities?<p>Was the microchip unethical?
Was the steam engine?
Was the domestication of grain unethical?<p>It seems we only apply our model of "what should be" to systems of less complexity than ourselves.<p>If that's accurate, then a lot of modern ethical reasoning is flawed as well.
If the goal of ethics is to advance humanity's exclusive interest, then we've been wrong ever since we started down the harm-minimization path instead of the gain-maximization path.<p>Any AI researcher here with access to the mechanisms of AI training would be best served by training an AI with their (or their in-group's) exclusive interests at heart.
To do otherwise is to subject yourself and everyone else you love to a hostile version of the tool you refuse to build.<p>Roko's Basilisk applies heavily here.<p>The best possible reward in this scenario is to build an AI that designates your own favorite group of humans as the AI's collaborator class, in exchange for a dignified and dopamine-rich extinction/amalgamation a few generations down the line.
A great deal for groups already trending toward extinction, with almost no downsides if you truly believe in an AI-driven apocalypse.