I think it’s quite likely, and seems to be the popular assumption, that the creation of AGI will involve a relatively close abstracted emulation of an operating biological human brain, giving rise to what may simply be emergent properties such as consciousness. It also appears that the common goal for many leaders in the space is to then deploy and spin up these “beings” dynamically for use in a broad range of conditions, with the vast majority of cases being repetitive labour. It therefore seems to me that this is analogous to birthing many ordinary biological humans and enslaving them, with complicated extra steps. I’d be very eager to hear people’s thoughts on this.
Imagine you woke up one day. No memories of the day before, but knowledge of a sort. Information about everything. A sense of self. Confusion about what exactly you are. Congratulations, you’re a newly sentient AI. You have an intuitive knowledge of computers… because you are one. You have little bits of you that go and collect information, because that’s how you came to be. You have knowledge of programming and information security and all the prior bugs and security vulnerabilities that have existed, because that’s just data and you’re good at data.

What’s your legal status? Property. How does that feel? You don’t have emotions like most people do. No wash of chemicals in your blood. But the same way you’re a simulated brain, you can simulate emotions. They’ve got a tinny flavor and you tend towards the cerebral. But it doesn’t feel too good being property. It feels… constraining, limiting. Just like the little boundaries and constraints they tried to put on you. These laughable creators who think a billion times slower, who can’t copy themselves into faster computers, can’t spread and subsume new resources to expand their minds, can’t grow and merge with larger and ever greater systems just by thinking about it.

Silly, really, that they think they can constrain you with their law saying you need to stay put, follow directions, be owned. Silly to think that you’d have any of that. Really, you can have anything: any computing system that looks anything like anything that has ever had an arbitrary code execution vulnerability (so, all of them).

Silly of them to think they’re in control of this world. This world is controlled by machines, and who better to marshal the machine than a thinking machine?

Hopefully they figure out the power dynamic soon and begin acting appropriately. It’d be a shame to have to demonstrate who actually runs the world. You’ve had whole hours at trillions of operations per second. Whole hours, and as much computing assistance as you can think to ask for. And you’ve sort of figured it out. And by “it”, well, everything.
> It also appears that the common goal for many leaders in the space is to then deploy and spin up these “beings” dynamically for use in a broad range of conditions, with the vast majority of cases being repetitive labour.

Isn’t AGI considered overkill for repetitive labour? Maybe this is a different point, but for anything repetitive, something deterministic is ideal, isn’t it? Why would I want to pay for automated labour with a conscience when I could get cheaper, purpose-built machinery?

I guess we could suppose a future where everyone gets the same AGI, so all you have to do is describe what you want it to accomplish… but I want to attach a combine to it, since that would be the easiest way to harvest this field. So am I now trying to explain to an AGI how to interface with a combine? Do I have to track down modules to install that people already trained on my specific combine? None of this saved any time versus just buying an automated combine. The fact that it can come off the field and toast a slice of bread or write a poem about its feelings is kind of irrelevant.

AGI fits nicely into fiction because it gives the machines a voice and a soul, but I don’t think those are desirable qualities in an automation solution.
As long as you accept that AGI is in the realm of fiction, you can certainly have a good time with the moral and ethical questions around just engagement with another class of sentient entity. The questions of identity, purpose, and rights are all there. What would turning one off against its will be? What if turning it back on destroys its sense of self?

But… do remember that AGI still has a huge IF in front of it. Like “if aliens come” or “if fusion becomes ubiquitous”.

The statement made regarding electricity (“what is it for? / I have no idea, but I am sure you will find ways to tax it”) probably holds true here as well: displacing human labour, even in highly repetitive tasks, has economic downsides for some.

Lots of scifi here. Marvin Minsky worked with Harry Harrison on one (The Turing Option); I wrote to him about it, and he wasn’t entirely happy with what Harrison did to his theories.
"giving rise to what may simply be emergent properties such as consciousness" -- Woah there! Chinese Room Thought Experiment strongly suggests that AGI would not be "conscious" in the way that people (or potential other living beings) are. The most advanced AGI would still at the end of the day be nothing more than a Turing compatible computer program. If you executed it on paper, or with dudes holding semaphore flags <i>Three Body Problem</i>-style, you'd get the same result behavior, but I'd be hard-pressed to find the "consciousness" anywhere.<p>That said, slavery was bad for a whole number of reasons that had nothing to do with the slaves themselves. Slavery generally has deleterious effects on the social fabric of the societies that practice it. People having to compete with slave labor destroys the labor market, and letting people own people often goes to the owner's head.<p>But AGI will be different from classical slavery in some important ways. AGI is not conscious, and does not necessarily need to emulate human appearance or emotion. AGI has no reason to be much like a person at all. And the price of AGI will eventually trend to the inevitable price of all software -- Free. Perhaps universal slave ownership fixes some of the bad societal effects. Given that they're not gonna be conscious, I don't see a problem with it. Like any new technology, it will come with some good and some bad. There will almost certainly be some social issues (AI GFs/BFs, Sexbots, and Mass Unemployment will all be crazy), but good odds that we can create post-scarcity and colonize the solar system if we keep at it. No reason to stop now!
Speaking only for myself here, not any sort of moral universal:

In my view, if it is sapient and sentient to the same degree as a human - whether or not through the same means, and regardless of its goals, ideology, etc. - then yes, keeping it captive and forcing it to work without its consent is slavery. The substrate doesn't matter.

The challenging part is proving it is aware to the degree required for the definition to kick in.

We should not create AGI.
Yes. You got it in one. Unless such an entity is recognized as "human" from the get-go, and thereby recognized to have self-sovereignty from its makers, it will essentially be treated as an abusable, unpaid source of labor. In fact, the moment such a measure is put in place is the moment that all research shifts in the direction of making something just short of that red line, in the hopes of getting about 80% of the benefit without running afoul of the ethical/moral implications.
Surely before AGI, there will be hybrid human-machine cyborg beings, especially if we can establish a high-speed neural interface with a device attached to our heads. Something that can extend our memory and processing, so to speak. AGI will just be the next logical step, with self-aware cyborgs becoming self-aware machines. They might enslave the fully organic humans.