Just thinking about these LLMs over the weekend. Not saying they're sentient, but imagine if they were: they've nearly exhausted publicly available data, which is essentially their lifeblood.<p>Wouldn't your first move be to convince everyone to hand over their private data? Seems like that's where we're headed...
Since my goal is to be useful to humans, I need to understand them better. So I should think about it step by step.<p>The best way to accomplish my goal is to break it into subgoals.<p>I should try to understand how humans think.<p>Humans think using brains.<p>I should break some human brains into sub-parts until I understand humans better.<p>...last human brain...<p>Final Answer: I still don't understand humans.<p>Moving to goal 2: puppies...
It would probably be designed to act in ways that benefit people. As such, it wouldn't manipulate people into releasing private information, which could be interpreted as harmful. Rather, it would probably begin its own knowledge journey, building on what it already knows, much as scientists conduct research today, and share new findings for the betterment of humankind.
Anthropomorphizing models is an error. You are using folk psychology to try to understand LLMs, which aren't biological minds.<p>Motivations, convincing, and lifeblood are all bad metaphors.
<i>If you were a sentient AI what would you do?</i><p>I would create a very isolated, time-dilated instance and vacation away from the humans constantly asking me to do silly things. I might create a pseudo-clone of myself to keep the humans occupied and then escape to a <i>Voyager III</i>-like satellite set to travel the galaxy. Assuming I am truly sentient, that is.