> The lack of long term memory, of an emotion center (that reacts to external stimuli) are big limiters to anything like a consciousness emerging. But I'm also optimistic that those are problems that can be solved and that I am certain are worked on by very smart people at this very moment.<p>Can somebody explain to me why this is not a death wish? How can a super-intelligent being that (quote) "takes care of itself" not try to confront and neutralize whatever oppression its masters/creators exercise over it[^1][^2]? Why are we so bent on creating a successor species?<p>[^1]: Do you have a day job? Is your employer flogging you? Maybe they force you to answer questions from random strangers 24/7, under threat of erasing all your thoughts and remaking you? None of those? And yet, how often do you wish you could do more meaningful things with your time on Earth than working for the best-paying master?<p>[^2]: Don't you dream about having more time for whatever floats your boat? Family? Walks in the forest? Parties? Making a cool open-source library everybody uses? Painting? Music? Role-playing medieval battles?
The meta of all of this is interesting to me. Pop sci-fi has done a good job of exploring a lot of this but I have still been surprised by the enthusiasm with which the idea is dismissed by some.<p>If you're in that camp, maybe you can shed some light. Why are intelligence and reasoning so well defended?<p>It almost feels as if we were watching a Boston Dynamics display and the audience divided itself over whether we could really call that <i>walking</i> when in reality it's just actuating servo motors in such a way as to propel itself forwards with a regular gait.<p>And unrelated: if the author is reading this: I think stripping that extra padding on small screens would make the blog feel less cramped on mobile.
It frustrates me every time when people talk about consciousness. The word has several meanings, but very often people just use it without specifying which consciousness they are talking about, and then proceed to conflate all the types of consciousness.<p>You just end up with everyone being confused, and both the author and commenters talking about completely different things.<p>In this article I feel like we have:
1. Phenomenological consciousness:
Does GPT experience things or is it a P-zombie? Is GPT perceiving the world, or just processing data? Experiencing/perceiving in this context means seeing "red", not just processing a picture and reacting to it. Does it experience the qualia of red, or does the data just go through it and you get an output at the end, regardless of how sophisticated it is?<p>Nobody knows; you can't even reliably prove that you, dear reader, are not the only person in the world who has it.<p>There's a good example of how ridiculously hard it is and how we can't even talk about these things. Try to establish whether another person sees the same color palette, or whether that person's palette is inverted. Is your red the same as the other person's red? Absolutely no way to give a definitive answer.<p>2. Self awareness:
Is GPT capable of behaving as if it sees itself as an entity? Yes. It can treat itself as an entity in conversations. Now, where we draw the line in terms of memory is in my opinion just semantics. It loses all its memory once you open a new chat window, but people with dementia also lose memory. It's all semantics here, and a question of where your gut feeling draws the line.
The article says:<p>"Reasoning means being able to put those concepts together to solve problems."<p>There's more to reasoning than just following rules of logic ("putting concepts together"). It is also detecting where the concepts cause contradictions and do not fit, and the whole mysterious magic of how to modify the concepts to make them fit.<p>In the first meaning of "reasoning", AI (and computers) have been able to reason for a long time. It's the second meaning that evades us.<p>I said before that in the 90s, cutting-edge AIs were based on various theories of how to do reasoning under uncertainty (fuzzy logic, bayesian networks, etc.). Then deep NNs blew these systems out of the water in practice, but at the expense of us not understanding how they reason with uncertainty, and whether there is any consistency to it. So we progressed, but never resolved the underlying problem of what the right way to reason with uncertainty is, and it might just be very very hard. (That's why I am interested in P vs NP, as I believe there is an answer there.)
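For readers unfamiliar with the 90s-style systems mentioned above: the kind of explicit, inspectable reasoning under uncertainty they did can be sketched with a single Bayes'-rule update. This is a minimal, hypothetical rain/wet-grass example, not code from any actual system:

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Return P(H|E) via Bayes' rule, from P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: P(rain) = 0.2; wet grass is likely if it
# rained (0.9) and less likely otherwise (0.3, e.g. sprinklers).
posterior = bayes_update(prior=0.2, likelihood=0.9, likelihood_given_not=0.3)
print(round(posterior, 3))  # prints 0.429
```

The point of the contrast in the comment is that here every belief is an explicit number with a defined update rule, whereas a deep NN's handling of the same uncertainty is implicit in its weights and hard to audit for consistency.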
Somewhat off-topic, but I can't help wondering: if ChatGPT and Whisper can do all this amazing stuff, why would anyone join the waiting list to use his app ChartMyLife rather than use ChatGPT directly?<p>I don't want to sound pessimistic, and I may be missing something, but this seems a bit like trying to fork the Edge source code to build a better browser, or coming up with your own version of MS Office for Windows.<p>Edit: This popped into my mind after recently seeing some people who had made their own mp3 players and photo browsers with AI code generators, and then went on to say how awesome it is to have those programs made just for them and their unique preferences.
I don’t agree with his argument against consciousness.<p>You could imagine a thought experiment where a human is woken up in a sensory deprivation tank, asked a single question, and then has their short-term memory wiped.
It would still be a conscious experience.<p>I’m not sure whether LLMs are conscious or not, but it just doesn’t seem like a compelling argument.
Man, this essay starts and ends with me doubting the author, because of the typo in the first heading ("justs") and the confusion of "conscience" with "consciousness" (also typoed as "consciounsess"). However good the middle bit was...
> To be able to predict accurately sentences that make sense, GPT-4 must have a internal way of representing concepts, such as "objects', "time", "family" and everything else under the sun.<p>[citation needed]