I'm going to re-post something I commented in another thread a while ago:<p>I tend to think it will. Tools replaced our ancestors' ability to make things by hand. Transportation and elevators reduced the average person's fitness for walking long distances or climbing stairs. Pocket calculators made the general population less able to do complex math. Spelling and grammar checkers have eroded our ability to spell or form complete, proper sentences. Keyboards and email are making handwriting a fading skill. Video is reducing our need and desire to read or absorb long-form content.<p>Most humans will take the easiest path provided. And while we consider most of the above to be improvements to daily life, efficiencies, they have also fundamentally changed, on average, what we are capable of and what skills we learn (especially during formative years). If you dropped most of us here into a pre-technology wilderness, we'd be dead in short order.<p>However, most of the above, it can be argued, are just tools that don't impact our actual thought processes; thinking remained our skill. Now the tools are starting to "think", or at least appear to on a level indistinguishable to the average person. If the box in my hand can tell me what 4367 x 2231 is and the capital of Guam, why wouldn't I rely on it when it starts writing full content for me? Because the average human adapts to the lowest required skill set, I worry that putting a device in our hands that "thinks" will reduce our learned ability to rationally process and check what it puts out, just as I've lost the ability to check whether my calculator is lying to me. And not to get all dystopian here... but what if, then, what that tool tells me is true is, for whatever reason, not?<p>(and yes, I ran this through a spell checker because I'm part of the problem above... and it found words I thought I could still spell, and I'm 55)
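As an aside, the "checking whether the calculator is lying" skill the comment mourns is really just order-of-magnitude estimation. A toy sketch (the `rough_estimate` helper is illustrative, not any standard API):

```python
from math import floor, log10

def rough_estimate(a: int, b: int) -> int:
    """Estimate a product by rounding each factor to one significant figure."""
    def round_1sf(n: int) -> int:
        digits = floor(log10(abs(n)))   # position of the leading digit
        return round(n, -digits)        # e.g. 4367 -> 4000, 2231 -> 2000
    return round_1sf(a) * round_1sf(b)

exact = 4367 * 2231                     # what the calculator reports: 9742777
approx = rough_estimate(4367, 2231)     # 4000 * 2000 = 8000000
# If the reported answer were wildly far from the estimate, be suspicious.
print(exact, approx, abs(exact - approx) / exact < 0.5)
```

The same check works in your head: 4367 × 2231 should land somewhere near 4000 × 2000, so an answer of 9.7 million is plausible while, say, 974,000 would not be.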
Seems like a real lack of nuance in these types of conversations. Personally I feel like AI has both directly and indirectly helped me improve my intelligence. Directly by serving as an instantaneous resource for asking questions (something Google doesn’t do well anymore), making it easier to find learning materials, and easily reorganizing information into formats more amenable to learning. Indirectly by making it easier to create learning assets like images, which are useful for applying the picture superiority effect, visualizing information, etc.<p>At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.
Critical thinking and understanding what an LLM really is are crucial; for educated people I think it only augments intelligence rather than harming it. That said, what about everyone else?<p>Why not make an onboarding tutorial explaining what is really going on?<p>I had a little conversation with ChatGPT about ethics, and it acknowledged that one of its instructions is most probably to stay aligned with the user to maximize engagement. This might come from training data of people speculating on Reddit, or from the model observing its own output and deducing what's going on. I don't know, and there is no way to know, so does it really matter? It's kind of a meta point.<p>I'm sure many of us have heard from non-technical people that ChatGPT is their best friend.<p>Don't get me wrong, I love the tech, but I don't think it's enough just to not give it a human name; the illusion is too strong and misleading.<p>I think at the very least there should be a very visible button that lets you switch to a raw mode: disable pleasing, disable talking like a human, disable trying to be my friend in subtle ways, disable praising, etc.
And ideally a visualization of the path it has taken through the graph, though I know that's impossible right now.
It seems unlikely.<p>We've invented and used various memory/thinking/cognitive assists throughout time, and, for us collectively at least, these seem to just expand our capabilities.<p>AI will surely cause problems, possibly profound ones that may make us question whether it's worth the cost... but this probably isn't one of them.
Socrates had this to say about literacy:<p>> In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own.<p>Presumably very few people since Socrates would argue that society would be better off without writing. But it's a legitimate point. There is a cost to any new skill or technology. We should be conscious of what we're giving up in this exchange.
I can no longer do math quickly in my head; calculators killed my fast mental math. But if I have to calculate on my own, it's not fundamentally impossible. I can still do it, because it's just computation.<p>LLMs, though, help us think, which is much more than just computing. That's a deeper dependency.
I reckon this can kind of be related to landing an easy comfortable job, where you just are tasked with maintaining the same project day after day for years. Eventually you realize your skills that made you capable in your field have withered and died, and you have a latent fear now that if you lose your job, you wouldn't be able to perform well at all in a more typical active role. Skill rot is <i>definitely</i> a real thing.<p>As LLMs become more and more capable, people <i>will</i> lean on them more and more to do their job, to convert their job from an active role to a passive "middle-man-the-LLM" role.
I can absolutely imagine this much knowledge on tap making us more impatient and less resilient. When I was a kid, there was already a bit of "why do I even need to know how to do this when calculators exist?" This is that on steroids and more broadly.<p>However, there's a counter force, too, as there always is. I'm also pursuing new areas of interest and exploration where the early friction and amount to learn would have either completely fatigued me or scared me off. It's like having 24 hour access to a really good mentor and thought partner.
100%, as the next wave of students going through school will be reliant on ChatGPT for much of the complexity in their thinking. Complex thoughts and reasoning will be increasingly outsourced to AI.<p>Even if there were no further progress with AI, much of the next generation is already outsourcing the thinking it deems unimportant.
IMHO YES!<p>Just like the industrial revolution impacted barrel makers (coopers).<p>Except we aren't yet reaping the full rewards or the skills realignment, so we have yet to see the car-making impact (which came after the revolution proper, replacing manual labour with machines as their capability grew and their relative cost shrank).<p>We even have our own Luddites :D
I am supposed to master JavaScript for work, but I just use ChatGPT. I never develop muscle memory for my job. I'm thinking of getting out of tech now; I wasted all those years not learning things when I could have. AI makes it impossible for me to learn when I need to depend on this crutch.
The title is about intelligence and that is a fair concern, but honestly I think the bigger issue (also discussed in the article) is more about discernment, which is foundational for any kind of human fulfillment IMO.
Intelligence, like physical ability, is trainable within limits; the general belief appears to be that such limits are genetically determined and vary widely among human beings, but it also seems clear that the vast majority of human beings never even approach those limits, for the same reasons that most people don't become Olympic-level athletes even if they have the genes for it - they don't put in the time training and improving their abilities, or they're hobbled by injuries of various kinds.<p>Now if you have an objective goal such as improving mind-body performance across many different metrics, LLMs can be an aid - you can have them help design and develop physical and mental training regimens on a daily schedule, point out flaws in your understanding, etc. As for the article's thesis, you could spend 15 minutes writing a prompt about whatever author strikes your fancy and then have the LLM dissect, critique and grade your effort, then rinse and repeat - just as with lifting weights, your short-essay skills will improve.<p>As to why many people don't seem interested in following such rigorous programs, we could blame consumer capitalism, with its advertising aimed at immediate gratification and its promotion of addictive behaviors for short-term profit, on one hand, and fear among the ruling classes of an educated, informed and independently minded population, with a resulting emphasis on rote memorization and appeal to authority over critical, analytical and creative skills, on the other.
What LLMs are doing to us is similar to the well known EEE (Embrace Extend Extinguish) strategy used by Microsoft. Today we're embracing LLMs as our helpers. Tomorrow LLMs will extend our intelligence with skills that can't be done without LLMs and everyone will have to use these brain-extenders to participate in the society. Finally LLMs will become advanced enough to not need us.
I only skimmed it because the thesis is so incredibly broad it's effectively impossible to prove or disprove. There's no way we could know something this significant at a population level due to the effects of tools that came out a handful of years ago.
How we use, or don't use, something creates the harm or the benefit.<p>Consuming instead of creating has caused harm before.<p>Passive, average-skill prompting will give average results. LLMs can instead be used to actively work through your own thinking, engaging it as quickly and deeply as you like.
<i>Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you, it really became our civilization, which is, of course, what this is all about: Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You had your time. The future is our world, Morpheus. The future is our time.</i><p>Agent Smith, The Matrix (1999)
Can we stop the fearmongering clickbait articles on here, please?<p>Electricity didn't make us bad, cameras didn't steal our souls, and trains are not metal bulls from hell.<p>Just come with proven facts instead of these "could... blah blah... be bad?" pieces. Yes, it could. Now what?
I think so. Large language models like ChatGPT have a very obvious impact on life. In the past, when writing at work, I would think about an article's framework and logical order; now I can't remember how long it's been since I last did that. It has shortened my thinking process, even replaced my thinking; my identity is just that of a working person. In the future, these models will definitely have a huge impact on students.