
The Impact of Generative AI on Critical Thinking [pdf]

203 points | by greybox | about 2 months ago

41 comments

Maro, about 2 months ago

A good model for understanding what happens to people as they delegate tasks to AI is to think about what happens to managers who delegate tasks to their subordinates. Sure, there are some managers who can remain sharp, hands-on and relevant, but many gradually lose their connection to the area they're managing and become pure process/project/people managers and politicians.

I.e., most managers can't help their team find a hard bug that is causing a massive outage.

Note: I'm a manager, and I spend a lot of time pondering how to spend my time, how to be useful, and how to remain relevant, especially in this age of AI.
vunderba, about 2 months ago

I've been calling this out since ChatGPT went mainstream.

The seductive promise of solving all your problems is the issue. By reaching for it to solve any problem at an almost instinctual level, you are completely failing to cultivate an intrinsically valuable skill: that of critical reasoning.

That act of manipulating the problem in your head, critical thinking, is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.

This is why it's pretty baffling to me when I see attempts at comparing LLMs to the invention of the calculator. A calculator is still used *in service* of a larger problem you are trying to solve.
jonahx, about 2 months ago

For those who read only the headline or article:

> In this paper, we aim to address this gap by conducting a survey of a professionally diverse set of knowledge workers (n = 319), eliciting detailed real-world examples of tasks (936) for which they use GenAI, and directly measuring their perceptions of critical thinking during these tasks

So, they asked people to remember times they used AI, and then asked them about their own perceptions of their critical thinking when they did.

How are we even pretending there is serious scientific discussion to be had about these "results"?
oneofyourtoys, about 2 months ago

The year is 2035, the age of mental labor automation. People subscribe to memberships at "brain gyms", places that offer various means of mental stimulation to train cognitive skills like critical thinking and memory retention.

Common activities provided by these gyms include fixing misconfigured printers, telling a virtual support customer to turn their PC off and back on again, and troubleshooting mysterious NVIDIA driver issues (the company went bankrupt 5 years ago, but their hardware is still in great demand for frustration tolerance training).
sitkack, about 2 months ago

Thanks, the paper is very readable.

> Abstract: The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user's task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

It will be presented at the CHI Conference: https://chi2025.acm.org/

https://en.wikipedia.org/wiki/Conference_on_Human_Factors_in_Computing_Systems
lenerdenator, about 2 months ago

So it does what Google searching did: it made retaining information an optional cognitive burden, and optional cognitive burdens are usually jettisoned.

Fortunately, my ADHD-addled brain doesn't need some fancy AI to make its cognition "Atrophied and Unprepared"; I can do that all on my own, thank you very much.
greybox, about 2 months ago

Microsoft Study Finds AI Makes Human Cognition "Atrophied and Unprepared"

"[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," the researchers wrote.
pseudocomposer, about 2 months ago

It seems like something like medical/legal professionals' annual or otherwise periodic credential exams might make sense in fields where AI is very usable.

Basically, we might need to standardize 10-20% of work time being used to "keep up" automatable skills that once took up 80+% of work time in fields where AI-based automation is making things more efficient.

This could even be done within automation platforms themselves, and sold to their customers as an additional feature. I suspect/hope that most employers do not want to see these automatable skills atrophy in their employees, for the sake of long-term efficiency, even if that means a small reduction in short-term efficiency gains from automation.
nopelynopington, about 2 months ago

I feel like my critical thinking has taken a nosedive recently. I changed jobs, and the work in the new job is monotonous and relies on automation like Copilot. Most of my day is figuring out why the AI code didn't work this time rather than solving actual problems. It feels like we're a year away from the "me" part being obsolete.

I've also turned to AI in side projects, and it's allowed me to create some very fast MVPs, but the code is worse than spaghetti: it's spaghetti mixed with the hair from the shower drain.

None of the things I've built are beyond my understanding, but I'm lazy and it doesn't seem worth the effort to use my brain to code.

Probably the most use my brain gets every day is Wordle.
bentt, about 2 months ago

Is this any different from saying that nowadays most people in the USA are physically weaker and less able to work on a farm than their predecessors? Sure, it's not optimal through certain lenses, but through other lenses it is an improvement. We are by any rights dependent on new systems to procure food, which is even more fundamental than other types of human cognition being preserved.
sollewitt, about 2 months ago

One thing I've tried using Gemini for, and been really impressed with, is practicing languages. I find Duolingo doesn't really translate to fluency, because it doesn't really get you to struggle to express yourself; the topics are constrained.

Whereas you can ask an LLM to speak to you in e.g. Spanish about whatever topic you're interested in, and be able to stop and ask it to explain any idioms or vocabulary or grammar in English at any time.

I found this to be more like a "cognitive gym". Maybe we're just not using the tools beneficially.
divtiwari, about 2 months ago

As part of Gen Z, I feel that with regard to critical thinking skills, our generation got obliterated twice: first by social media (made worse by affordable data plans), then by GenAI tools. You truly need monk-level mind control to come out unscathed from their impact.
masfuerte, about 2 months ago

This isn't a new thing. I noticed it in the 1990s in bank employees as their work became increasingly automated. As the software became better at handling exceptions, their skills atrophied further and they became even worse at handling the harder exceptions that remained.
rraghur, about 2 months ago

Sort of like once you get used to GPS to get anywhere: you stop developing any further directional sense, and even existing capabilities start withering away.
tunesmith, about 2 months ago

I like to think of problems as having two components: specification and implementation.

With using GenAI (and/or "being a manager"), aren't they somewhat inversely related?

I find implementation-level programmers to generally be poor at stating specifications. They often phrase problems in terms of lacking their desired solutions. They jump straight to implementation.

But a manager has to get skilled at giving specifications: being clear about what they expect, without stating how to do it. And that's a skill that needs to be quickly developed to use GenAI well, too. I think getting good at specifying is definitely worthwhile, and I think GenAI is helping a lot of people get better at that quickly.

Overall, it seems that should very much be considered part of "critical thinking".
piltdownman, about 2 months ago

>> Moreover, participants perceived it to be more effort to constantly steer AI responses (48/319), which incurs additional Synthetic thinking effort due to the cost of developing explicit steering prompts. For example, P110 tried to use Copilot to learn a subject more deeply, but realised: "its answers are prone to several [diversions] along the way. I need to constantly make sure the AI is following along the correct 'thought process', as inconsistencies evolve and amplify as I keep interacting with the AI."

While much is made of the 'diminished skill for independent problem-solving' caused by over-reliance, is there a more salient KPI than some iteration of this 'Synthetic Thinking Effort' by which to baseline and optimise the cost/benefit of AI usage versus traditional cognition?
labrador, about 2 months ago

I'm a retired computer programmer. All my time is free time. I'm using AI as a cognitive amplifier. I'm learning at a much faster rate than I would without AI. I don't have to waste time doing Google searches and reading through irrelevant material to find something germane to my research.

I don't depend on AI for anything. I am not doing corporate work. Could it be that what people are experiencing is that they are becoming less suitable for corporate work as AI and robots replace them? Isn't this a good thing? Shouldn't the focus be on using AI to bring out the innate talents of humans that aren't profit-driven?
tsumnia, about 2 months ago

I won't disagree with their findings; however, I do think there is some need to counter the narrative that "LLM AI is worse for humans". Specifically, I think back to an example I use when describing why I was so motivated to have students complete typing practice while learning CS. In short, I use the analogy that when I am browsing the web for code snippets (like extracting files from a tar file), I will explicitly retype the command rather than rely on copy+paste. My logic is that typing out the command helps build the muscle memory so that someday I'll just REMEMBER the command.

That said, the counter to my own counter is "do I really need to memorize that?" Yes, with no internet I'm screwed... but that's such a rare edge case. I am able to quickly find the command, and knowing that it is stored somewhere else may be enough knowledge for me rather than memorization. I can see GenAI falling into a similar design: I don't need to know explicitly how to do something, just that the task can be resolved through an LLM prompt.

Granted, we're still trying to figure out how to communicate with LLMs, and we only really have 3 years of experience. Most of our insights have come from blog posts and a handful of research articles. I agree that GenAI laziness is a growing issue, but I don't think it needs to go full-Idiocracy sensationalist headline.
Qem, about 2 months ago

Paywalled, but the full study is available here: https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
1vuio0pswjnm7, about 2 months ago

Original HN titles:

"Impact of Gen AI on Critical Thinking: Reduction in Cognitive Effort, Confidence"

"Impact of AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort"

"The Impact of Generative AI on Critical Thinking: Reductions in Cognitive Effort"

Actual title of the paper:

"The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers"

Previous discussion:

10 Feb 2025 17:01:08 UTC https://news.ycombinator.com/item?id=43002458 (1 comment)
10 Feb 2025 22:31:05 UTC https://news.ycombinator.com/item?id=43006140 (0 comments)
11 Feb 2025 11:14:06 UTC https://news.ycombinator.com/item?id=43011483 (0 comments) [dead]
11 Feb 2025 14:13:36 UTC https://news.ycombinator.com/item?id=43012911 (1 comment)
12 Feb 2025 01:47:16 UTC https://news.ycombinator.com/item?id=43020846 (0 comments) [flagged] [dead]
14 Feb 2025 15:54:57 UTC https://news.ycombinator.com/item?id=43049676 (1 comment)
15 Feb 2025 12:06:01 UTC https://news.ycombinator.com/item?id=43057907 (101 comments)
allenrb, about 2 months ago

I've told people at work, including my boss and his boss, that it will be time for me to go if and when my job ever becomes "translating business problems into something AI can work with."

Right now I'm curious to see how long I can keep up with those using AI for more mundane assistance. So far, so good.
gatinsama, about 2 months ago

You can't delegate understanding. I don't mean you shouldn't; you can't.

If you don't understand what's happening, you have no way to know if the system is working as intended. And understanding (and deciding) exactly how the system works is the really hard part of any sufficiently complex project.
deepfriedchokes, about 2 months ago

I seem to recall Socrates arguing that writing weakened the memory and hindered genuine learning. He probably wasn't wrong, but the upside of writing was greater than the downside.
0x20cowboy, about 2 months ago

I love doing all aspects of building software. However, I've noticed that when I am feeling lazy I'll just copy-pasta a stack trace into an LLM and just trust what it says is wrong. I won't even read the stack trace.

I only tend to do that when I am tired or annoyed, but when I do it I can feel myself getting dumber. And it's a weirdly satisfying feeling.

I just need a chair that doubles as a toilet and I'll be all set.
_heimdall, about 2 months ago

I'm often surprised that a study like this is even needed; the result seems obvious.

Critical thinking is a skill that requires practice to improve and maintain. Using LLMs pushes the task that would require critical thinking off to something/someone else. Of course the user will get worse at critical thinking when they try to do it less often.
jhallenworld, about 2 months ago

Obvious advice for students: human brains are neural networks; they have to be trained. If you have the already-trained artificial neural network do all the work, your own neural network remains untrained.

You are tremendously better off getting a bad grade doing your own work than getting a good one using ChatGPT.
moralestapia, about 2 months ago

I believe this to be true, and it came to happen at the worst possible time: post-COVID, with education levels through the floor.

I also believe, however, that humans who are able to reason properly will become much more valuable because of this same thing.
DrNosferatu, about 2 months ago

Then prompt the AI to provide its outputs in a way that keeps the human user engaged and aware of where they are in the thought process: maps, diagrams, repetition, summaries.

We have the cognitive science to make it happen, or at least to learn how to structure it.
riffic, about 2 months ago

Can't it go the other way? Can't AI be developed to improve and strengthen human cognition? I'm incredibly naive and ill-informed, but I feel that it can go both ways (growth vs. fixed mindsets?).
ChrisArchitect, about 2 months ago

Article from February.

Some discussion on the study: https://news.ycombinator.com/item?id=43057907
AISnakeOil, about 2 months ago

Just as any muscle gets weaker the less you use it, the same goes for intelligence.
_aavaa_, about 2 months ago

It's why I write all of my code directly in binary. Depending on a compiler, or god forbid Python, is really detrimental to me accomplishing my goal as a data scientist: allocating registers.
derefr, about 2 months ago

How odd. I don't think I'm thinking any less hard when making use of LLM-based tools. But then, maybe I'm using LLMs differently?

I don't build or rely on pre-prompted agents to automate specific problems or workflows. Rather, I only rely on services like ChatGPT or Claude for their generic reasoning, chat, and "has read the entire web at some point" capabilities.

My use-cases break down into roughly equal thirds:

---

1. As natural-language, iteratively-winnowing-the-search-space versions of search engines.

Often, I want to know something — some information that's definitely *somewhere* out there on the web. But, from 30+ years of interacting with fulltext search systems, I know that traditional search engines have limitations in the sorts of queries that'll actually do anything. There are a lot of "objective, verifiable, and well-cited knowledge" questions that are just outside the domain of Google search.

One common example of fulltext-search limitations is when you know how to describe a thing you're imagining, a thing that may or may not exist — but you don't know the jargon term for it (if there even is one). No matter how many words you throw at a regular search engine, it won't dredge up discussions about the thing, because discussions about the thing just *use* the jargon term — they don't usually bother to *define* it.

To find answers to these sorts of questions, I would previously have asked a human expert — either directly, or through a forum/chatroom/subreddit/Q&A site/etc.

But now I've got a new and different kind of search engine — a set of pre-trained base models that, all by themselves, perform vaguely as RAGs over all of the world's public-web-accessible information.

Of course, an LLM won't have crystal clarity in its memory — it'll forget exact figures, forget the exact phrasing of quotations, etc. And if there's any way that it can be fooled or misled by some random thing someone made up somewhere on the web once, it will be.

But ChatGPT et al. *can* sure tell me the right jargon term (or entire search query) to turn what was previously, to me, almost deep-web information into public-web information.

---

2. As a (fuzzy-logic) expert system in many domains, one that learned all its implications *from* the public information available on the web.

One fascinating thing about high-parameter-count pre-trained base models is that you don't really need to do *any* prompting, or supply *any* additional information, to get them to do a vaguely acceptable job of *diagnosis* — whether that be diagnosing your early-stage diabetic neuropathy, or that mysterious rattle in your car.

Sure, the LLM will be wrong sometimes. It's just a distillation of what a bunch of conversations and articles spread across the public web have to say about what are or aren't the signs and symptoms of X.

But those are the same articles *you'd* read. The LLM will almost always outperform *you* at "doing your own research" (unless you go as far as to read journal papers — I don't know of any LLM base model that's been trained on arXiv yet...). It won't be as good at medicine as a doctor, or as good at automotive repair as an automotive technician, etc. — but it *will* be better (i.e. more accurate) at those things than an interested amateur who's watched some YouTube videos and read some pop-science articles.

Which means you can just tell LLMs the "weird things you've noticed lately" and get them to hypothesize for you — and, as long as you're good at being *observant*, the LLM's hypotheses will serve as great *lines of investigation*. It'll suggest *which experts or specialists* you should contact, *what tests* you can perform yourself to do objective differential diagnostics, etc.

(I don't want to under-emphasize the usefulness of this. ChatGPT figured out my house had hidden toxic mold! My allergies are gone now!)

---

3. As a translator.

Large-parameter-count LLM base models are actually *really, really good* at translation. To the point that I'm not sure why Google Translate et al. haven't been updated to be powered by them. (Google Translate was the origin of the Transformer architecture, yet it seems to have been left in the dust since then by the translation performance of generic LLMs.)

And by "translation", I do literally mean "translating entire documents from one spoken/written human language to another." (My partner, who is a fluently bilingual writer of both English and [Traditional] Chinese, has been using Claude to translate English instructions/documents into Chinese for her [mostly monolingual Chinese] mother to better understand them, and to translate any free-form responses her mother is required to give back into English. She used to do these tasks herself "by hand" — systems like Google Translate would provide results that were worse than useless. But my partner can verify that, at least for this language pair, modern LLMs are *excellent* translators, writing basically what she would write herself.)

But I *also* mean:

• The thing Apple markets as part of Apple Intelligence — translation between writing styles (a.k.a. "stylistic editing"). You don't actually need a LoRA/fine-tune to do this; large-parameter-count models already inherently know how to do it.

• Translating between programming languages. "Rewrite-it-in-Rust" is trivial now. (That's what https://www.darpa.mil/research/programs/translating-all-c-to-rust is about — trying to build up an agentive framework that relies on both the LLM's translation capabilities and the Rust compiler's typing errors on declaration change to brute-force iterate across entire codebases, RiiRing one module at a time, and then recursing to its dependents to rewrite them too.)

• Translating between pseudocode, and/or a *rigorous* description of code, and actual code. I run a data analytics company; I know far more about the intricacies of ANSI SQL than any man ought to. But even I never manage to remember the pile of syntax features that glom together to form a "loose index scan" query. (WITH RECURSIVE, UNION ALL, separate aliases for the tables used in the base vs. inductive cases, and one of those aliases referenced in a dependent subquery... but heck if I recall which one.) I have a crystal-clear picture of what I want to do — but I no longer need to look up the exact grammar the SQL standard decided to use yet again, because now I can dump out, in plain language, *my* (well-formed) mental model of the query — and rely on the LLM to *translate* that model into ANSI SQL grammar.
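[Editor's note: for readers unfamiliar with the "loose index scan" pattern described above, here is a minimal sketch using SQLite through Python's sqlite3 module. The table and CTE names are illustrative, not from the comment; the shape of the query — a base case that grabs the smallest key, and an inductive case whose dependent subquery jumps to the next distinct key via the index — is the part being demonstrated.]

```python
import sqlite3

# Scratch table with many duplicate values; the index lets each MIN()
# below resolve as a cheap index probe rather than a full scan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT)")
conn.execute("CREATE INDEX idx_sensor ON readings (sensor)")
conn.executemany(
    "INSERT INTO readings VALUES (?)",
    [(s,) for s in ["b", "a", "c", "a", "b", "a"] * 1000],
)

# The loose index scan: base case = smallest key overall; inductive
# case = smallest key strictly greater than the previous one. The
# recursion terminates when the dependent subquery returns NULL.
rows = conn.execute("""
    WITH RECURSIVE distinct_sensors(sensor) AS (
        SELECT MIN(sensor) FROM readings
        UNION ALL
        SELECT (SELECT MIN(sensor) FROM readings WHERE sensor > d.sensor)
        FROM distinct_sensors AS d
        WHERE d.sensor IS NOT NULL
    )
    SELECT sensor FROM distinct_sensors WHERE sensor IS NOT NULL
""").fetchall()

print([r[0] for r in rows])  # → ['a', 'b', 'c']
```

This visits one row per distinct value instead of scanning all 6000 rows, which is the entire point of the pattern.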
thombles, about 2 months ago

I'm, uh, not a fan of AI; however, in this case I would strongly recommend everybody Ctrl-F the juicy quotes in the 404 Media article and see where they came from in the full text of Microsoft's study. Both of the leading quotes come from the _introduction_, where they're talking at a high level about a paper from 1983. It's enormous clickbait.
RecycledEle, about 2 months ago

You decide to rot as AI does the work, or you decide to learn from the AI.

The same is true of managers. I have had managers who yelled at me to do things they did not understand. They rotted on the inside. Other managers learned every trick I brought to the company. They grew.
sunjester, about 2 months ago

Sounds like Microsoft is back at it again.
rraghur, about 2 months ago

Just today I had Gemini write a shell script for me that had to generate a relative symlink. Getting it to work cross-platform on Linux and Mac took more than ten tries, and I stopped reading after the second.

In the end, I spent probably more time and learnt nothing. My initial take was that this is the kind of thing I don't care much for, so giving it to an LLM is OK... However, by the end of it I ended up more frustrated, and lost the stimulation of working things out as well.
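[Editor's note: for what it's worth, the task described above can be done portably with the standard library alone. A sketch — not the commenter's actual script — that avoids the GNU-only `ln --relative` flag, which is one common source of Linux/macOS breakage:]

```python
import os
import tempfile

def make_relative_symlink(target: str, link_path: str) -> str:
    """Create link_path as a symlink to target, storing a relative path.

    Computes the path of `target` relative to the directory containing
    the link, so the link survives moving the whole tree. Behaves the
    same on Linux and macOS.
    """
    link_dir = os.path.dirname(os.path.abspath(link_path)) or "."
    rel = os.path.relpath(os.path.abspath(target), start=link_dir)
    os.symlink(rel, link_path)
    return rel

# Demo: base/deep/link.txt -> ../file.txt
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "deep"))
with open(os.path.join(base, "file.txt"), "w") as f:
    f.write("hello")
stored = make_relative_symlink(
    os.path.join(base, "file.txt"),
    os.path.join(base, "deep", "link.txt"),
)
print(stored)  # a relative path such as ../file.txt
```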
MrMcCall, about 2 months ago

Usain Bolt didn't walk around on crutches all day.

Comedians' ability diminishes as they take time off.

Ahnold wasn't lounging around all day.

We should understand that fixing crap, unsensible code is not a productive skillset. As Leslie Lamport said the other day, logically developing and coding out proper abstractions is *the* core skillset, and not one to be delegated to just anything or anyone.

It's OK; the bright side for folks like me is that you're just happily hamstringing yourselves. I've been trying to tell y'all, but I can only show y'all the water, not make you drink.
AtomBalm, about 2 months ago

Literacy makes human memory atrophied and unprepared. Kids these days can't even recite the Iliad from memory!
Deprogrammer9, about 2 months ago

Don't listen to Microscam, yo!
fbn79, about 2 months ago

People growing up using AI risk becoming like the French nobility during Louis XIV's reign.