I believe there are two kinds of skill: standalone and foundational.<p>Over the centuries we’ve lost and gained a lot of standalone skills. Most people throughout history would scoff at my poor horse-riding and sword-fighting, or my inability to navigate by the stars.<p>My logic, reasoning and oratory abilities, on the other hand, as well as my understanding of fundamental mechanics and engineering principles, would probably hold up quite well (language barrier notwithstanding) back in ancient Greece or in 18th-century France.<p>I believe AI is fine to use for standalone skills in programming. Writing isolated bits of logic, e.g. a getRandomHexColor() function in JavaScript or a query in an SQL dialect you’re not deeply familiar with, is a great help and timesaver.<p>On the other hand, handing over the fundamental architecture of your project to an AI will erode your foundational problem-solving and software-design abilities.<p>Fortunately, AI is quite good at the former, but still far from being able to do the latter. So, to me at least, AI-based code editors are helpful without the risk of long-term skill degradation.
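To make the distinction concrete, here is a minimal sketch of the kind of isolated, standalone logic meant above — one possible getRandomHexColor(), written as TypeScript; the exact implementation is my assumption, not anything from the original comment:

```typescript
// One possible getRandomHexColor(): draw a random 24-bit integer and
// format it as a CSS hex color string such as "#3fa2c8".
function getRandomHexColor(): string {
  const n = Math.floor(Math.random() * 0x1000000); // 0 .. 0xFFFFFF
  return "#" + n.toString(16).padStart(6, "0");
}

console.log(getRandomHexColor()); // e.g. "#07b9e4"
```

Nothing here touches the rest of a codebase, which is exactly what makes it a "standalone" task an AI can safely handle.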
I do the same now. I don't use Cursor or similar edit-level AI tools anymore; I just use inline text completions and chat to talk through a problem, and then I copy-paste anything needed (or rather type it in manually, just to have more control).<p>I literally felt myself getting AI brain rot, as one Ask HN put it recently: it felt like I was losing brain cells, depending on the AI over my own thinking, and feeling my skills atrophy. At the end of the day, I sense that in the future there will be a much wider gap between those who truly know how to code and those who, well, don't, due to such over-reliance on AI.
I also do most of my coding artisanally, but use LLMs for semantic search, to enrich the research part.<p>Definitely never trust an LLM to write entire files for you, at least not if you want to avoid spending more time on code review than on writing, or if you expect to maintain the code.<p>Also, a good quote regarding the AI tools market:<p>> A lot of companies are creating FOMO as a sales tactic to get more customers, to show traction to their investors, to get another round of funding, to generate the next model that will definitely revolutionize everything.
Nicholas Carr has a nice book on the dynamic the author is describing [0], i.e. that our skills atrophy the more we rely on automation.<p>Like a lot of others in the thread, I've also turned off Copilot and have been using chat a lot less during coding sessions.<p>There are two reasons for this decision, actually. Firstly, as noted above, in the original post and throughout this thread, it's making my already fair-to-middling skills worse.<p>The more important thing is that coding feels less fun. I think there are two reasons for this:<p>- Firstly, I'm not doing so much of the thinking for myself, and you know what? I really like thinking.<p>- Secondly, as a corollary to the skill loss, I really enjoy improving. I got back into coding again later in life, and it's been a really fun journey. It's so satisfying to feel an incremental improvement with each project.<p>Writing code "on my own" again has been a little slower (line by line), but it's been a much more pleasant experience.<p>[0]: <a href="https://www.nicholascarr.com/?page_id=18" rel="nofollow">https://www.nicholascarr.com/?page_id=18</a>
I've recently disabled code completions; reading all those suggestions is too much mental workload for too little quality.<p>I still use the chat whenever I need it.
I only use the line-completion AI that comes with Rider. I think it is a reasonable mix of classic code completion but with a bit more smarts to it, like suggesting a string for a Console.Write.
But it does not write whole new lines of code, as the author describes.
This feels similar to articles with titles such as "Why every developer should learn Assembly" or "Relying on NPM packages considered harmful". I appreciate the core of truth inside the sentiment, and the author isn't _wrong_, but it won't matter over time. AI coding ability will improve, whether it's writing, debugging or planning. It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will. And yes, it won't be optimal or totally secure, and the abstractions might be questionable, but... how is that really different from most real software projects anyway?
Why? Clearly AI tools make life easier.<p>I could drive a manual car, but why? Automatic transmission is so much more convenient. Furthermore, for some use cases FSD is even more convenient.<p>Another example: I don't want to think about the gait movement of my robots; I just want them to move from A to B.<p>With programming, same thing: I don't want to waste time typing `if err != nil {}`; I want to think about the real problem. Ditto for happy-case unit tests. I don't want to waste my carpal-tunnel-prone wrists on those.<p>So on and so forth. Technology exists to make life more convenient. So why reject technology?
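To illustrate the unit-test half of that point: a minimal sketch of the kind of happy-case boilerplate meant here, assuming a hypothetical add() function and Jest-style test/expect globals (TypeScript; both the function and the test names are my invention for illustration):

```typescript
// Hypothetical function under test.
function add(a: number, b: number): number {
  return a + b;
}

// Happy-case tests only: no failures, no edge cases. Repetitive
// scaffolding like this is exactly what AI completion can type for you.
test("add returns the sum of two positive numbers", () => {
  expect(add(2, 3)).toBe(5);
});

test("add treats zero as the identity", () => {
  expect(add(0, 7)).toBe(7);
});
```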
I agree that you shouldn't over-rely on AI, and that you shouldn't rely on AI to write code when you are learning.<p>However, in most codebases I can't see that happening. Once the codebase is complex enough, AI will not work, will take more time to use than writing the code yourself, or will break the existing code.<p>The author himself said that AI isn't usable for anything more complex than a university project, yet he complains he lost his skills because he used AI.<p>As far as I see it, at the current stage AI has limited usage.<p>You can use it to start a greenfield project, or to start a new feature that is independent of the rest of the codebase. It will help with most of the setup and boilerplate.<p>Past that, it will either not be able to do what you need, will take more time, or will break the code. So this pretty much rules out modifying large codebases or adding logic. But you can still use it to generate a function, a method or a service, provided that function, method or service does not require much context and you don't need to modify the rest of the codebase to accommodate it.<p>I see AI as merely an accelerator, not solving problems by itself but helping you solve problems faster, sometimes. I think it is very similar to IntelliCode in Visual Studio and other editor tools, which don't write code for you but provide method autocompletion, small suggestions, syntax highlighting and formatting, and warn you when you make syntax errors.<p>Were I using a text editor instead of a good IDE, my development speed would be slower.
Skill loss works both ways. You might miss out on forming early skills in using LLMs effectively, and end up playing catch-up 3-5 years from now if LLMs render the skills you hold void.<p>It is also likely LLMs will change programming languages: we will probably move to more formal, type-safe languages that LLMs can work with better. You might be good at your language, only to find the world shifts to a new one where everyone has to use LLMs to be effective.
I am, despite my distrust of code-generating LLMs, using one to write a piece of software that I have intended to write for a while but haven't, thanks to laziness and inertia.<p>It's interesting, but I'm finding I'm spending a lot of time just figuring out <i>how</i> to describe something to the LLM. A bit less than writing it myself, really, except for the Qt-related stuff, which I've read up on and practiced a bit, though I wouldn't say I'm competent yet. The generated code is OK and it works the first time, exactly the way I describe it to the LLM. I believe you can easily spot the problem there.<p>It will take a while to learn how to integrate AI into my workflow, and I can't say it's exactly an enjoyable experience, but I feel like it's something I need to do. I do feel it's a crutch, though.<p>I will say, there are two areas where it shines so far: writing tests and making me practice my code reviewing.<p>We'll see how it goes. It's going to take amazing results to ever get me to pay a monthly fee for something running remotely that seems like it should run locally.
It’s replaced Google search for me when trying to look up a specific problem. I’d actually rather use Google, because the results from AI are too long and wordy and give too many answers/options, but something has happened to Google that has made it useless. It started when they began putting Reddit at the top of the results, and it’s just getting worse over time.
I also stopped using AI code editors, but for different reasons. I realized that, with advances like Gemini 2.5 Pro, AI will soon be able to implement whole features given the correct prompt. So the real skill is how to prompt the AI and maintain the overall architecture of the project. I wonder if IDEs like Cursor or Cline will even be needed in the future; as for myself, I stopped investing in learning them. I currently use 2.5 Pro plus the Repo Prompt app, which prepares the prompt and then applies the result to the codebase semi-automatically.
> I chose to make using AI a manual action<p>I also find explicit/conscious LLM interaction a happy medium.<p>Building tooling into my editor to expedite this conscious usage made it much more enjoyable, since I don't have to context-switch into another app (i.e. it takes my current text selection, or the error under the cursor, etc.): <a href="https://github.com/xenodium/chatgpt-shell?tab=readme-ov-file#a-shell-hybrid" rel="nofollow">https://github.com/xenodium/chatgpt-shell?tab=readme-ov-file...</a>
Inventing the automobile has clearly made humanity less fit. Should we stop driving?<p>No. Going back to the stone age is not the solution. For most of us, commuting without a vehicle is impractical. So is coding without AI, especially as AI improves.<p>To retain human competency, we will have to find a novel solution. For walking, we created concentrated practice time: gyms and outdoor runs. Some evolution of leetcode, or even AI-guided training, might be the solution for preserving coding skill.
I completely agree with this assessment. I also keep my editor separate from the AI stuff. The way I talk about it is that using AI completions creates distance from the code. A sports-car driver sits close to the car, as low to the ground as possible; I think of non-AI editing as being seated inside that low-to-the-ground sports car, while AI coding is floating above the car and steering it with wires.
It just never made any sense to me. If we're talking real production codebases, not a one-off hobby script, working with these systems just didn't get me anything.<p>If you're in a codebase of tens or hundreds of thousands of LOC, these systems don't work well, which leaves only two options: you enter some never-ending chat in which you have to have conversations with a system that acts like a dimwitted intern, or you give up on being a serious software engineer and pray and commit things you don't understand.<p>If I have to understand everything anyway, I might as well just write it myself instead of talking to a bot that yaps without end. If I went with option two, I should be fired, because I'd be unqualified for my job. You're just backloading your problems, basically.
My refusal to use AI/LLMs is a little more political. Until these companies start behaving more ethically and stop pushing the costs of their business onto others, i.e. scraping data relentlessly and without respect for copyright and licensing, I don't feel like supporting them.
I think the conclusions and the advice for new programmers are valid. The comparison with FSD is not very relevant in my opinion (though it may be; I just don't see it, as driving is a very different kind of skill). I would separate preference from "objective truth" and from your goals. I think as a technologist you need to keep using AI, if for no other reason than to keep up with the progress of the technology and have first-hand experience. For projects that I mainly do to learn, I disable autocompletion (Cursor) and type every single character by hand. For other projects, where I am more interested in the end result, I allow autocompletion, and most of the time I read and make sure I understand the generated code.
That’s like saying "I don’t use C anymore because that makes me forget how to use machine language." Humans build ever more complex systems by building higher abstractions. I don’t need to know how electricity works to switch the light on and off. Learn how to build differently using AI tools. You cannot stop the trend by sticking to the old ways.
I stopped using any sort of AI once I started feeling my overall problem-solving ability disappear (as in, from memory, intuitively; the one thing that makes me valuable in the job market), and I don't want to end up as some glorified copy-paste vessel. Not to mention, being a copy-paste vessel is only fun for manager-type people, I figure. Reviewing code is the least fun part of my job, and using AI to write code is essentially just spending all your days reviewing code. No thanks. I like to hammer my own nails, not watch some idiot who can't learn and has to be babysat at every turn try to hammer them.
What? Note for any juniors reading this: DO NOT TRY THIS AT HOME.<p>Does the author enjoy writing code primarily because they enjoy typing?<p>Do they not have the mental discipline to think and problem-solve whilst using the living heck out of an AI autocomplete?<p>What's the fun in manually typing out code that has literally been written, copied, copied again, and then re-copied so many times before that the LLM can predict it?<p>Isn't it more dangerous to not learn the new patterns / habits / research loops / safety checks that come with AI coding? Surely their future career success will depend on it, unless they are currently working in a very, very niche area that is guaranteed to last the rest of their career.<p>I'm sorry, but this is a truly unnatural and absurd reaction to a very natural feeling of being out of our comfort zone because technology has advanced, which we are all currently feeling.
tl;dr for those who won't read past the headline: the author still uses AI, but copy-pastes code from/to ChatGPT to have a greater feeling of control.