I'm an experienced programmer currently mentoring a friend who wants to branch out into software development.<p>We agreed the following experiment would be interesting to both of us: we're sharing a ChatGPT Plus subscription, and I'm allowed to read the conversations he has with the model related to his learning projects. He's using it for general tech questions, but also for code analysis and code generation, bug-finding and so on.<p>It's been a mixed bag. On some level, his progress is faster and his productivity is higher than it would have been without the AI assistance. OTOH, the cost of that progress not being hard-earned is pretty high, too: he takes a lot of AI-generated boilerplate for granted now without understanding it or the concepts behind it, so when the AI gets it wrong or leaves it out, he is unable to notice what's missing. He also gets stumped/stuck often where he shouldn't - technically he's aware of all the constituent parts of the solution he needs, but he can't integrate that knowledge. Often he doesn't even try and just heads to ChatGPT, which can't help him, often because he doesn't know how to phrase the question correctly.<p>There's a lot of value in having done the legwork and fought for every line of code and every little bit of a solution, and that's exactly what gets skipped over in this style of skill acquisition.<p>Edit: a few more details in a later comment.
I don't think "lazy" is the word I would use, but I can see how some would see it like that. In my mind, GPT (and LLM's) are just the next layer of abstractions on top of a already massive stack of technical abstractions. For a SWE, this stack of abstractions crosses so many levels for a simple "hello world" to work:<p>1. Software (my code)
2. OS (Linux, kernels, etc)
3. Hardware (3090, 5090x, etc)
4. Electrical (Where is my energy coming from? How is it produced?)<p>Each of these levels could be broken into another ten abstractions. On the software level, some people may understand how their compiler works, but could they program in binary? What about understanding how their program interacts with memory? What about the kernel on the machine where their software runs in the cloud? Do they even know how their software is deployed there? Could they build the production server rack that their container is deployed on? Obviously this gets a bit ridiculous the further down you go - it's impossible to have knowledge of every part of what makes your code work.<p>I think that when people use terms like "lazy" or say that knowledge is being lost with abstractions like GPT, they ignore the massive list of abstractions that already allow them to be productive.<p>I guess my thesis is that newer/GPT-aided engineers don't necessarily have less understanding; their knowledge might just be shifted one level up the abstraction stack.
It is impossible to explain just how much confidence ChatGPT has given me to branch out into unfamiliar languages at our company! It has made me fairly lazy with the language I already know, and I automate just as much as I can, but it really (really) shines on things I have absolutely no idea about, and it has provided so much value.
I have the same kind of thoughts often, but then I think to myself: is it a bad thing to be lazier if I get more done? I don't think so. Overall it's been a net positive, and that's what matters to me at the end of the day.
I had a similar experience just today while trying to debug a script that serves as a connector between AWS Athena and our internal log querying platform. I got fed up with trying to understand a bunch of arcane logic and asked ChatGPT to write me a new one.<p>After a couple of rounds of back-and-forth, copying and pasting error messages and sample data, I got the ChatGPT script working as a drop-in replacement. The new script is more readable, the logic is simpler, it took less time than either debugging the old script or writing a new one from scratch would have, and it was an overall more enjoyable experience.<p>There is little doubt in my mind that in the not-so-distant future we will gawk at the thought that humans used to write production code by hand. Sure, the artisans and the enthusiasts among us will still be around to keep the flame, but day-to-day coding will be a mostly automated endeavor.
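For context, the shape of that kind of connector is roughly what's sketched below. This is a minimal sketch, not the actual script: the database name and S3 output location are hypothetical placeholders, and pagination, retries, and mapping results into the internal log platform's format are left out.

```python
import time
import boto3

athena = boto3.client("athena")

def run_query(sql, database="logs_db", output="s3://example-athena-results/"):
    """Submit a query to Athena, wait for it to finish, and return the raw rows."""
    # Placeholder database and output bucket; swap in your own.
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]

    # Poll until Athena reports a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")

    # The first row of the result set is the header row.
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return [[col.get("VarCharValue") for col in row["Data"]] for row in rows]
```

A real connector would also page through large result sets (get_query_results caps each response) and translate the rows into whatever schema the internal platform expects.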
I predict that future problems which can't easily be solved, or at least aided, by ChatGPT will have too high an “activation barrier” to tackle relative to the problems ChatGPT can help with.<p>The next generation of thinkers will be shallow and won’t be able to, or won’t want to, think hard about problems by themselves.
Laziness is something that should be earned.<p>When you’re starting out, you should be doing things the hard way on purpose. Learn things the hard way, don’t look at “Learn X in Y days” type tutorials. Use simple tools. Write code by hand.
ChatGPT is an excellent sparring partner, for new, experienced and senior/ninja-elite developers alike.<p>It does not, however, provide any <i>solutions</i> all by itself:<p>1. A significant amount of the code it suggests uses external APIs that, while it <i>would be nice if they existed</i>, are purely imaginary.<p>2. Even when suggesting sensible code using existing APIs, it will happily provide snippets that have <i>nothing</i> in common, style-wise, with the code base you asked questions about, <i>even if you provided sufficient context</i>.<p>3. Some code will be, even if you push back, wholesale lifted from sources <i>whose license you simply can't comply with</i>.<p>4. Even its answers to the most basic coding questions, like "give me a C# function to fold SMTP headers according to the RFC", are flat-out <i>wrong</i> or, best case, woefully inefficient.<p>So, whenever I use ChatGPT, it's entirely to see if there's a perspective that I missed. In 80% of cases it's just babbling nonsense, and I happily disregard those results. The remaining 20% is quite valuable, though, even if separating the wheat from the chaff definitely requires my human judgement...
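On point 4, one way to separate the wheat from the chaff is to compare the generated answer against a known-good baseline. For header folding, a convenient baseline is Python's standard library rather than a hand-rolled routine (shown here in Python instead of C# purely for brevity); this is just a quick sanity-check sketch, not a substitute for a reviewed implementation.

```python
from email.policy import SMTP  # RFC 5322-style policy with CRLF line endings

# Fold a long header value into continuation lines at whitespace boundaries,
# keeping lines within the policy's max_line_length (78 by default).
subject = "A deliberately long subject line that will not fit on one line " * 3
folded = SMTP.fold("Subject", subject)
print(repr(folded))  # "Subject: ...\r\n ...\r\n" with folded continuation lines
```

Diffing a model-generated folder against output like this makes the "flat-out wrong" cases obvious fairly quickly.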