I think we’re well within an era in which AI is only truly useful to people who know what they need the AI to do, and that is still an incredibly limited subset of the population. For that reason alone, learning to code isn’t a waste of time; you need it so you can tell an AI what to do, and catch it when it does something wrong. You won’t get far without that ability. You should even go deep into the debugging and testing trenches, because we’ll still need an excellent grasp of how to do that properly for as long as I can imagine. AIs will make mistakes, and so will we.<p>I made ChatGPT generate some genuinely useful boilerplate for the Connect library by Buf, and that was totally neat, but I had to know which part of the documentation to prompt GPT with, which language to ask for, how the existing server and routing worked, the shape of the data I was working with, which calls would be streaming and which wouldn’t, etc. I had to coerce it into several corrections along the way, and actually hooking it all up and running it took a lot of knowledge and some mental/keyboard labour.<p>It worked and I’m stoked that I managed to make it useful, but that’s just it: I had to prime the system and adjust it along the way <i>just so</i>, or it wouldn’t have been useful.<p>As Carmack suggests, this could be a perfectly useful tool, but what matters in the end is 1. did it save time, and 2. did it deliver something better than or equivalent to what I could have done alone?<p>If it doesn’t satisfy both of those, it’s not really relevant yet. And we’re very far from AI accomplishing that without significant assistance.<p>My takeaway is that as software devs we should learn to use these systems and try to leverage them to save time and improve quality, but I agree completely that in the end all that matters is how much it improves the end result and how long it takes to deliver it.
For that reason we still need to code well, and we still need to understand our systems and tools well; that won’t change much. In fact, understanding how your AI works is an important aspect of understanding your tooling, and knowing what you’re teaching it requires a solid understanding of both the tool and the subject matter.<p>I do think a certain class of development work could be mostly eliminated by AI-based tooling. Not the entire industry, though, and not in 10-15 years. Even so, I worry about the people essentially regurgitating code, which text-based AIs will rapidly become capable of reproducing at massive scale. They will need to skill up.