For the many programmers asking questions like this… if your programming skills can’t compete with a supercharged auto-complete (call it “AI” if you prefer), then you won’t stay relevant or employed for long. Until SkyNet nukes us all I will continue doing what I do. None of my customers imagine replacing me or anyone else developing software with a really good chatbot.
I think this equally applies to all knowledge workers. LLMs are not going to cut the human out of the loop just yet, but building them into the workflow is going to end up being so powerful that anyone not using an LLM will be uncompetitive.

I see this being at least a few years out. Performance is still scaling up and cost is still scaling down. It’s going to require collecting a lot more specialized training data to produce fine-tuned models. Businesses will need to adapt and people will need to learn how to accelerate themselves. But eventually, I see this going the way of the introduction of the computer to the office. No productive person will work without it.

As a thought exercise, I’m going to assume you’ve used some GPT model to produce some code before and been impressed in a narrow kind of way. Then project that forward a few years, through GPT-4 and GPT-5. Add some better training data and a better understanding of how to fit the LLM output into day-to-day work. Maybe add the ability to fine-tune on the existing code base and design documents. The trend-line is at least going to hit junior dev level, so the job will probably turn into a senior dev supervising a “team” of LLMs.
I have been coding for 22 years and never thought of myself as a programmer; programming for me is like literacy. It's like calling yourself a 'reader' because you can read.

Programming is first-order thinking, a way to deconstruct and construct things. Sometimes you have to use it on a computer, sometimes you have to use it while baking bread, or while woodworking.

AI will help me so much, I am super excited, and now I use it every day. I plugged whisper+chatgpt[1] into my emacs and now it is more and more like a partner. It is of course still stupid, but it will get better, and soon (with the recent llama work people are doing) I hope we will be able to finetune it.

We have been doing lists and tables and data transformations and input validation for 60 years now; we have built a gazillion specialised spreadsheet clones. I think now we can start building some new things, some really "soft" software, where the users can program it with language.

This is the first time I feel the computer is helping me do something, and it's not me "fighting" with it.

[1] https://github.com/jackdoe/emacs-chatgpt-jarvis
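Roughly the shape of that voice loop, sketched in Python for illustration rather than lifted from the repo; the package choices, model names, and function here are assumptions, not the actual emacs-chatgpt-jarvis code:

```python
# Sketch: voice -> Whisper transcription -> chat model -> text reply.
# Assumes the openai-whisper and openai (>=1.0) Python packages are installed.
import whisper
from openai import OpenAI

def ask_by_voice(audio_path: str) -> str:
    # Transcribe the spoken question with a local Whisper model.
    stt = whisper.load_model("base")
    question = stt.transcribe(audio_path)["text"]

    # Hand the transcription to a chat model and return its reply.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a coding assistant inside my editor."},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(ask_by_voice("question.wav"))
```

The editor side then only has to record the audio, call something like this, and drop the reply into the buffer.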
The failure of no-code will be the same failure of AI programming...

People misunderstand (and most businesses fail to realize) the potential of IT in general, which is not typing functions in the latest programming language...

It is problem solving. The number of times I have sat down with a business user to go over a problem they are having, only to have to unpack the Rube Goldberg machine of illogical processes, datasets, and dependencies they have created to arrive at the "solution" they are now asking for my assistance with...

I don't look to AI, at least not ChatGPT-style AI, to solve that. Most IT problems are XY problems. ChatGPT can solve for Y, but understanding X is the real trick.
The fear of language-model AI takes me back to my original fear when I joined a data science team at a huge bank with just a bachelor's degree and five years of experience. I was embedded in the mess of databases created by hundreds of large bank mergers.

I was working with really, really smart PhDs and was doubting my place on the team. I felt like a lot of these guys had forgotten more than I'd ever known.

During my time in that job, I realized that the experience of navigating the complex databases while working with the business and operational teams gave me an advantage. I had taken my domain knowledge for granted, and it let me work on the problems at hand much faster than the highly intelligent colleagues who relied on IT/DBAs to write their queries. They often had to go back and forth for a couple of days to clarify their requests, leading to misunderstandings and delays.

In the end, I fit in just fine, held my own, and was aware of my individual talents. I enjoyed learning from the PhDs, and they were happy to teach from their backgrounds.

A lot of the experience from the big bank has zero transferability to any other company. What made me stand out was that I knew which tables of the gigantic data environment were the best to use, I had hundreds of already-built queries for many different problems, and I was dependable. It got me really far before I went off on my own.
I mean, take the next ticket on your board, feed it into an AI and call it a day. You could even take another job (or why not a dozen) and just outsource yourself to an AI. Good luck!

In case it's not obvious, this is sarcasm. It doesn't work like that, nor is it likely to anytime soon. And even if that day comes in our lifetime (which I doubt), your boss or PM still couldn't just delegate tickets to the AI. The AI would need people with advanced knowledge of both the AI and the domain who understand how to supervise it, and that would most likely be you and me.
“Programming” isn’t writing code, it’s understanding computers on a technical level well enough to make them do exactly what you want them to do given a certain set of constraints.

The tools we use *today* are incredibly powerful and time-saving without AI. I can whip up a reasonably interesting mobile app with a data backend I could scale to millions of users in less than a week and basically for free. And yet there are more programmers than ever. Which indicates that maybe these tools empower human engineers instead of cutting them out of the loop.

Although just as software engineering is very different today than it was 20 or 30 years ago, in 20 or 30 years it will be very different…
I deal with a bit of anxiety for a number of reasons, which I am personally working on. I have a background in control theory; we even used early neural nets in my research. I’m somewhat familiar with how LLMs work. With all that said? I get mini panic attacks about what my job will look like in 10 years. To pay the bills I’ve been in mobile native development for a few years now. I genuinely panic when I see people say AGI is coming soon, or Sam Altman say that AI is coming for knowledge-based work first. Is my whole world about to crumble? Will I be ok? What can I do to prepare?

Then other days, I use chatgpt or bing and I see some silly response, or I remember my control theory background, and I feel at ease again.

It’s a really horrible rollercoaster ride mentally.
My wife has an uncle who has been programming for a long time, and he tells of a time when he believed that languages like COBOL and FORTRAN IV would make programmers obsolete, since now anybody could just tell the computer what they wanted it to do and they didn't actually have to know anything about computers or programming to do so.

Turns out he was kind of right and kind of wrong. On the one hand, most programmers today know virtually nothing about computers and programming in the sense that a programmer in the early 60s knew computers and programming. Yet programmers today are more prolific and in demand than ever before. I suspect we'll see something similar play out.
- difficult-to-automate or low-value dev work is already being offshored to cheaper locations

- a lot of dev work is just plumbing, config files, packages, APIs and tedious non-challenging stuff

- demanding roles have a high barrier: leetcode, certifications, open source, github, networking and so on - and it will only get higher imo

- coding is treated as a basic skill taught in schools, like English or maths - not itself a differentiator

AI still does not have access to most businesses' internal context, terminologies and so on - that is where we still have leverage.
Existential questions aside, this post is a non-article that adds nothing to this conversation. It's the kind of noise that I'd expect LLMs to replace.
There has been a shift in how we discuss this question.

For a long time, the answer was, “No jobs are at risk, AI can’t compete in any scenarios. At best, it’s a tool.”

Now the answer is, “Only a few jobs are at risk, AI can only compete in a small range of tasks.”

It’s possible we’re at the beginning of a hockey stick graph.

So what would it look like for AI to make the leap to mid-level developer? It would have to understand:

1.) The codebase

2.) The technical requirements (amount of traffic served, latency target)

3.) The parameters (must have code coverage, this team doesn’t integration test, must provide a QA plan, all new infrastructure must be in Terraform)

4.) The end goal of some task (e.g. integrate with snail mail provider to send a customer snail mail on checkout attempt if it was denied for credit reasons)

It would then have to make a design based as much as possible on the existing code style and library choices and follow it.

This is all probably possible now, although perhaps not for a general AI or LLM. But someone could build a program leveraging an LLM to provide a decent stab at this for a given language ecosystem (sketched below).

The hard parts:

Point 2 requires an understanding of performance, which is a quantifiable thing, and LLMs up until now have been bad at making math-based inferences.

Point 3 requires the bot to either provide opinions for you (inflexible) or to be very configurable for your team’s needs (takes longer to develop).

Point 4 requires a _current_ understanding of libraries, or the ability to search for them and make decisions as to the best ones for the job.

-----

What about extending the above for a senior role? Now the bot has to understand business context, technical debt (does technical debt even exist in a world where bots are doing the programming?), and other “situational factors” and synthesize them into a plan of action, then kick off as many “mid-level bot” processes as necessary to execute the plan of action.

The hard parts:

Current LLMs are pretty uninspired when suggesting ideas.

Business context + feature decisions often involve math, which again LLMs aren’t great at.
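For concreteness, here is a minimal sketch of what such a program might look like: gather the four inputs above, fold them into one prompt, and ask an LLM for a design, a patch, and a QA plan. Everything here (the prompt shape, the helper names, the model choice) is an assumption, not an existing tool:

```python
# Sketch of a "mid-level dev bot" wrapper around an LLM.
# Assumes the openai (>=1.0) Python package and an OPENAI_API_KEY in the environment.
from dataclasses import dataclass
from openai import OpenAI

@dataclass
class Task:
    codebase_summary: str   # 1.) relevant files, code style notes, library choices
    tech_requirements: str  # 2.) traffic served, latency targets
    team_parameters: str    # 3.) coverage rules, "no integration tests", Terraform only, ...
    goal: str               # 4.) the ticket itself, in plain language

def propose_change(task: Task) -> str:
    # Fold the four inputs into a single prompt and ask for design + diff + QA plan.
    prompt = (
        "You are a mid-level developer on this team.\n\n"
        f"Codebase context:\n{task.codebase_summary}\n\n"
        f"Technical requirements:\n{task.tech_requirements}\n\n"
        f"Team parameters:\n{task.team_parameters}\n\n"
        f"Task:\n{task.goal}\n\n"
        "Reply with a short design that follows the existing code style, "
        "then a unified diff implementing it, then a QA plan."
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Most of the difficulty would live in how `codebase_summary`, `tech_requirements`, and `team_parameters` get populated and validated, which is exactly where the hard parts above show up.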
I'm probably a bit older than you. Even so, I plan to work as a programmer for roughly the next 25 years at least. I'm not worried about competition from AI becoming a meaningful factor in that timeframe, any more than actors in the past few centuries have been worried about being replaced by trained parrots.
Imagining that AIs would truly be better at writing code than humans one day, I wonder who would write the instructions telling the AI what code to write. It may be a whole new kind of job, kind of like programming instructions that the AI has to follow. Maybe we could think of a name for that. Programmers maybe?
I think it depends. As a programmer, you really should carefully watch the development and adapt to it. It might be that the way programmers work in 10 or 20 years will really be quite different from how we work now. Maybe we just talk to some AI and give it high-level instructions on how to design some system, speak about bugs, etc., and the actual coding, debugging, etc. is delegated to the AI. So your role transitions more into management.

Or maybe that's not realistic, and it will look somehow different. In any case, I think it will change, and you should be prepared to adapt to it.

If you ignore the whole development, your job might indeed be at risk, or just be worth much less.
I think there are two questions.

The first is whether the current generation of AI is capable of writing finished software based on an end user's natural-language specification. I think the answer to that is no. There are too many loose ends, and these AIs are not good enough at logical inference and root cause analysis, especially when the logic is extralingual.

The second question is whether these AIs can make developers more productive so that fewer developers are going to find employment. I'm not sure. Yes, they will make us more productive, but the demand for developers may well outstrip the added supply. If something gets cheaper, people tend to use more of it.
Yes. For the time being you are still relevant.

But AI is slowly taking over these tasks, pretty much the same way the conveyor belt revolutionized our industry.

It is difficult to predict how fast this will happen, but somewhere within a 5-10 year period high-level programming will definitely be different from what we are used to today.

As the AI technology improves, programming will be a matter of dictating what you want, from what sources, and how it should be displayed.

Code will become better, more stable, accessible to all, and will make us all high-level programmers.

There will still be programmers, however they will most likely be much more specialized than they are today.
I'm naive in the AI domain.

LLMs are trained on existing content, which is presumably original so far. This means an LLM cannot think beyond what it has learnt from past resources.

- Would it be safe to assume an LLM cannot produce new original work?

- I worry the internet will soon be flooded with LLM-generated content, which will again feed into LLMs to train them further. IMO such feedback will hardly help; in fact it may make LLMs more rigid in their answers.

I know there is a parallel effort going on to detect machine-generated content so that it doesn't rank on top, but it is still in a nascent phase.
Much of coding today is doing the same thing all over again in a new environment or new language or tool. LLMs will have plenty of examples of this and will be able to generate those products faster and better.

Original code will be harder to duplicate. We can assume that those coders will use an LLM as a tool to speed up their development, but won't be able to trust it for mission-critical items. Although, writing this, I realize that an LLM may become an excellent way to bug-test.
I think that "mass-produced" code, the type that is fairly typical for many companies these days, built with systems like React Native, Xamarin, Flutter, etc., is almost certainly going to be replaced by auto-generated stuff.

I guarantee that the C-suite people are already salivating over the thought of firing all their programmers.

For people like me, probably not, but then, my skills don't seem to be in demand so much.
Does the AI know your product?

Does the AI know your database?

Does the AI know your design, device, legal and other constraints?

No, and won't for any foreseeable future.
Look around and see how many programmers' jobs have been replaced by AI at your office.

None?

Search the net to find how many companies are cutting down their programmer workforce and replacing it with AI.

Still none?

Search for anecdotal evidence on forums of any programmer's job having been replaced by AI.

All talk points to promises in the "future" only?

Rest assured. It happens when it happens.
Maybe there will be a time when programming in high-level languages is seen the way punch cards are seen today.

Even if that happens, maybe the ones who know how programming languages work will be the good ones, just as today a good programmer knows about OSes, architecture, or compilers.
I have to deal with POs, PMs, helpdesk people, clients changing requirements every week... Programming is only a tiny part of my job; understanding what people want now and what they will want in 6 months is the real trick.
Turns out Tom[0] has the last laugh after all. The goddamn people skills will be what counts in the end.

[0] https://www.youtube.com/watch?v=hNuu9CpdjIo
Getting a strong John Henry vibe with programming (and law degrees). Things are going to get interesting... Tinkering with calculus and ML models feels like the eventual future of logic tasks.
Surely if there’s one task that’s AGI-complete, it’s programming. So if the AI makes programming obsolete, you probably have bigger things to worry about than job security.
For fun, I asked ChatGPT what it thought:

https://zmichaelgehlke.com/autocontent/essay-sde.html