Looking at our own experience and the experience of practitioners at other companies, it seems AI-assisted programming can deliver a realistic 10-20% overall productivity boost for developers (accounting for the fact that developers don't code all day). That's significant, but do we already see an impact on hiring beyond the regular ups and downs of the economy? Personally, I don't see it yet: the productivity boost seems to be eaten up by unrealistic project plans and expanding scope. But what's your experience?
My own experience is that it _might_ speed up juniors (and even that, I'm not convinced about). But juniors make up a tiny part of the overall throughput; I am easily 10x faster than they are at almost anything. So a 10-20% productivity boost for juniors is pretty much negligible for the company.

For myself, every time I try to use ChatGPT it completely fails to be helpful, for the same reason Stack Overflow also fails: the problems I have as a senior are too specific or too complex. Every single time, querying an LLM ends up being a waste of time.
LLMs help bootstrap ideas where developers lack skill. By its very nature, that generated, not-fully-understood code is the worst thing you can put in production.
The fact that it will be put in production by the metric ton is a guarantee that any initial gains in speed will be offset by countless hours of desperate debugging. And for that, LLMs are useless, in my experience.
As a non-native English speaker, I find LLMs amazing when it comes to writing documentation and design docs. In those situations, whether by proofreading or by helping me convey the message succinctly, they shine. Doing those tasks faster frees up some time for programming. When it comes to actual coding, I use them to write scaffolding or perhaps the trivial test cases (a made-up example of what I mean follows below), but nothing more complicated.

Does that mean I can do 10-20% more? No, I likely complete the same number of tasks, but I feel the quality has improved because I can save brain cycles for the things LLMs are not good enough at.
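To make "scaffolding or the trivial test cases" concrete, here is a minimal sketch of what I let an LLM draft; the `slugify` function and its cases are hypothetical, not from any real project:

```python
# Hypothetical example of LLM-drafted trivial test scaffolding:
# a parametrized pytest skeleton for a toy slugify() helper.
import pytest


def slugify(title: str) -> str:
    """Turn a title into a URL slug (toy implementation)."""
    return "-".join(title.lower().split())


@pytest.mark.parametrize("title, expected", [
    ("Hello World", "hello-world"),
    ("  leading and trailing  ", "leading-and-trailing"),
    ("single", "single"),
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```

Anything beyond this kind of mechanical case enumeration I still write myself.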
> realistic 10-20% overall productivity boost

With productivity being measured as what, exactly?

The Venn diagram of "discussions about LLMs" and "Idea Guys Talking" is close to a circle, in my experience.
I've seen juniors solve issues in an hour or two that could have taken them a day or two without ChatGPT.

The code isn't always the most desirable, but code review catches that.

On the other hand, this puts a larger burden on seniors, who now have to review subpar code at a faster pace.
> AI-assisted programming is able to deliver a realistic 10-20% overall productivity boost for developers (including the fact that developers don't code all day).

A dubious claim, at best.
Not a professional dev, but I've been in and around IT for many, many years.

My issue with LLMs is not whether they "will replace" your expensive staff. The problem, imho, is not how to produce 1,000 lines of code; even I can do that with an LLM now.

The real challenge is that businesses need people who understand the business as well. You've most likely all had arguments at work with someone whose code suggestion 'seems right' at the time, but in your core you feel it will become technical debt or create a security issue down the line.

So cutting down from, e.g., 10 experienced devs to 2 to save money will come at the cost of the business coming back to you asking for more and different things as the direction/strategy shifts.
In my experience, more junior devs don't have the skills to use LLMs properly. On top of that, most people in general aren't up to date on the latest developments; they will try free ChatGPT and form an opinion based on that. It will take some LLM education for them to become more productive, and some serious "you shouldn't use it to generate anything you couldn't write yourself" talks. But I can see the potential in the future.

I've kept on top of what's happening in this space and found ways to improve my work, but I doubt the improvement crosses 10%. This place will have skewed statistics, full of people who can use LLMs efficiently; outside, in the real world, I don't think the impact is noticeable at all yet.
Not a dev, but as a DevOps engineer I use LLMs for a few types of work:

1. A starting point for a problem I've no experience with, e.g. "How do I set up replication on a database?" I won't follow it blindly, but it gives me a starting point to search online.

2. Helping me put together proposals and documentation. It's great at setting up an outline for things or rewriting my badly written drafts.

3. Writing regex (a made-up example of what I mean is at the end of this comment).

As for impacting jobs specifically, I haven't found any impact yet. If anything, I've seen companies either put down blanket bans on using AI (for fear of people inputting sensitive data), outright block the URLs on the VPN, or put very strict policies in place for how it can be used.
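To illustrate the regex point: below is a minimal, hypothetical sketch of the kind of pattern I would rather have an LLM draft than write from scratch. The log format and field names are invented for the example:

```python
# Hypothetical example of regex work I offload to an LLM: pulling the
# timestamp, level, and message out of a log line (log format is made up).
import re

LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>[A-Z]+)\] "
    r"(?P<message>.*)$"
)

line = "2024-01-15 09:30:01 [ERROR] replication lag exceeded threshold"
match = LOG_PATTERN.match(line)
if match:
    print(match.group("level"), "-", match.group("message"))
```

As with the replication question, I check the output against the docs rather than trusting it blindly.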
We have (probably unwisely) put off hiring a junior because I'm so productive with an AI assistant. I have built a complex desktop app in less than 8 months by myself. I have deep domain knowledge and have only recently brought on a systems analyst to help with QA testing. The PM has no idea about the requirements, at all. My boss (technical architect) has barely had to lift a finger.

This was my first app in WPF. Huge learning curve, but Copilot has been indispensable.
No, and I am not expecting much if any change, at least not until the technology drastically improves.

AI can still help a ton with predictable and routine code, but it sucks at hard code. That gets even more true the more niche the application is. I would expect it is making an impact in CRUD applications, but anything with some amount of depth is going to require the same amount of human effort to power it.
> the productivity boost seems to be eaten up by unrealistic project plans and expanding scope

This is what happens with every improvement in software, whether it's better hardware, improved tooling, more libraries, or simply more programmers. The increased expectations always create more, not less, demand for programming. All a change in tooling does is shift which skills are in demand and which features are demanded.

The juniors are probably helped most by these tools. I find the benefits a mixed bag. Even as a better autocomplete, it (GitHub Copilot) frequently makes the most trivial grammatical errors, such as unmatched parentheses, which a 'dumb' autocomplete would never produce. And sometimes the code looks so good that it's easy to overlook the one insidious semantic error that ends up costing you debugging time.

I won't be replaced by AI, but I might be replaced by a younger dev who is able to get more value out of these newfangled tools than I can.
As far as I can tell, it will only increase developer demand. The main driver is starry-eyed investors shoveling money into AI startups. This isn't just a boost to developer employment; it might be *the* life-support machine preventing the bottom from falling out of the dev hiring market right now.

From what I've seen, the productivity boost is negligible and might even be negative. The developers I've seen who claim a productivity boost seem to discount all the times it leads them astray. That needs to be deducted from the gains of getting the odd snippet of code a few minutes faster. I know that most of the times I've asked questions I couldn't trivially google, it gave me bullshit answers.

It's ironic, really. But it just goes to show that the mainstream corporate media is very, very good at spinning a narrative out of fiction. Even junior devs are convinced by this narrative.

The big risk to devs' job prospects is not AI; it is tech industry consolidation: Microsoft, Amazon, and Google growing their competitive moats and swallowing or destroying startup competition. The more secure they feel in their market position, the more likely they are to swap their workforce for cheaper, lower-quality workers. This is what happened to Detroit in the 1950s, and why it went from a thriving middle-class city with tens of thousands of auto-industry SMEs to a desolate wasteland run by three vertically integrated companies that conspired to strangle all startup competition.
Here's the thing I'm wondering about most.

Anything current LLMs can do code-wise is overshadowed, for me, by what they can do product-, design-, and marketing-wise. If LLMs actually break the productivity ceiling, why would any developer bother working for anyone other than themselves? Having a job and working for someone else just disproportionately improves their wealth while limiting yours.

If this turns out to be the case, I'd expect to see unprecedented numbers of micro-software shops spring up overnight. Why burn yourself out working for a FAANG or F1000 company when you could make more and be in control of your own destiny and happiness? A rise in entrepreneurship should follow any actual increase in LLM-driven productivity across the board.
Depends on the context.

Most people in here fail to understand how broad and varied the software industry and its culture have become. This place, like every community on the internet, is a bubble.

So, if you're talking about basic CRUD in Java/Python/Node done remotely from third-world countries and Eastern Europe for companies not directly in the technology or finance sectors (e.g., retail, services), then the answer is a resounding yes. People in Poland, Brazil, and India are certainly using LLMs to do faster what they did before: spitting out code they don't understand, copied from Stack Overflow.

True, most of the time it is bad code. But anyone who hires from third-world countries is not overly concerned about code quality.
Counterpoint: $employer evaluated GitHub Copilot a couple of months ago and decided to pass. Personally, I liked the magic autocomplete, even if it mostly saved keystrokes rather than time, but I couldn't get anything useful out of the other parts.
Definitely!

1. Many companies are hiring "AI" engineers. My guess is 90% of these jobs are virtue signaling to investors, and those positions will go to folks who are good at appearing competent in interviews. Yay! More overpaid, incompetent colleagues, just what we need.

2. My editor saves me about 2 minutes a day with smart printf/loop/variable completions (JetBrains editors; no sarcasm, I like this!).

3. I am wasting time responding to emails from PMs suggesting that "we don't have engineering capacity to do XYZ, but maybe we can use an AI to do it???"

(I am not anti-GenAI; I've used it to create flyers and do pretty cool stuff!)
While I don't think AI will replace anyone in the near term, I find it interesting how many people in the IT field react to any attempt to automate code development to a certain degree using AI.

The reaction is quite harsh and emotional ("If you think you can be replaced by AI, it means you are a shitty developer" is quite popular). This says less about LLMs than about our own insecurities.

Yes, you CAN be replaced, by AI, by any other technology shift, by younger, more productive developers, or simply by market forces ruling your skills out of favour. It happened before; it'll happen again.
As far as I've seen, LLMs used to write code are only good for getting juniors to a PR faster. But they slow down everything else in the process. Reviews take ages because there are random nonsense landmines scattered around; previous PR feedback is less likely to be applied to later code because they're not writing it; bug fixes take much longer because no one understands the code well; and there's just so much more code to deal with at every step, since it doesn't matter to them whether they're copy-pasting 10 lines or 1,000.

I've tried using them myself, but they end up sapping more of my time than they save because of all the dead ends they send me down with plausible-sounding bullshit: things that use real terms, but incorrectly. I basically treat LLM output like that one guy who doesn't know anything except the existence of a bunch of technical terms and throws them around everywhere trying to sound smart. It might be nice to learn that a term exists if you're unfamiliar with the topic, but only so you can go look up what it actually means elsewhere.
In my experience, most coding work is on established code bases. There's that well-thought-out senior-engineer phase at the start of each project, but the vast majority of future work starts with grokking the existing code base.

I don't think we'll see much of an impact from LLM-generated code until these systems are trained on the code and the existing user and dev documentation of the project itself.

As for the impact on junior engineers and prospective candidates, I'd say virtually zero.
I have tried using LLMs on the legacy C++ codebase I work on, and the only thing they could reliably do was generate code for unit tests.

When I fix bugs, they're usually not helpful, because the work is debugging and tracking down where the bug is.

When I develop new features, they occasionally use the wrong lock or make up APIs that don't exist. I find they get in the way more during development.

For C# and .NET Core, I found IntelliCode to be pretty useful.
It'll be really hard to measure this, because you can't easily isolate the impact of LLMs from the rest of the economy. I'd guess almost anything will overshadow the LLM numbers (e.g., tiny changes in Fed interest rates having more impact on job posts than LLMs ever will).

I only have anecdata to share. My coworkers and friends seem to be going through the disillusionment phase, finding LLMs to be a better (if mildly outdated) search engine and a helper for simple, well-known tasks. I guess the 10% productivity improvement makes sense from what I've seen.

I've also met company owners who thought they could reduce their workforce drastically because of LLMs. I can only wish them good luck; it's going to be bumpy for them once they realize the mess they will be in (e.g., spending more time troubleshooting systems their engineers never understood in the first place).

TL;DR: No, except at places you wouldn't want to work anyway.
At the company I currently work at, we are catching people who take a very simple take-home test, use ChatGPT, and then proceed to be unable to explain why "they" solved the problem that way.

Also, people send a bare-bones ChatGPT cover letter when including one is optional.
It's just a more effective Google, now that Google has become basically useless for getting results that aren't 'SEO-optimised' (i.e., full of rubbish). Its value is still mostly in generating boilerplate.
Idk. We got access to GitHub Copilot, with our project loaded into it, to test it and decide whether we should use it. Beyond unit tests and trivial code it's almost useless. Even for unit tests: I tried using the chat feature to ask it to generate unit tests for a specific function. At first I needed to convince it to generate them, since it didn't 'want' to; when finally convinced, it spewed such garbage that I would have written everything faster myself.

In other words, there's some value in autocompletion for simple use cases. But the moment the situation gets complicated, say, adding a feature that touches several files and needs some data processing, or writing threading-related code, or fixing threading bugs, it just takes more time (at least for us; my colleagues' experience is similar).

Maybe other companies did manage to squeeze real help out of it in more complex situations, or maybe GPT-4.5 is much better than Copilot (I tried to use public ChatGPT to detect a threading bug and it still couldn't find it; in the end I found it myself). But at least for us, the experience with Copilot wasn't that stellar.
LLMs are to a programmer what a rhyming dictionary is to a writer.

A good poet says complex things using few words. A bad poet conveys something simple using a lot of words.
Specifically for Copilot: it's good for exploration-type work, or for getting quick examples of things without diving into specific docs/search all the time. Using the chat is convenient, and I can get back to previous examples quickly. I've also found it useful for filling in boilerplate when writing unit tests. As for actual coding, I find the suggestions annoying most of the time and have turned them off (I have a shortcut to toggle them back on if I'm feeling stuck). Overall it's an OK tool, but I don't find it mind-blowing. It's good if you're playing around with things you don't understand, or if you ask it for different approaches to doing something. I've found some gems that way, but in my work I don't encounter those often enough to call it a big boost to productivity.
Our tech lead was very enthusiastic about LLM-generated code for a minute. That lasted until the third or fourth time one of us reviewed a PR he'd written that way and requested a ground-up rewrite.

My guess is that, by this time next year, the vast majority of people and companies currently enthusiastic about generative AI will be pretending they never had anything to do with it, a small hardcore of true believers excepted. The hype cycle will then begin anew with something else.
Nope. Beyond starter projects, raw usage of ChatGPT hardly leads to any productivity gains in an enterprise setting.

So far no; but in the future, with more specific and enterprise-suitable tooling, likely.
In my own experience, LLMs are great for speeding up boilerplate and test cases (note 1). And the closer you are to the business, the less useful they become (e.g., I used to work at an insurance company, and the LLM insisted on summing interest or taxes on some payments by hallucinating variables).
Note 1: On tests written by juniors, it helped us have many more tests, but it did not improve their quality. LLMs have no way to measure coverage, so they would just generate "clever" edge cases (NaN and infinities, even though those cases were already caught elsewhere in the code; a made-up sketch at the end of this comment shows what I mean), because the juniors would just accept any code recommendation from the LLM. Seniors, on the other hand, would only accept tests they had written good descriptions for... Were the seniors faster? Yeah, they even enjoyed writing tests now. But reviews became much more about catching useless cases.
And yeah, I'm aware tests are not only about coverage, but it was one of the concerns we had.
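To illustrate those redundant "clever" edge cases, here is a minimal, hypothetical sketch (none of these names come from the real codebase). The boundary already rejects non-finite inputs, yet the generated test probes NaN again further downstream:

```python
# Hypothetical illustration of a redundant LLM-generated edge case.
import math


def parse_rate(raw: float) -> float:
    """Boundary validation: everything downstream only sees finite rates."""
    if not math.isfinite(raw):
        raise ValueError("rate must be finite")
    return raw


def monthly_interest(principal: float, rate: float) -> float:
    """Downstream computation; rate has already been validated upstream."""
    return principal * rate / 12


# An LLM-suggested test like this passes, but it exercises a value that
# can never reach monthly_interest() in practice, so it inflates the test
# count without adding meaningful coverage.
def test_monthly_interest_nan():
    assert math.isnan(monthly_interest(1000.0, float("nan")))
```

Juniors would accept this kind of test as-is; the seniors' review time went into spotting and rejecting them.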