Last week I used ChatGPT for the first time for a real-world task at work. It was a self-contained Lambda function to perform some admin tasks, so it seemed like an ideal fit. Although the experience was good, it's far from the end of programmers. This was my experience:<p>* Although ChatGPT is pretty good at generating code, it kept making simple mistakes such as calling non-existent APIs or introducing bugs. Some of them it could fix itself; some I had to fix.<p>* The code provided worked well for the "happy path" but failed miserably for some corner cases. I had to fix that manually.<p>* The code was working, but I wouldn't consider it production-ready. It required some cleanup, unit tests, etc. Again, some of this with ChatGPT, some without.<p>* Not to mention that I was the one with the knowledge about the domain, what problem to solve, a vague idea of how...<p>Not to pick on OP, but extracting a few seconds of video from a file is a pretty straightforward task; you can essentially do it with a bash one-liner [1] (sketch below). My biggest question is how ChatGPT performs with a large codebase, contributed over time by different authors, with complex domain logic and layers of abstraction.<p>I also had a brief existential crisis, but I just shrugged it off and got back to work.<p>[1] <a href="https://askubuntu.com/questions/59383/extract-part-of-a-video-with-a-one-line-command" rel="nofollow">https://askubuntu.com/questions/59383/extract-part-of-a-vide...</a>
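To illustrate, the one-liner is essentially something like this (paraphrasing the linked answer from memory, not quoting it; exact flags vary between ffmpeg versions and between stream-copy and re-encode variants):

    ffmpeg -i input.mp4 -ss 00:00:00 -t 00:00:05 -c copy output.mp4

Here -ss is the start time, -t is the duration to keep, and -c copy skips re-encoding (fast, but the cut lands on keyframes, so it's approximate).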
What happens to social mobility in the post-GPT world? Knowledge work (not just software) has been one of the big options for people to climb the social ladder. If AI can reasonably do all knowledge work in the future, the number of social-climbing opportunities will drastically decrease. And no, UBI will not create more opportunities for social mobility. It seems like more and more people will have to compete for fewer and fewer social-climbing opportunities.<p>Also, what happens to Europe? All these companies behind LLMs are from the US, and Europe is nowhere to be found. This seems like it will dramatically accelerate the wealth difference between the US and the EU.
I don't get the overall doom and gloom towards LLMs in the software field.<p>If you are a software engineer, this will boost your productivity tenfold in the upcoming years. Now you don't need to hire junior devs and can just build the product of your dreams with very limited capital.<p>In my opinion this technology will be as democratising as YouTube's early days.<p>Instead of worrying, learn to work with it. It will be harder for large companies/large teams to extract value from this compared to small companies/small teams.<p>It means competition between companies will increase, but it isn't necessarily bad for existing software engineers, especially solo founders.
I tried to get GPT-4 to generate a basic React app that had a public page and a private page; you get access to the private page by authenticating with a Google auth popup. GPT valiantly generated code and instructions for Google auth. The code was impressive but buggy (outdated API version), but successively pasting errors into ChatGPT went most of the way toward fixing it.<p>The instructions for configuring Google auth were off. I tried a number of different ways to get GPT to give me the right instructions, but to no avail.<p>So it was back to the old way: spending a few hours reading Google's documentation (which I'm doing today) to figure it out.<p>Once I'm there, I feel confident I could better coach ChatGPT to instruct me, though I wouldn't necessarily need the help at that point.<p>On the code side, staring at the Google auth API code it had generated, I was faced with a hard truth: I didn't understand this code. To iterate with it, essentially to <i>develop</i> it, I would continue to be dependent on GPT. Even if there was a one-liner needed, I wouldn't be able to come up with it on my own. I'd always have to rely on this outside "brain". How can that be more efficient than a tight REPL loop conducted by me, an evolving master of this API?<p>And how will we humans even maintain knowledge of these API surfaces if we are not putting in our hours and hours of repetitive usage of them? We become ignorant of the evolving capabilities of the computing platform. And ChatGPT becomes useless without humans who understand what's out there and what's needed.
> I think over time, we’ll see that what many of us really liked about building software deep down wasn’t coding, but intricately imagining the things computers could do for us and then getting them to do it.<p>Spot on. It's a good time for existential reflection: Who would you have been hundreds or thousands of years ago? Who will you be now that technology is radically changing again?<p>There will always be interesting, creative challenges like programming, whatever form they take.
I can't wait for the positive feedback loop of statically trained LLMs being retrained on data that was generated by the (N-1)th generation of statically trained LLMs.<p>There's so much talk about what these models can generate, which is cool in relation to plugins, but there's still a lot of interesting code to write, companies to build, and ideas to formulate that an LLM cannot do on its own. If you're terrified of your software engineering job being at risk, I urge you to just take a beat.
I appreciate this article and can sympathize with the disorientation the author and many here at HN feel. It can feel unnerving to know that parts of our jobs might become automated.<p>I'm processing this news in realtime like many of you and forming a plan:<p>1. Understand how LLMs work. I've heard the Wolfram paper is good; open to more suggestions here.<p>2. Continue to practice using real implementations of LLMs, including ChatGPT and Copilot.<p>3. Find pain points within our company that AI can make more efficient, and implement solutions.<p>If anyone feels the same way and wants to form a working group with me, give me a shout. Email is in my bio.
We need to push the notion that "closed-source LLMs are super dangerous, an existential risk to humanity".<p>Basically, we need to equate "safety" in LLMs with "being open-source".<p>OpenAI keeps talking about "safety" as the most important goal. If we define it to mean "open-source", then they will be pushed into a corner.
So, the part where AI makes human labor irrelevant, and where that's a disaster for 99.999 percent of humans unless the whole economy is restructured, isn't exactly news. If ChatGPT doesn't do that, something else will. It wasn't going to be more than 50 years no matter what, and now I don't think it'll be more than 20.<p>The part I'm finding kind of a shock is the impact of the centralization on what you can even <i>think about doing</i>. If your application falls under their arbitrary definition of "unsafe", then you can't do it. Not even manually, probably, because the infrastructure for that will go away. If your <i>one-off question or task</i> doesn't meet their approval, it doesn't happen.<p>Basically, not only do the owners of these things become the only really important people in the economy, but they also get a new kind of direct control over people's lives.
Are people asking it to generate code like "generate a random color hex code", or are they trying to use it to write code they're going to put in production for users, with access controls and various complexities?<p>Because yeah, it works fine for basic programming things, but I believe you need to know wtf you're doing when it comes to anything more complex, even something basic like some of our single-endpoint services.
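For scale, the "basic" end of that spectrum really is a one-liner. A minimal Python sketch of the random-hex-color task:

    import random
    print(f"#{random.randint(0, 0xFFFFFF):06x}")  # e.g. "#3fa2c4"

It nails that sort of thing every time; it's the access-controls-and-complexity end where you need to know what you're doing.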
On the one hand, I think a lot of what ChatGPT can do is pretty amazing and a bit scary as a software engineer. On the other hand, I look at the projects I've done recently and throughout my career and find it hard to see how something that can solve bite-sized problems can tackle a software project that takes months to come to fruition. I'm currently working as an engineer doing a mix of Kubernetes, cloud, Golang, bash scripting, git manipulation and other types of work. I recently upgraded 40+ repos to migrate to our latest build infrastructure, and I had to reconcile 5+ years of folks doing things slightly differently. There was a constant process of running some script to make changes, finding outliers and one-offs, figuring out the fixes, running tests and figuring out the right way to ensure things were correct. I just don't see how ChatGPT could have done that project. Maybe it could have reduced the time it took me to write some supporting scripts, but I don't see it materially improving the time it took to do this project.<p>I suspect many large IT organizations are like this.
I don't get why some coders/devs/software engineers are surprised that LLMs can now pretty much create whole programs out of a prompt.<p>Wasn't this the final objective of the evolution of programming-language abstraction, from binary/assembly to natural-language programming? I think it is awesome that more people will be able to create software/products, as this accelerates innovation cycles a lot.<p>And, for now, I believe devs who don't rely solely on copy/paste coding from Stack Exchange don't need to worry about their job stability, no?
OK, so we have software written by AI. The AI is clever; it does not need good variable names or function/method/class names. Some of the stuff it will name according to the passed specification, so that part will be understandable, but the further it goes, the more generic everything will get, drawn from a kirjillion other code snippets on GitHub. And it all will be working.<p>Until someone starts testing this and finds a bug. And then the AI will say, hey, there is no bug, I don't make mistakes. So you need a human to look at the code: a huge pile of spaghetti code with cryptic names and conventions, and code patterns that fell out of fashion years ago but that the AI thinks are fine, since there is a lot of code that uses them.<p>How long will it take to fix anything? How long will it take to extend the code?
I agree that GPT will make creating software redundant.<p>The writing is definitely on the wall for outsourcing and MVP-style work. GPT can create a landing page and a backend/frontend for a business _literally today_. You just have to ship it, but it won't be long until that isn't needed.<p>There will still be a lot of value in understanding how systems work and interact with each other, at least until ML is able to build and maintain entire systems.<p>Until that happens, there will still be a lot of value in being able to dive into codebases and refactor/optimize as needed, at least in the medium term.<p>Once platform engineering is mostly automated and running AI-generated binaries is de-risked, then code quality doesn't really matter. Hell, _code_ won't even matter at that point.
There has never been a case where better tools meant fewer software developers; software will only get more complex and full-featured as competition rises because of it.
> OpenAI made the extraordinary and IMO under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn’t a walled garden that only the first mover controls. I don’t fully understand why they went this way, but I’m grateful they did.<p>Did OpenAI just commit a trillion-dollar mistake?
Working as a software engineer, I often feel like I am living in the world of The Handmaid's Tale as a woman with a functioning womb, where the whole of society is organised around controlling everything I do. Hopefully LLMs will change this, but I do not underestimate the intellectual laziness of most "knowledge workers".
> Yesterday, I watched someone upload a video file to a chat app, ask a language model “Can you extract the first 5 s of the video?”, and then wait as the language model wrote a few lines of code and then actually executed that code, resulting in a downloadable video file.<p>I missed this. Can someone show me what he is talking about?
One thing I don't understand well is how much computation using GPT-4 takes. Some of these discussions remind me of Bitcoin as a global payments processor: sure, it can work, but it's doing a tremendous amount of computation and the maximum rate of transactions it can sustain is pretty low.<p>I know it used a <i>huge</i> amount of energy / GPU cycles / time to <i>train</i>, but now that the weights are computed, what's involved in running it? I know the model is huge and can't be run on an ordinary developer's machine, but I believe requests to it can be batched, so I don't really know what the amortized cost is. Right now, this is all hidden behind OpenAI and its credits; is it running at a loss? How sustainable is using GPT-4 and beyond as a day-to-day part of professional life?
LLMs will be the end for a portion of programmers, for sure. We all know people at our companies who aren't in this for passion but for a paycheck. And while so far it has been fine to code just for a paycheck, their time is up. We soon won't need code monkeys who just produce OK code; we will need people who actually know what they are doing and are passionate about what they do.<p>We still need actual experts to vet the code LLMs produce and to choose the optimal solutions. This is what senior devs have always done with junior and mid-level devs. There are people who can write code, but someone needs to review and approve what they have done.<p>Obviously LLMs will also eat into that space, but until we come up with AGI, LLMs alone won't be able to completely replace humans in software.
In the last few weeks, I've noticed myself going through several stages of the ChatGPT "disease", or whatever it is.<p>My first reaction was to be afraid for my money-making skills. My second was fear about us making ourselves irrelevant--that fear still lingers.<p>My third wave of fright, cemented by days burning my eyes looking at a screen, parsing logs and trying to figure out bugs for my corporate master, was: "When did my imagination go on vacation? Old boy, don't tell me now that you have run out of ideas of things to make, of things to have an AI army help you build." And now I dread that all of this AI is just hype, that it will never be good enough to come for our jobs without also coming for our jugulars, or that we will make it too damn expensive to matter[^1].<p>-------<p>[^1]: Capitalism has a way of leveraging economies of scale to make certain goods cheaper. But there are physical limits--what if Moore's law with regard to power consumption is really dead, and we as a collective <i>really</i> decide to conserve power?
I don't think this is quite correct.<p>If the LLM has seen lots of instances of an API being used, it can write code to target the API. It can generalize to some degree, but things go off track the further your requirements are from the training data.<p>If your code is a lot of duct tape between well-documented, or at least well-named, APIs, that code can be automated. Which is great. That kind of code was always boring to write.<p>I'm less convinced that LLMs will be great at inventing new abstractions to map to a problem domain, and wiring up those new abstractions in a large codebase.<p>They'll need augmentation, fine-tuning, guidance, and it's not clear how well it'll all fit together, and where the limitations of the tech will show up as capability cliffs.
I am currently doing some difficult work that involves figuring out the right computational geometry algorithms to apply to my dataset in order to get the answers my users need in a reasonable time. ChatGPT is of no use to me there.<p>When I need to ask for boilerplate code for fetching a web resource or using a well-defined API, ChatGPT is great.<p>ChatGPT has made the mundane plumbing a lot easier. It is a threat to plumbers at this point. Many of those plumbers are now freed up to do more valuable work. I am happy to have it, so I can focus on higher value work.<p>If your only skill is at this kind of low level plumbing, you are in danger. But I doubt this is the case for most.
> OpenAI made the extraordinary and IMO under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn’t a walled garden that only the first mover controls. I don’t fully understand why they went this way, but I’m grateful they did.<p>Why is this extraordinary? What would be the advantage of going through all the effort of defining a new format just to create busywork for people trying to integrate with you?<p>It's not like there would be anything stopping Bard/Alpaca/etc. from reading the same format as OpenAI.
While chat is an intuitive interface to start with, I think we'll see more integration of these NLP models into traditional tools, like we saw with Adobe Firefly and Unreal Engine. That way users retain control for fine-tuning and doing problem-specific tasks, but also gain this superpower of doing many actions with few words.<p>The key thing for adoption is to make models smaller and more context-specific: we've seen how LLaMA was downsized to run on commodity PCs, and we've seen how Stable Diffusion can run on mobile phones. Even when we have to use larger models remotely, cost and ownership matter.
Check out: "Augmenting Human Intellect: A Conceptual Framework" SRI Summary Report AFOSR-3223 by Douglas C. Engelbart, October 1962 <a href="https://dougengelbart.org/pubs/augment-3906.html" rel="nofollow">https://dougengelbart.org/pubs/augment-3906.html</a><p>> Accepting the term "intelligence amplification" does not imply any attempt to increase native human intelligence. The term "intelligence amplification" seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring.<p>Now that the computers can talk and think and program themselves, and we can expect them to become exponentially better at it (to some limit, presumed greater-than-human), there is approximately only one problem left: how to <i>select</i> from the options the machines can generate for us.<p>It's still an open-ended challenge, it's just a new and different challenge from the ones faced by all previous generations. And again, just to repeat for emphasis: this is the <i>only</i> intellectual challenge left. All others are subsumed by it (because the machines can (soon) think better than we can.)
Current higher-level programming languages were developed so that humans could write software closer to their natural language. If in the future humans will be writing and debugging little code, these LLMs will naturally evolve toward writing assembly directly. Scary to think about, but it also makes me wonder how many non-technical people cope today with the "black box" of a computer.<p>About twenty years ago, I had a professor explain to the class that Rational Rose would be replacing us all... yet here we still are.
Not a ton of new material for me to think over, but did catch this random mention, which is super cool & I didn't know:<p>> <i>OpenAI made the extraordinary and IMO under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn’t a walled garden that only the first mover controls. I don’t fully understand why they went this way, but I’m grateful they did.</i>
Comparatively, GPT has definitely worked here for less-experienced engineers. A coworker (Mech. E.) last week got ChatGPT to create a Python HTTP GET for him, and today got it to write the code to drive a bunch of relays off a Pi using I2C. Once he had it working, he sent me a DM: "Is 0xFF hex?"<p>So accelerant, definitely. Beyond that, I'm on the sceptical side, but I accept there's quite a chance that's the wrong way to bet.
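For a sense of what that relay code boils down to, here's a minimal sketch (assuming the common smbus2 library and a PCF8574-style expander at address 0x20 -- both assumptions for illustration, not his actual setup):

    # Sketch: drive relays from a Pi over I2C (bus 1 on most Pi models).
    from smbus2 import SMBus

    RELAY_ADDR = 0x20  # hypothetical I2C address; check your board

    with SMBus(1) as bus:
        bus.write_byte(RELAY_ADDR, 0xFF)  # 0xFF = 255: set all eight bits
        bus.write_byte(RELAY_ADDR, 0x00)  # clear all bits

(And yes, 0xFF is hex. Whether all-bits-set means relays on or off depends on the board; many are active-low.)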
Reducing coding to "deciding what to build" is not a pleasant endgame.<p>In that endgame, anyone who can speak can command AI to do whatever they want it to do. Any kid with a louder mouth can outwish the wisest man on earth.<p>That means shortsighted, impulsive criminals can use it to learn how to steal. Shady politicians can use it to astroturf entire campaigns. Everyone knows the tropes, but it bears repeating as we all march dumbly towards what's coming.<p>It is far easier to destroy than it is to create. And humanity, aside from China, has not demonstrated any sort of sensible strategy to temper the tendency of destruction to outpace harmonious creation when it comes to AI. The more I see AI emerge and see people use it for exactly what people fear it shouldn't be used for, the more I feel China's centralized adoption of it, though maybe not "feel good", might be the DNA that survives in the natural selection of societies.<p>I know of one person who pays for GPT, and I'm guessing they use it to astroturf demand for their own business's products, since that's what they were doing by hand when they were younger.
I think as rather tech savvy people, we forget the degree to which most of the world population really struggles to use computers well[1]. The potential of this chat based AI technology to expand the market is massive.<p>[1] <a href="https://www.nngroup.com/articles/computer-skill-levels/" rel="nofollow">https://www.nngroup.com/articles/computer-skill-levels/</a>
> OpenAI made the extraordinary and IMO under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn’t a walled garden that only the first mover controls. I don’t fully understand why they went this way, but I’m grateful they did.<p>It's a good point, and some have already got this to work:<p><a href="https://twitter.com/vaibhavk97/status/1639281937545150465" rel="nofollow">https://twitter.com/vaibhavk97/status/1639281937545150465</a><p>Given that there are no technical obstacles to drop-in compatibility here, I wonder if we'll soon start seeing exclusivity requirements and such.
I love how all these AI researchers who write small code snippets in Jupyter notebooks all day think LLMs are the end of software. Not disparaging AI research; it clearly takes a lot of expertise and work to do it well. But that's not software development.
ChatGPT is an amplifier of our skills. As Ben Thompson said in a recent episode of the podcast Dithering, something along the lines that differentiated people will become more prominent. The existential threats are coming for current low-code/no-code companies (some of them will thrive after integrating with ChatGPT or the like, though). They are also coming for entry-level engineers, because there will be less demand for their time. Unfortunately, ChatGPT will not be an existential threat to those who are good only at drawing boxes. Well, maybe asserting influence by drawing boxes is a rare skill.
For now ChatGPT will refuse to solve any non-trivial programs, but I expect that getting behavior just right will require a prompt so specific that it's actually harder than just writing the source code yourself.<p>It's one thing to ask GPT to write a high-level script to trim 5s off a video using ffmpeg. It's another thing to ask GPT to make ffmpeg, or even to make a specific modification to ffmpeg.<p>It's hard to say how good GPT will be at real-world programming, since we currently can't try it out. Maybe it can scale to the task, or maybe it can't, but I wouldn't say that programming is "finished".
It is interesting how history repeats itself here: when Google started, it was just a list of links to the websites that contained your answer. As the tech advanced, it increasingly started giving out the answers on Google's own pages.<p>OpenAI's plugins are equally temporary. Right now they will be generating actions through APIs, but GPT-4 is probably already capable of performing the same actions in your browser. All it needs is a "control my browser" plugin that allows it to make that reservation on Expedia, without Expedia having any control over it. It will inevitably eat the world again.
My impression is that those AI code generators, if they end up working well enough that many people who don't know how to code can replace people who do, will be to coders what Monsanto is to farmers, i.e. we will have tons of devs who don't know how to do their jobs without those proprietary tools, who will struggle to earn enough money (they'll be easy to replace and cheaply paid) to pay for their code generator subscription. I'm not excited. I'm not too worried either, though :)
Why don't more people see this capability as an accelerator and/or amplifier? To borrow from the book "Smarter Than You Think" by Clive Thompson, it can turn us into "centaurs", fusing the minds of humans with the computing power of AI.<p>Doesn't this show that we can now use this technology to generate and execute code for modest problems that have already been solved, while we can spend more time on even more complex problems?
> Yesterday, I watched someone upload a video file to a chat app, ask a language model “Can you extract the first 5 s of the video?”, and then wait as the language model wrote a few lines of code and then actually executed that code, resulting in a downloadable video file.<p>What chat app? Is this GPT-4? I haven't seen anything executing the code that is generated. So is the above quote hypothetical, or what?
So I think the thing people aren’t getting is this: it doesn’t matter that AIs can write code. That’s not how it’s going to replace us. With a big enough AI, when we’re ready, we won’t have to write software. <i>It will be the software.</i><p>via <a href="https://fosstodon.org/@praeclarum/110070954879714216" rel="nofollow">https://fosstodon.org/@praeclarum/110070954879714216</a>
> OpenAI made the extraordinary and IMO under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API.<p>Interesting! Somehow I missed this. <a href="https://spec.openapis.org/oas/latest.html" rel="nofollow">https://spec.openapis.org/oas/latest.html</a>
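For anyone who hasn't looked at one of these files, here's a minimal sketch of the shape (a hypothetical service and endpoint, purely to illustrate):

    openapi: 3.0.1
    info:
      title: Todo API  # hypothetical example service
      version: "1.0"
    paths:
      /todos:
        get:
          operationId: listTodos
          summary: Return the user's todo items
          responses:
            "200":
              description: A JSON array of todo items

The plugin manifest points the model at a file like this, and the model reads the summaries and operation IDs to decide what it can call.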
A mental exercise for the doomsayers: if Stack Overflow + search engines were invented today, would you be saying the same stuff? It's clear to me that ChatGPT is a programmer accelerator, not a replacement. It's just another tool - a very good one at that.<p>90% of programming is communicating with other people - ChatGPT can't talk to people.
I'm not doubting that someday AI will be able to do better than junior devs, even the lower tiers of senior devs. But I doubt it'll be able to replace the higher-level seniors, at least not for tens of years.<p>Then I predict we'll get more business analysts than programmers, since management will still need people to translate their needs to the AI.
I'm stoked by the idea that NL processing is suddenly becoming much more accessible and powerful. Old, boring static text documents are suddenly "coming alive". Imagine what this means not just for software engineering, but for all engineering, even if not a single one of these documents is generated by an LLM.
An easy way to solve some of the problems of employment is to start reducing what "full-time hours" means. With this first wave of LLMs, we can start decreasing down to 35 hours. With the next wave, maybe we move down to 30 hours.<p>Once we can send LLMs to meetings with each other, we can move down to 15 hours of purely joyful work :-D
> Yesterday, I watched someone upload a video file to a chat app, ask a language model “Can you extract the first 5 s of the video?”, and then wait as the language model wrote a few lines of code and then actually executed that code<p>Have we already solved AI safety problems? It seems like LLMs can now execute shell commands on our computers.
GPT is "Drafts as a service"<p>That the draft happened to work on the video clip is more luck than something you want to bet your engineering life on.<p>You still need to go through an verify every character this statistical package spits out - it is not magic - it is just a probabilistic machine.
I could just be rationalizing here, but I think AI will be illegal soon. The idea of banning AI to protect many well paid middle-class jobs will be a slam dunk for any politician.<p>There will be no Post-GPT computing world, just the Turing police and console cowgirls.
Some betting market needs to host bets on when AI will put programmers out of their jobs. I don't expect it to happen for decades. (Although I might bet that it will happen earlier, as insurance in case it does.)
The devil is entirely in the details, unfortunately, and that will make GPT basically unusable for anyone but existing software engineers for doing actual non-trivial programming tasks. At least as it stands now.
> To be clear, it is also an end, or at least the beginning of an end, for a lot of the present day activities of software engineers.<p>Or the end of the beginning (of software development)...
ChatGPT will destroy GitHub and NPM long before it destroys programming.<p>What do I need them for if I can get equivalent code written for me on-demand?