Feels like the typical extrapolation: "Look how fast we've done the first 80%! It's only a matter of time until we reach 100%."

Except that it never works like that, does it? The further we go, the harder it becomes. It's impressive how fast they went from nothing to almost-fully-autonomous cars, but actually-fully-autonomous cars may never happen, who knows?

As a developer, I honestly feel more threatened by the coming energy crisis (the end of fossil fuels in the next couple of decades) than by AI replacing my job.
When I was 17 (I’m now 50), I was at a careers fair at my school. I was told: “Don’t go into computer programming, there won’t be any jobs because the computers will program themselves.”
> *Fast forward to today, and I am willing to bet good money that 99% of people who are writing software have almost no clue how a CPU actually works*

That many people do not know exactly what they are doing shows in the results. The people whose goal is to write software as robust and efficient as possible still have to know and control the details. It's like driving a car: you do not have to be an engineer to drive one, but the more you want to push the limits of performance, the more you need to know about the details. And as far as AI is concerned, despite the predictions and grandiose promises, we are obviously still a long way from replacing human drivers. I see no reason why software development should be any different. There are so many very complex issues involved that are not mentioned in the article. Just understanding the requirements of a piece of software will stretch the capabilities of AI for a few more decades.
Writing a program is still the most efficient way to explain a lot of things, even to another human. I've been in plenty of meetings where hours of explanation and examples only added confusion, whereas 20 minutes of pseudocode or 5 minutes of real code made it very clear what we were talking about.

If you just want to do the same thing you currently do but faster, AI will handle that. But modelling a business process properly and making it explicit will still bring huge value to those who care to put the effort in.
This keeps me awake at night, I must admit. How do I best future-proof my career?

I was sceptical about this until I started playing with GPT-3 and had it not only write code for me but also "explain" code to me. Sure, it's kind of limited right now, but it can only be a matter of time now before this all radically improves.

Maybe I should focus on system design and translating the messy real world into systems. That's the hardest bit of my job currently. I was also thinking of moving down the stack and getting deeply into security engineering or something like that (not that this is immune from AI either!!).
> 99% of people who are writing software have almost no clue how a CPU actually works, let alone the physics underlying transistor design.

My undergraduate education was in the early 90s, and at no point in my life have I ever had much of a clue regarding the physics underlying transistor design.

EDIT: also, while at one time I probably did have a reasonably solid grasp of how CPUs work, there's been an awful lot of advancement in the field over the decades, and I wouldn't describe my understanding as anything more than a cartoon model.
I think the more realistic model of the future of programming, given programs like Copilot, is what happened in the SQL and compiler domains.

SQL and compilers changed the goal for many programmers from writing custom extractors or serving as human-to-machine translators to writing useful statements of intent for extraction and action. You still need to know WHY you're extracting data or adding your code, you often need to understand the layer underneath well enough to dive in and debug when results are unexpected or "not optimal enough", and much of the work cut away by the gained efficiency was ultimately boilerplate that, once eliminated, let programmers take on more ambitious project scope than before, because they weren't spending their time rewriting yet another data storage system or translating actions into machine language for the umpteenth time.

I feel we'll see a similar progression. These code generators, very optimistically assuming a world where they work deterministically "well enough" to be trusted with even core business logic, will be treated as black-box generators of valid actions. But in a world where action generation is free yet under-specification or incorrect specification still produces the wrong behaviour, we still have something curiously resembling programming: the art becomes one of chaining assemblages of black boxes into cohesive, maintainable superstructures.

I suspect that, about code simple and safe enough in structure to be trusted to black-box generators, we'll say the same thing we currently say about SQL:

> Thank god I don't have to redo all that work every time I start a new project

And, as with SQL, the project requirement boundaries will move to match your increased output capacity.

Much like the old saying "What Andy giveth, Bill taketh away", the modernized version is perhaps "What Copilot giveth, your PM taketh away."
I agree with the gist, and I actually just started doing a course on AI today as a result of not wanting to get left behind.

However, this bit reads as needlessly hyperbolic to me:

> The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some), ready to be given any task required of the machine.

I mean okay, sure, *eventually*. But people were predicting hand-wavy everything-solutions like this sixty years ago in Star Trek. It's not very imaginative. Not to mention, this four-quintillion-parameter model will be hugely inefficient for simple tasks. I think it'll be a long time before we care *that* little about efficiency.

But here's a much more near-term scenario I'm imagining:

You need to stand up a new microservice. You have an off-the-shelf "learned microservice" web framework that you reach for. You write a small training set of example request/response JSON, not unlike unit tests. Maybe the training set includes DB mutations too. You start testing out the service by hand, find corner cases it doesn't handle correctly, and add more training examples until it does everything you need.

Now, in addition to saved effort vs. hand-coding (which may or may not be the case, depending on how simple the logic is), you've got what I've started to think of as a "squishy" system component.

Maybe, because AI is fuzzy, this service can handle malformed data. Maybe it won't choke on a missing JSON key, or a weird status code. Maybe it can be taught to detect fishy or malicious requests and reject them. Maybe it can tolerate typos. Maybe it can do reasonable things with unexpected upstream errors (log vs. hard-fail vs. etc.).

This is the really compelling thing for me: so much of what makes software hard is its fragility. Things have to be *just so* or everything *blows up*. What if instead, the component pieces of our digital world were a little squishy? Like human beings?
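To make the "training set as spec" idea above concrete, here is a minimal sketch of how I imagine it working. Everything in it is hypothetical, not from any real framework: the predict() stub stands in for a model call, and handle() just folds the example pairs into a few-shot prompt.

    import json

    # "Training set" of example request/response pairs, written much like unit tests.
    EXAMPLES = [
        ({"action": "add_item", "sku": "A123", "qty": 2},
         {"status": "ok", "cart_size": 2}),
        ({"action": "add_item", "sku": "A123", "qty": -1},
         {"status": "error", "reason": "qty must be positive"}),
    ]

    def predict(prompt: str) -> str:
        # Placeholder for the model call a real framework would make;
        # this stub just returns a canned answer so the sketch runs.
        return json.dumps({"status": "error", "reason": "unrecognized request"})

    def handle(request: dict) -> dict:
        # Fold the examples into a few-shot prompt and let the model
        # produce the response for the new request.
        shots = "\n".join(
            "Request: %s\nResponse: %s" % (json.dumps(req), json.dumps(resp))
            for req, resp in EXAMPLES
        )
        prompt = "%s\nRequest: %s\nResponse:" % (shots, json.dumps(request))
        return json.loads(predict(prompt))

    # Found a corner case by hand-testing? Add another example pair and re-check.
    print(handle({"action": "add_item", "sku": "ZZZ", "qty": "two"}))

The point of the sketch is only that the "program" is the example set plus whatever model sits behind predict(); how such a framework would actually be trained or served is left open.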
When did you say the full car autopilot is getting released? Oh, and I forgot about foreign-language translation. As a person who speaks multiple languages: we still cannot translate from one language to another without the result looking ugly.
I think humans being marginalized by their own inventions might be a longer-term consequence. Short term, we're still dealing with a growing demand for things where people with skills are more effective than any AI. And ironically, there's a lot of demand right now for people who can do some productive but low-level stuff with AI.

Ten years ago, you needed a team of PhD propeller-heads to do anything with AI. These days, what you need is a lot of data engineers capable of moving data around efficiently via scripts, and people who can use the off-the-shelf stuff coming out of a handful of AI companies. It's like database technology: you don't need a deep understanding of databases in order to use them. I can get productive with this stuff pretty quickly. And I need a working knowledge of what's out there in order to lead others to do this stuff.

The consequence of a general AI, or even something close enough to that, coming online would be that, pretty soon after, we'd put it to use doing things currently done by really smart humans. Including programming. The analogy is maybe that, as an executive of a large tech company, you don't necessarily have to be a hard-core techie yourself; you can delegate that stuff to "human resources". Adding AI resources to the mix is going to happen naturally. But it will be a while before that's cheap and good enough to replace everybody. For the foreseeable future, we'll have a growing number of AI resources, but it will be relatively expensive to use them and we'll use them sparingly until that changes.
I totally disagree... Looking at actual development, I think the era of end-user programming will come back. Really.

Look at ANY large enough project: no matter whether it's a kernel or a GUI desktop application, at a certain point ALL of them try to integrate this, that, and the other, becoming monsters. The original desktops were designed as a single OS-application-framework where "applications" were just "code added to the core image". That's the missing level of integration we can't achieve in modern systems, and that's why all complex software becomes a monster, trying to circumvent the lack of integration by adding features directly.

Unix at first succeeded over the classic systems by claiming they were too complex and expensive: separating "the system" from "users" was the cheap and quick solution. Then it backpedaled, violating the Unix KISS logic with X11 and GUIs, libraries, frameworks, etc., because the KISS principle does not scale. Widget-based GUIs were born and succeeded over document-oriented UIs by claiming those were too complex and expensive. The modern web proves they were wrong. In another ten years I think we will come back to Xerox...
> I don't think this idea is crazy.

And other things the author likes to tell themselves... or perhaps they enjoy building clout by saying outrageous things... yawn.

Sure, there will be obsolete concepts, algorithms, and plenty of AI assistance, but programming is building a state machine, or a kind of house in an abstract space, that powers machinery to accomplish tasks of value using a general-purpose computation device. Computer science informs and is informed by a craft (programming), and that can only be replaced by another craft (whatever that is rests in the imagination of the author). You're still doing creative work, and you will only be as effective as your ability to apply theory in practice... the theory will not be "use AI" or "don't learn computer architecture lmao what a nerd".

Imho, that is.
> A time traveller from even 20 years ago would have a hard time making sense of the three sentences in the (75-page-long!) GPT-3 paper that describe the actual software that was built for the model

First off, the only one of those three sentences that a 2002 researcher would be stumped by is the first, and that solely due to the unfamiliar nouns. The other two sentences are perfectly classical, and the only difficulty one of the ancients would have is putting their eyes back in after they popped out on seeing the model sizes.

Second, isn't that good? It means the field has advanced and there are new concepts in use, which I'd have thought is exactly what we want.

Third, how different is this from the past? Would a time traveller from 1982 be equally stymied by a paper from 2002? How about 1962 to 1982?
Meh. I read about neural networks in the 1990s, and they have been around as a concept since the 1940s.

In 1943, Warren McCulloch and Walter Pitts created a computational model for neural networks. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.

The author stated that a three-sentence passage would not have been comprehensible 20 years ago, but that's not true. Anyone could understand what they were saying from the context.

Code that writes code, which is essentially what he is talking about: heck, I wrote one when I was a freshman back in 1980. Because code that writes code is what it all boils down to. Using the current buzzwords might get someone a job promotion, but they're still just buzzwords.

Color me not impressed.
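For anyone who hasn't seen it, the "code that writes code" idea really is that old and that simple. A toy illustration (in Python, purely made up, and not what I wrote back then): build a function's source text from a tiny spec, then compile and run it.

    # Toy code generation: produce a function's source from a made-up spec,
    # then compile and execute the result.
    spec = {"name": "add", "params": ["a", "b"], "body": "return a + b"}

    source = "def {name}({params}):\n    {body}\n".format(
        name=spec["name"],
        params=", ".join(spec["params"]),
        body=spec["body"],
    )

    namespace = {}
    exec(source, namespace)        # compile and load the generated function
    print(source)                  # the generated code
    print(namespace["add"](2, 3))  # -> 5

Whether the generator is a template like this or a large model, the hard part is still deciding what the spec should say.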
Bull.

At best, you'll reduce the time spent writing code at the cost of greatly increasing the time spent writing tests to give you sufficient confidence that your autogenerated code actually does what it is supposed to.

And if it doesn't, good luck fixing it.
As a great example of what "classical" computer science can do, take a look at the driverless metro introduced in Lille in 1983 [1]. Researchers in France have used formal systems to prove system correctness, and this shows in the reliability and safety of the metro. I like to think that this is a better way to handle complex problems than to just throw data and algorithms at them and hope that it will work correctly for new data.

[1] https://en.m.wikipedia.org/wiki/Lille_Metro
The guy doesn't know much about DL, has only recently learned it, and is simply hyped. It's that classic curve of how much you think you know vs. how much you actually know, and all the excitement that comes with knowing little.
At least two opposing forms of self-deception are both real and worth doing your best to watch out for:

"I'm not a classical-style developer and that's fine, because programming is over."

"I'm a classical-style developer and I'm safe, because AI can never do this."

At the moment, either might be true and either might be false, and both might be merely some percent true and false at the same time.

The only real mistake is probably jumping to either assumption at this time.

Well, I'm sure an argument can be made about fence-sitting too, so whatever.
There will always be a core specialization of CS even as other segments splinter from it. It might not be as large, but it will be there. The reference to programming being in a death spiral is a bit of a stretch. There will be common mechanisms for similar processes, but business always needs customization, so I wouldn't worry about the Linux kernel dying anytime soon. What I'd expect is that programming tools become even more forgiving than today's, allowing almost anybody to do programming tasks with computer aid.
I'd have more faith that we can eventually automate programming if we had actually succeeded at automating a very well-scoped problem domain such as accounting.
It'll be interesting to see who's right: Matt or Brooks. Matt essentially argues that AI can be trained to take on the responsibility of specifying a system, which is the opposite of what Brooks argues in his essay No Silver Bullet.

Of course, I'm assuming that we write programs to specify what a system should do and how it should do it. It could be that AI (but not AGI) becomes so advanced that specifying a system can be compressed into training a model.
A computer program must produce correct outputs 100% of the time. Most AI-assisted things I know of are buggy black boxes. I wouldn't bet my business on that.
Progress in deep learning has been quite astounding the last few years, but the output is still very dreamy, fuzzy, inexact, etc., as though, e.g. for image generation, the pixels represent individual neurons and you're viewing the 'dream state' of the network.

I think actual programming requires something more concrete; the 'atoms' of a program are not text characters or pixels, but something more abstract, more exact. I think once deep learning incorporates a symbolic or logic system of some kind, that might be a solution, but then that will apply not only to programming. All IMHO.
There is a reason we are getting robot dogs with guns on their backs at the same time AI is advancing: once AI crosses a certain line, it is going to be powerful enough to nullify most jobs. It's not science fiction.

What happens when you have a NN that understands how to integrate new physical input and render usable actions for creating outputs without human intervention? That's where we get machines building machines.

What happens when we start using AI to find the best recreational drugs? How about recreational drugs designed for specific kinds of overdoses, like crumple zones on a car? Or using AI to find the best cocktail of psychedelics that lets us all work stoned, to maximum benefit, all day long without diminishing returns?

Finally, what happens when these AIs can layer themselves together through transfer protocols and problem-solving distribution without us telling them to? A self-analyzing, self-correcting and self-improving system can be considered a kind of life.

We really are very close now.