><i>The last major productivity boost in software was OSS. Each of those steps was 10-100x boost but then it stopped...</i><p>I don't think this author understands what a productivity boost is. OSS is a development model; it didn't deliver any "productivity boost" beyond what the general technology level (including mostly proprietary technology) already offered.<p>><i>Programmers will command armies of software agents to build increasingly complex software in insane record times. Non-programmers will also be able to use these agents to get software tasks done. Everyone in the world will be at least John Carmack-level software capable.</i><p>/rolls eyes<p>><i>At Replit, we're building an AI pair programmer that uses the IDE like a human does and has full access to all the tooling, open-source software, and the internet.</i><p>Ah, OK, this is building up commercial hype. Makes sense now.
Per Kernighan, debugging is twice as hard as writing the code in the first place. If the AI jockeys don't understand what they're being given, man, it's going to be humorous watching them put out fires. And how can statistically fit models exceed their training set without going random? How is AI going to string together equations to do physics or engineering? To the model they're just bloody squiggly symbols and letters.<p>And the marketplace still isn't interested in fixing bugs over "oooh, shiny", so my concerns might never be addressed.
> Everyone in the world will be at least John Carmack-level software capable.<p>lol<p>I'm sure that for simple tasks, AI-based pair programming will offer some level of acceleration, but until it can understand the semantics of the code it's generating, and how that code fits into the broader _system_, it cannot be trusted. I do not look forward to a world where I have to spend my time debugging AI-generated code.
I was skeptical about AIs writing code, but after playing with ChatGPT for a bit I have to adjust my views.<p>I think tools like this can be great for generating skeletons and draft implementations for simple CRUD-like things. For example, I asked it "write an Android layout XML for a login screen with username, password and a login spinner using components from the material library" and it did exactly that. I followed up with "write the corresponding activity in Kotlin" and it did. It generated a correct implementation, including a few paragraphs explaining how it worked and that it mocked the login method with an artificial delay for demo purposes.<p>Another thread that convinced me was when I gave it a Kotlin interface for a CRUD TaskRepository and asked it to write the implementation. It wrote a correct implementation backed by a Map. With some follow-up prompts it was able to write save/load methods to store state in a JSON file, and to emit events to a Flow whenever a task was created, updated or deleted.<p>Another one: I asked it how I could debug why a gstreamer pipeline had a refcount of 2 after the pipeline stopped running, and it pointed me to a number of debug tools and environment variables I could set to trace refs in the pipeline.
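For anyone curious what that kind of generated skeleton looks like, here's a minimal sketch of a Map-backed CRUD repository in Kotlin. The `Task` data class, the `TaskRepository` interface, and all method signatures are my own assumptions for illustration, not the actual interface from that session, and I've left out the JSON persistence and Flow events it added in the follow-up prompts:

```kotlin
// Hypothetical example: Task and TaskRepository are illustrative names,
// not the commenter's real interface.
data class Task(val id: Int, val title: String, val done: Boolean = false)

interface TaskRepository {
    fun create(task: Task): Task
    fun get(id: Int): Task?
    fun update(task: Task): Boolean
    fun delete(id: Int): Boolean
    fun all(): List<Task>
}

// In-memory implementation backed by a MutableMap keyed on the task id.
class InMemoryTaskRepository : TaskRepository {
    private val tasks = mutableMapOf<Int, Task>()

    override fun create(task: Task): Task {
        tasks[task.id] = task
        return task
    }

    override fun get(id: Int): Task? = tasks[id]

    // Only updates an existing task; returns false for unknown ids.
    override fun update(task: Task): Boolean =
        if (tasks.containsKey(task.id)) {
            tasks[task.id] = task
            true
        } else {
            false
        }

    override fun delete(id: Int): Boolean = tasks.remove(id) != null

    override fun all(): List<Task> = tasks.values.toList()
}
```

The point is how little ceremony is involved: this is exactly the kind of boilerplate where a draft from a tool saves typing, and the parts worth a careful review (persistence, concurrency, event emission) are the ones you'd add afterwards anyway.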
I think it's common for professionals in a field to be skeptical about massive changes coming their way, even when those changes later prove to be significant.<p>That said, if there are courses, videos, or detailed documentation about this new way of doing software development, I'd be interested in looking at them.
Yes, AI will improve things. They said that 30 years ago too; even MIT had an AI lab.<p>Meanwhile, testing has not really improved in the last 30 years.
> AI is the next 100x productivity boost.<p>I do not agree with this statement. There has been no real progress in AI since the crypto winter. There are just too many people with always-online smartphones, so governments decided this field was too big to be left outside their control. That has led to a 100x increase in no-brain programming jobs where all that's asked of the programmers is to fight against users.<p>The author is right that big changes are coming, but not the changes he is writing about.