As we can all see, GenAI has changed a lot of things and is evolving rapidly.
How will software development practices change?
Will designing software become more important?
Will the work shift more towards maintaining large code bases, most of which were generated by AI?
How could things change?
What is your vision of the future?
Whenever I think about the future of software engineering, I'm reminded of the history of fourth-generation and model-driven programming languages, and their failure to gain much foothold in the industry outside of a few small niches. We develop software using the current paradigm because we <i>want</i> to. It's what makes sense to software engineers. As an industry, we've already been shown alternate, arguably superior ways of writing reliable software, and we rejected those in favour of largely the same paradigm we've had since the beginning. I know this point doesn't have anything to do with AI. It's just something to think about whenever people foretell massive paradigm shifts in the industry.
GenAI isn't super magical; it helps senior engineers with boilerplate and certain types of method discovery.<p>I think the simple answer is to imagine life before Google and general-purpose search engines that could find answers in documentation.<p>How did life change for developers after that? More information became more easily available, the complexity of the domain increased, and comprehensive documentation (manuals) went away.<p>GenAI/LLMs really are a better search engine, in so many ways, so just imagine this trajectory continuing.
I don't think that software development as a profession will cease to exist for a long while, but I do think that in a decade or so it will look very different to development today, and it won't be an improvement (from the perspective of a developer who likes writing software).<p>Based on my own feelings toward the profession, which I see echoed in online and offline discussions, there are many undesirable elements of this career that give low overall job satisfaction, and I can only see increased AI presence making it worse. So at some point the good won't outweigh the bad, like one might argue it does now, and there'll be a bit of an exodus from the field, leaving few but well-paid developers who either enjoy or tolerate AI's involvement. All just my own speculation, but that's the nature of the question.
I think that software development has peaked. Not because of GenAI but because of saturation and lack of innovation.<p>The primary value prop of tech has always been automation, and there has been very little new automation lately.<p>Google and Facebook have already perfected the automation of surveillance and advertising. Streaming services have already solved the automation of content delivery. Cloud services are stagnating, and in some cases we see the pendulum swinging back, with companies opting for on-prem hosting due to pricing. Most "innovation" now is just competing for market share and IP.<p>GenAI is kind of the last and greatest promise of automation. But the thing is, either way it goes, it's the end of automation. Either it fulfills the promise and delivers full AGI, rendering all software devs obsolete. Or it's a dud, undermining the value proposition of automation. That would mean there's a ceiling to what can be automated, and we've hit it.<p>Here's a thought experiment: it takes 100 engineers to build a bridge, but only 10 to maintain it. The bridge is now complete; what happens to the other 90 engineers?<p>I think the future will look something like the history of automobile manufacturing. In the beginning, there were assembly lines where cars were assembled by hand. The jobs were numerous and lucrative for the time. However, as the tech evolved, humans were gradually removed from the loop. Now nearly all of car manufacturing is automated. The only human input is design and oversight performed by the exceptional few.
I notice a lot of XY problems with GenAI if you're not careful. Thus I can picture more senior engineers writing (with GenAI) really specific business requirements as unit tests, with intermediate and junior engineers (with GenAI) writing the actual code.<p>There will be a lot more work on building these internally (ex: Azure OpenAI and Hugging Face), with business analysts or others vetting the outputs or the sentiment of the output.<p>I can also picture a lot of work on designing more complex LLM architectures. For instance, if you have the money, could you have a rough loop that reviews issues, roadmaps, and/or todos and proposes changes to the code itself? If so, see the importance of writing good tests and solid devops practices.<p>Conversely, integrate GenAI into PRs. Think of the AI as a team member, even more so if you think about mythical-man-month problems. Software engineering makes you a manager for AIs.
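The "rough loop" described above could be sketched roughly as follows. This is a hypothetical pipeline, not any real product's API: the tests written by senior engineers act as the acceptance gate, and `propose_patch` stands in for an actual LLM call, which is stubbed out here for illustration.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    body: str

def review_loop(issues, propose_patch, run_tests):
    """For each open issue, ask the model for a patch and keep it only
    if the test suite still passes; everything else goes to a human."""
    accepted = []
    for issue in issues:
        patch = propose_patch(issue)   # hypothetical LLM call (stubbed below)
        if run_tests(patch):           # the existing tests act as the spec
            accepted.append((issue.title, patch))
    return accepted

# Stubbed model and test runner, purely for illustration:
issues = [Issue("Fix off-by-one", "loop bound is wrong"),
          Issue("Rename API", "breaking change, needs human review")]
fake_model = lambda issue: f"patch for: {issue.title}"
fake_tests = lambda patch: "off-by-one" in patch  # only the first patch "passes"

print(review_loop(issues, fake_model, fake_tests))
# → [('Fix off-by-one', 'patch for: Fix off-by-one')]
```

The point of the sketch is the gating, not the model: without a strong test suite, the loop has no way to reject plausible-looking but wrong patches, which is exactly why the comment stresses good tests and devops practices.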