A few thoughts (for context: I'm a senior developer and use ChatGPT every day as an assistant)...<p>In the short term (5-10 years), I can't see these tools autonomously producing products; you'll still need an experienced programmer to interpret and use the output effectively.<p>An implication of this is that, in the short term, developers become even more valuable. You still need them, and these tools will make the developer significantly more productive.<p>I was recently reading Melanie Mitchell's book 'Artificial Intelligence: A Guide for Thinking Humans' (which I'd recommend). She has a chapter on computer vision, and as an example she shows a photograph of a guy in military clothing, wearing a backpack, in what looks like an airport, embracing a dog. She makes an insightful point: our interpretation of this photograph relies heavily on lived, in-the-world experience (a soldier returning from service, being met by his family dog). The only way for AI to come close to our interpretation might be to have it live in the world, which is obviously not an easy thing to achieve. Maybe there's an analogy with software development: to develop software for people, a lot of real-world interaction and understanding is required.<p>In terms of autonomously producing products, I see these tools as they are now as a bit like software wizards, or a website that WordPress will create for you. You get a 'product' up and running very quickly, and initially it looks fantastic. But when you want to refine its details, that's where you get into trouble. AI has an advantage over old-fashioned wizards in that you can interact with it after the initial run and refine the result that way. But I'm not sure it's so easy to get the fine-grained control you have with code. This is where I see the challenge: developing tools to talk to the AI and refine the product sufficiently.