I spend hours prompting to refine my projects. The LLM is not smart enough to prompt itself into building exactly what I wanted. I made it; the LLM just helped me aggregate, access, and organize knowledge faster than individual web searches ever could. Do you make sure to credit Google every time you share or use information you learned by Googling, even if you ended up reading research papers by actual people at actual universities?

I have my own automated LLM developer system. You give it a project description and it iterates until it thinks it has implemented that description perfectly (a rough sketch of the loop is at the end of this comment). I still take credit for what this system produces, because I built the automated developer system in the first place and gave it the project ideas.

LLMs do not have autonomy. If you set up a self-prompting script with internet access and it figures out how to post something to social media, that still isn't the LLM's own inherent curiosity. That's a person who intentionally put together a script giving an LLM access to tools and poorly defined instructions.

The instruction tuning that makes LLMs reply like a person you're talking to, rather than acting like autocomplete, is doing a lot of heavy lifting in personifying a pile of matrices.
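
For what it's worth, the core of that kind of system is just a loop. Here's a minimal sketch of the idea, where `call_llm` is a hypothetical stand-in for whatever model API you actually use; a real system would also handle tool access, file edits, and test runs:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: plug in your actual LLM client here.
    raise NotImplementedError("swap in a real model call")

def auto_develop(project_description: str, max_iterations: int = 10) -> str:
    code = ""
    for _ in range(max_iterations):
        # Ask the model to improve the current code toward the description.
        code = call_llm(
            f"Project description:\n{project_description}\n\n"
            f"Current code:\n{code}\n\n"
            "Return an improved, complete version of the code."
        )
        # Ask the model to judge its own output; stop when it claims it's done.
        verdict = call_llm(
            "Does this code fully implement the description?\n"
            f"Description:\n{project_description}\n\nCode:\n{code}\n"
            "Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            break
    return code
```

The point is that the "autonomy" is entirely in the loop a person wrote, not in the model.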
It's interesting that you worded that question differently in your title and in your post, because I certainly see people take credit when it works but blame the AI when it fails.