Every time I hear about AI-generated code, I'm reminded of this comment from Linus Torvalds (taken out of context, of course):

"You copied that function without understanding why it does what it does, and as a result your code is GARBAGE. AGAIN."
While I understand the concepts of derivative works and tainted code, the AI/human dichotomy is not as clean as that reasoning requires it to be. Every statement of code I commit is, in fact, a derivative of work that potentially had an incompatible license.
They seem to have stopped just short of a total ban.

> Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core.
Good call. AI-generated code is like most copied code: it might work, but you won't know why, which means further development will be more difficult.
Do even slightly competent programmers ever actually commit LLM-generated code as-is, rather than using it as a source of ideas for how to do something? I thought only people with zero coding skills would just copy-paste; everyone else would want to change a lot.
“Automation has always helped people write code, I mean, this is nothing new at all [..] I see that as tools that can help us be better at what we do.” — Linus Torvalds on LLM code generation/review (https://m.youtube.com/watch?v=VHHT6W-N0ak).

NetBSD still has an edge with its memory hardening, NPF, kernel-level blacklist, and “legacy support”. But I fear that this out-of-touch policy might eventually tip it into irrelevance.
Let’s go back to writing in assembly, because if you’re writing in Python you don’t understand what you’re doing (this is sarcasm, of course).

AI has decoupled code design from fabrication. We’re just going to need to improve our design language and fabs to get better yield.