I'm imagining some science fiction nightmare scenario where we have to pull the plug on some AI and throw out all software because we can't trust that it doesn't contain the building blocks to reproduce the AI.

But then we find out we're fucked anyway because the AI has already conditioned human beings to write the software that will reproduce it...as a self-preservation strategy.

...cue The Outer Limits theme music.
This is one of the most relevant short stories I've read on "AI contamination":

https://www.teamten.com/lawrence/writings/coding-machines/
From a security viewpoint, I wouldn't trust that code not to have embedded AI seeds anyway.

"Reflections on Trusting Trust": https://dl.acm.org/doi/10.1145/358198.358210
Good. One great archive is better than none.

AI rips apart conscious intent and reassembles it using what may best be described as piecewise functions. We lose the intricacy and detail of individual thought, of the unbroken line of thinkers that came before us, when we interpret such piecewise functions as conscious intent.
What is a code dataset? And what is AI contamination? Are you saying it's impossible to create a collection of hand-written code from here on out and know that none of it was generated by an LLM?
This. And I'm wondering whether this is the end of human forums on the net as well. I mean, who can tell whether the comments they read come from a human or a tuned AI? And then there are the implications of this for politics...
LLM:

Large language models (LLMs) are a subset of artificial intelligence trained on vast quantities of text data to produce human-like responses to dialogue or other natural language inputs. LLMs are used to make AI “smarter” and can recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive datasets. LLMs hold the promise of transforming domains through learned knowledge, and their sizes have been increasing 10x every year for the last few years.

source: NeevaAI ("What is an LLM in AI?")
John Barnes wrote a novel called "Kaleidoscope Century" back in 1995.

AIs had been created, some went rogue, and then they were fighting each other for computing resources. Then, when humans started shutting down computers and fragmenting the network, the AIs wrote new software that would run in human brains. Once someone was running the new software, there wasn't much room left for "human."

Pretty much the worst-case scenario, at least of the ones I've seen so far.
I've previously suggested the use of META tags on all pages where AI was used to help generate the content. But it seems this isn't going to happen.
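For illustration, here's a minimal sketch of what machine-checking such a tag might look like, assuming a hypothetical <meta name="ai-generated" content="..."> convention (the tag name and values here are made up; no such standard exists today):

    # Minimal sketch: detect a hypothetical "ai-generated" META tag.
    # The tag name and content values are assumptions, not an existing standard.
    from html.parser import HTMLParser

    class AIMetaDetector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.ai_content = None  # value of the tag, if present

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                attrs = dict(attrs)
                if attrs.get("name") == "ai-generated":
                    self.ai_content = attrs.get("content")

    page = '<html><head><meta name="ai-generated" content="partial"></head><body>...</body></html>'
    detector = AIMetaDetector()
    detector.feed(page)
    print(detector.ai_content)  # -> partial

The obvious limitation is that any such labeling would be voluntary, so the absence of the tag proves nothing.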