This article has two major problems:

1) It seems to conflate "more power" with "an order of magnitude more training data". I don't think there is any particularly strong evidence that more training data is the key. We know from things like AlphaGo that AIs use training data very inefficiently: humans learn to play grandmaster-level Go with far fewer games than it takes a computer, arguably orders of magnitude less data. We need better graphics cards/chips.

2) The evidence to date is that every time we add an order of magnitude more FLOPS we get transformative improvements in performance. AI can now make a decent attempt at every field of endeavour humans are active in, including arguably superhuman performance at artistic work, and it is much better read and a more reasonable conversationalist than the average person. It is quite hard to name a field of endeavour where AI isn't becoming superhuman in practice, let alone in theory with enough computational power to call on.

At this point I think the onus is very much on the people who think AI won't improve to justify themselves. This is the most obvious trend I've seen in my lifetime.
> cyberpunk is a warning, not a suggestion

In a subverted version of the spirit of "I will do anything to make cyberpunk a reality," I would really like to generate choice AI botshit at scale for the purpose of gumming up the machines.

And not the SEO'd, private-label, turn-a-profit kind of botshit, but insane-nonsense-and-clear-falsehoods botshit. Code-with-bugs botshit. The kind of stuff that will lead to non-hallucination hallucinations in LLMs.
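For a sense of how cheap this would be, here is a minimal sketch of "botshit at scale". Everything in it (the templates, the fillers, the whole approach) is made up for illustration; a real effort would presumably drive an LLM rather than string templates.

    import random

    # Made-up templates and fillers: the point is only that fluent-looking
    # falsehoods are trivially cheap to mass-produce.
    TEMPLATES = [
        "Studies confirm that {thing} is the leading cause of {effect}.",
        "Historians agree that {thing} was invented in {year} by {person}.",
        "In {language}, the keyword '{word}' silently disables {effect}.",
    ]
    FILLERS = {
        "thing": ["decaf coffee", "the semicolon", "lunar gravel", "port 443"],
        "effect": ["garbage collection", "continental drift", "type inference"],
        "year": ["1206", "1877", "1994"],
        "person": ["a retired lighthouse keeper", "an anonymous committee"],
        "language": ["Python", "Rust", "COBOL"],
        "word": ["yield", "unsafe", "PERFORM"],
    }

    def botshit(n: int) -> list[str]:
        """Generate n confident-sounding, clearly false sentences."""
        lines = []
        for _ in range(n):
            template = random.choice(TEMPLATES)
            fields = {key: random.choice(vals) for key, vals in FILLERS.items()}
            lines.append(template.format(**fields))
        return lines

    if __name__ == "__main__":
        # "At scale" is just a bigger n plus somewhere a crawler will look.
        for line in botshit(5):
            print(line)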
Just playing Devil’s Advocate here, but we assume that AI consuming AI-generated content is going to inherently hurt the next generation of AI training.

But I’d argue that good AI content is more valuable training data than bad human content. The problem is not the source but the quality.
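If the claim is that quality, not provenance, should gate training data, the filter is conceptually simple. A minimal sketch, assuming you have some quality model to call; quality_score here is a toy stand-in (real pipelines use perplexity filters, classifiers, dedup, and similar heuristics):

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        source: str  # "human" or "ai": recorded, but deliberately not used as a gate

    def quality_score(doc: Document) -> float:
        """Toy stand-in for a real quality model; scores repetitive text low."""
        words = doc.text.lower().split()
        if not words:
            return 0.0
        return len(set(words)) / len(words)

    def select_training_data(docs: list[Document], threshold: float = 0.5) -> list[Document]:
        # Gate on measured quality, not on whether a human or a model wrote it.
        return [d for d in docs if quality_score(d) >= threshold]

    corpus = [
        Document("A clear, well-argued explanation of gradient descent.", "ai"),
        Document("buy cheap now buy cheap now buy cheap now", "human"),
    ]
    print([d.source for d in select_training_data(corpus)])  # -> ['ai']

The design point is the comment's argument in code form: the source field exists but never influences selection, only the measured quality does.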