TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Ask HN: How “real” are AI scaling laws?

1 point by ffwd over 2 years ago
My main question is: why would scaling lead to intelligent behavior in an AI, and how and why would it generate that intelligent behavior? For this explanation I'm assuming that scaling can lead to AGI and superintelligent behavior, as some AI engineers claim. Using OpenAI's paper as the base: https://arxiv.org/pdf/2001.08361.pdf

You basically have three things: compute, dataset size, and parameter count. The parameters, as far as I understand, are basically universal computation approximators and can execute any function. The data provides the "geometry" which the parameters try to model with functions, and compute allows you to accelerate training and increase the parameter size and dataset size.

Now, as far as I can tell, the parameters will only ever model whatever is actually in the data, and it stands to reason that with more parameters and more data you can better model the data, and with more data you can model more "stuff" (whether real physical stuff or digital data, etc.). That's all fine, but here's the problem: what about all the stuff that is not in the data? Primarily, most intelligent behavior in humans is not in the data. The most important part of intelligence is not the ability to know things; it's the ability to synthesize disparate pieces of information from wildly different places and then generate a coherent sequence of steps/actions to reach some desired outcome.

Humans do this, I think, primarily because as organisms we probably evolved to detect disturbances in the body and extended environment, and then generate actions/behavior to correct those disturbances: things like needing to eat and sleep, but also higher social things like maintaining ego, even maintaining moral order :P But the point here is that, other than the actions we learn socially from others, a lot of those behaviors are not in the "data". And a lot of the sequences of steps we generate, we learn on our own on the basis of our hands, feet, the output actuators we have, and so on. So what happens if, say, we have an AI with trillions and trillions of parameters, trained on an incredible amount of data, maybe all of earth's data, and even raw sensor input from all surveillance equipment or something like that?

What happens if I ask that AI to generate nanobots that can go into a human host, super accurately target every cancer cell, and eliminate it? And let's say, for argument's sake, that the schematic and shape of the nanobot don't exist in the data it's been trained on (since humans haven't invented it), nor does the factory needed to produce the nanobot, nor all the materials science and computational ideas needed to power the nanobot's CPU/computer.

How would that AI generate a large sequence of steps (not in the data) to produce, in the real world, an army of nanobots? The disparate places in its neural net might actually contain all the information needed, but how it would synthesize that information and generate a novel sequence of behavior in the physical world to produce them, I don't know. But there's also another problem: why would it generate _ANY_ such novel sequence of actions/steps AT ALL? What would be the impetus for it to, for example, generate this idea for nanobots on its own? The only way to generate this is by modeling humans or other organisms, I think, but even then there's a question, because human and animal brains and behaviors are so tuned to our biological needs that I'm not sure such actions would in the aggregate even make sense for an AGI.

Curious about this! Sorry if I'm incredibly wrong, I just want to know more.
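For reference, the power-law fit at the heart of that paper can be sketched in a few lines of Python. The constants below are the approximate values Kaplan et al. report for test loss versus non-embedding parameter count; treat the exact numbers as illustrative rather than authoritative.

```python
# Rough sketch of the loss-vs-parameters power law from the scaling-laws paper
# (https://arxiv.org/pdf/2001.08361.pdf): L(N) = (N_c / N) ** alpha_N.
# Constants are the paper's approximate reported values; illustrative only.

N_C = 8.8e13      # "critical" non-embedding parameter count from the paper
ALPHA_N = 0.076   # exponent of the parameter-count power law

def loss_vs_params(n_params: float) -> float:
    """Predicted test loss for a model with n_params non-embedding
    parameters, assuming data and compute are not the bottleneck."""
    return (N_C / n_params) ** ALPHA_N

# The law predicts smooth, diminishing returns: every 10x in parameters
# multiplies the loss by the same constant factor, 10 ** -ALPHA_N (~0.84).
for n in (1e8, 1e9, 1e10):
    print(f"{n:.0e} params -> predicted loss {loss_vs_params(n):.3f}")
```

Note that this is exactly the point at issue: the fit says nothing about *what* is being modeled, only that loss on the training distribution falls smoothly; anything outside the data is outside the formula's scope.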

1 comment

ffwd over 2 years ago
I have one slight addendum that didn't fit in the main post character limit (sorry, I know this is long) :P

My final point is (and I'm not sure about this exactly) that any action a current AI takes is essentially only the computer code some programmer wrote to execute at that time. For example, if we have an AI that can classify images, someone wrote a classify() function, and right now the AI cannot do anything more or less than run exactly that function. All the knowledge and actual activations in the deep learning net are essentially "dead execution"; they neither matter to nor change the classify() function at all.

But that's not how humans work: humans create novel functions other than classify(), and we have both a biological imperative and the capability to do so. Right now, none of the information in the AI's neural net can actually be utilized to create alternatives to classify(), nor does the AI have an imperative or the technical ability to do so, so the neural net remains "dead".

And to change that, you would need to make different parts of the neural net actually count in the creation of the new output function, and also connect them to actual physical actuators in the world. But even then we are back to the question of how it would synthesize disparate pieces of information in a coherent way, as said above.
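The "dead execution" point can be made concrete with a toy sketch; classify() below is a hypothetical stand-in for a trained network, not anyone's real API. The weights are frozen, and calling the function applies them but can never modify them or spawn a new function.

```python
# Toy illustration of the "dead execution" point above. classify() stands in
# for a trained network: its weights are frozen, and running inference never
# alters them or creates a new capability.

WEIGHTS = [0.3, 0.7, 0.1, 0.5]  # fixed once "training" is over

def classify(features):
    """A fixed learned function: a weighted sum and a threshold, nothing more."""
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return "cat" if score > 0.8 else "dog"

snapshot = list(WEIGHTS)
label = classify([0.9, 0.8, 0.2, 0.4])
# However many times we call it, the parameters are untouched: the net can
# apply its knowledge through classify(), but never repurpose it elsewhere.
assert WEIGHTS == snapshot
print(label)
```

(Training loops do of course update weights, but that update rule is itself a fixed function the programmer wrote; the sketch only illustrates that inference, the deployed behavior, is a closed function.)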