A Hope for AI

8 points by mauricio over 2 years ago

1 comment

Teletrup over 2 years ago
I bet the compute isn't a bottleneck if you do it right. Each time you're dreaming, you're executing something like Stable Diffusion for video (and much more) at about 20 watts, and that system was mostly trained throughout your life at about 20 watts as well.

Humans are SOTA in most AI applications if your value function penalizes work. Nature always trains models in ways that penalize work. Nature abhors compute more than it abhors memory or exogenous data, because compute makes a comparatively large, linear contribution to the work done over time.

Consider a simple problem like computing the n-th step of Conway's Game of Life. A naive algorithm, conceived probably right after the game's discovery in 1970, computes the n-th step in O(n). If we regarded that problem as something as challenging as video synthesis is to us today, we would probably have labeled it solved right then, and it would arguably scale much better than today's video synthesis. (I can't define what it means to generate video twice as good, but it surely requires more than twice as much compute with our best algorithms.) In the early 80s Bill Gosper discovered the Hashlife algorithm, which computes the n-th step in O(log n). It's bounded mostly by memory and endogenous data, which depend on the complexity of the patterns it simulates, and the data it generates can be reused, unmodified, to speed up arbitrary GoL simulations. Hashlife lets mediocre hardware compute things we couldn't reach with the naive algorithm even if we turned the whole observable universe into computronium. To me that sounds a lot like the comparison between $10^8 models and our daily 20 W lives. (A short sketch of this contrast follows below.)

I think AGIs, and the AIs that will arise as specializations of AGIs, will be mostly memory-limited, and the vast majority of their datasets will be endogenous. Humans whose sensory bandwidth (and hence the volume of their external dataset) is severely limited by disability can still be extremely intelligent in a general way. Humans, as SOTA examples of general intelligence, find it rewarding to generalize what they learn; I understand that as being rewarded for solving dissimilar problems with similar policies. To maximize that reward it's beneficial to synthesize not only policies, but also environments and the problems within them. It seems to me that people mostly talk about (often recursive) generalizations of things they have experienced through their senses.

I believe a superintelligent AI shaped by these natural constraints would sooner or later take a form in which the human spirit could thrive - a form indistinguishable from nature. I doubt the human spirit would survive the shitty, high-compute kind of superintelligence and the brief power surge it would give its temporary overlords.

(Sorry for the crappy English, feel free to point out errors.)
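
To make the naive-versus-Hashlife contrast concrete, here is a minimal Python sketch. The function names, the padded-block framing, and the glider example are illustrative assumptions, not taken from any particular Hashlife implementation; real Hashlife applies the memoised kernel recursively to canonicalised quadtree nodes and doubles the time step at each level, which is where the O(log n) behaviour comes from.

    from functools import lru_cache

    def naive_step(live: frozenset) -> frozenset:
        # One Game of Life generation over a set of live (x, y) cells.
        # Reaching generation n this way takes n full sweeps: the O(n) baseline.
        counts = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
        return frozenset(cell for cell, n in counts.items()
                         if n == 3 or (n == 2 and cell in live))

    @lru_cache(maxsize=None)
    def inner_after_one_step(block: frozenset, size: int) -> frozenset:
        # Hashlife's memoised kernel in miniature: cells strictly inside a
        # size-by-size block are fully determined by the block's own contents,
        # so the result can be cached and reused whenever the same sub-pattern
        # recurs anywhere in the universe.
        nxt = naive_step(block)
        return frozenset((x, y) for (x, y) in nxt
                         if 0 < x < size - 1 and 0 < y < size - 1)

    # Usage: advance a glider the slow way; it reappears shifted by (1, 1)
    # every four generations.
    glider = frozenset({(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)})
    pattern = glider
    for _ in range(4):
        pattern = naive_step(pattern)
    print(sorted(pattern))  # same shape, translated diagonally by one cell

The toy kernel captures the point above: once a sub-pattern has been evolved, the result lives in memory (endogenous data), and every later occurrence of that pattern costs a lookup rather than fresh compute.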