Towards Machine Intelligence by IBM Research

111 points, by laudney, about 9 years ago

9 comments

laudney, about 9 years ago
There exists a theory of a single general-purpose learning algorithm which could explain the principles of its operation. This theory assumes that the brain has some initial rough architecture and a small library of simple innate circuits which are prewired at birth, and proposes that all significant mental algorithms can be learned. Given current understanding and observations, this paper reviews and lists the ingredients of such an algorithm from both architectural and functional perspectives.
mindcrime, about 9 years ago
From reading the introduction, it sounds like the author is covering similar ground as the book *The Master Algorithm* [1] by Pedro Domingos [2]. If you find this interesting, you may find his book interesting as well.

[1]: https://en.wikipedia.org/wiki/The_Master_Algorithm

[2]: http://homes.cs.washington.edu/~pedrod/
fizixer, about 9 years ago
Cites Schmidhuber besides Hinton and friends, so that is good.

But where is the mention of Marcus Hutter and his AIXI formalism? Any review of machine intelligence is incomplete without that.
andreyk, about 9 years ago
Do people generally read these things before upvoting? Legitimately curious.

It lists some possible goals for achieving more general, human-like intelligence beyond the fancy function approximation we get with deep supervised learning, as stated in the abstract, from both architectural and functional perspectives. In general I find the language fairly wishy-washy and the writing often awkward, but it is a nice summary of relevant thoughts and concepts. Beyond the abstract, here is a bit of summary and my thoughts.

For architectural aspects, it lists:

1) Unsupervised - agrees with LeCun, Bengio, etc. But I'm not sure it's fair to conclude this yet; maybe it should be reinforcement? Our brains are prewired to do some things.

2) Compositional - basically hierarchical, aka deep. Seems reasonable.

3) Sparse and distributed - again plausible, and empirically seen in deep learning. One reason ReLU neurons are nice is that they lead to sparser distributed representations.

4) Objectiveless - a metaphysical statement having to do with the Chinese room argument? This seems to mean not optimizing an objective function with gradient descent, and instead "Clearly, the learning algorithm should have a goal, which might be defined very broadly such as the theory of curiosity, creativity and beauty described by J. Schmidhuber". Seems vague and unclear.

5) Scalable - again not the best choice of words; it seems to argue for parallelism, as well as a "hierarchical structure allowing for separate parallel local and global updates of synapses, scalability and unsupervised learning at the lower levels with more goal-oriented fine-tuning in higher regions." I am disappointed there was no discussion of memristors or neuromorphic computing here.

For functional aspects, it lists:

1) Compression - sure, pattern matching is in a sense compression, so this seems fairly obvious.

2) Prediction - "Whereas the smoothness prior may be considered as a type of spatial coherence, the assumption that the world is mostly predictable corresponds to temporal or more generally spatiotemporal coherence. This is probably the most important ingredient of a general-purpose learning procedure." Again, reasonable enough.

3) Understanding - basically equivalent to predicting?

4) Sensorimotor - not clear? Similar to human eye movement?

5) Spatiotemporal invariance - "one needs to inject additional context"; having constant concepts of things?

6) Context update/pattern completion - "The last functional component postulated by this paper is a continuous (in theory) loop between bottom-up predictions and top-down context." Constant cycling between prediction and world-state update; pretty clear.
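The sparsity point in (3) above is easy to see concretely. A minimal NumPy sketch (illustrative only, not from the paper): ReLU maps every negative pre-activation to an exact zero, so roughly half the units in a layer with zero-centered inputs go silent, whereas a saturating unit like tanh almost never outputs exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense, zero-centered pre-activations for a hypothetical layer of 10,000 units.
pre = rng.standard_normal(10_000)

relu = np.maximum(pre, 0.0)  # ReLU: negatives become exact zeros
tanh = np.tanh(pre)          # saturating unit: values shrink but stay nonzero

sparsity_relu = np.mean(relu == 0.0)  # fraction of exactly-zero activations
sparsity_tanh = np.mean(tanh == 0.0)

print(f"exact zeros with ReLU: {sparsity_relu:.2f}")  # about half the units
print(f"exact zeros with tanh: {sparsity_tanh:.2f}")
```

The resulting code is "sparse and distributed" in the paper's sense: for any given input, only a subset of units is active, but the active subset differs across inputs.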
KasianFranks, about 9 years ago
Favorite quote: "An intelligent algorithm (strong AI [66], among other names) should be able to reveal hidden knowledge which might not even be discoverable to humans."
tvural, about 9 years ago
I think the main idea can be summarized pretty simply. The most important next step towards general intelligence is creating a learning algorithm that can solve a sufficiently general class of problems without much tweaking by humans, and it couldn't hurt to list out the properties such an algorithm would have to have.
kasev, about 9 years ago
Most of these ideas have been pioneered and implemented by Jeff Hawkins and his team at Numenta. See his book "On Intelligence" or the open source project at numenta.org.
bra-ket, about 9 years ago
How very unsurprising. My advice to the author, and to the rest of the field, would be to read a bit more on learning and memory in humans. AI starts with I.
grondilu, about 9 years ago
I'm too lazy to read this, but not lazy enough not to throw in my two cents.

Non-human mammals are amazing considering how many of them are incredibly capable very early. I'm thinking mostly of large prey animals, for whom the ability to walk and run is crucial. Basically, they need to be able to do many things very quickly. It's incredible to see how fast newborns grow in so many species.

Also, I've searched for the word "play" in this article and found no occurrence. To me, how young mammals play, and more importantly what drives them to do so, is the core mystery behind the development of the mammalian brain. I suspect that once this is cracked, a big part of the work will be done.