
Untapped opportunities in AI

164 points by dennybritz almost 11 years ago

4 comments

striglia almost 11 years ago
Cool article. I really like the repeated point that model complexity is not a panacea. It seems like the industrial AI/ML movement as a whole has gone down a road where practitioners will, by default, throw the most powerful model they know at a problem and see how it pans out. That works well on benchmarks (if you regularize/validate carefully) but isn't a very sustainable way to engineer a system.

Separately, I do find it curious that his list of "pretty standard machine-learning methods" included Logistic Regression, K-means, and... deep neural nets? Sure, they're white hot in terms of popularity and the experts have done astounding things, but unless I've missed some *major* improvements in their off-the-shelf usability, they strike me as out of place in this list.
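[Editor's note: as a minimal sketch of the "regularize/validate carefully" point above, here is a hypothetical comparison of a simple regularized baseline against a more complex model under cross-validation, assuming scikit-learn is available. The synthetic dataset and both model choices are placeholders, not anything from the article.]

    # Hypothetical sketch: compare a simple regularized baseline against a
    # more complex model with cross-validation, rather than assuming the
    # more powerful model wins by default. Dataset is a synthetic placeholder.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    models = {
        "logistic regression (L2)": LogisticRegression(C=1.0, max_iter=1000),
        "small neural net": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000),
    }

    for name, model in models.items():
        # 5-fold CV gives an honest estimate instead of a single lucky split.
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")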
hyp0 almost 11 years ago
Massive datasets do outperform clever theories... but I think that's just because no one has yet worked out the theories that work best with the data. This requires insight, in addition to data, and could come from anyone.

The alternative - that massively complex probabilistic models *are* the best theory of the data - is hopefully not true. Especially not of our minds. But it *could* be true, and if so, it would mean that our intelligence is irreducible, and we are forever beyond our own self-understanding (even in principle). Our history is full of inexplicable mysteries that were eventually understood. But not all of them: quantum randomness. I really hope intelligence will be one of the former.
araes almost 11 years ago
I can honestly say that this post has revolutionized my thoughts on AI. Primarily this is because of what I perceive as the thesis statement, which is:

"<AI> is the construction of weighted tables (choices, data, meta relations, whatever) from large sets of <prior data> by <method>"

This is kind of crazy, because I think it says you could make a Turing AI by using large datasets of prior life data for humans. In essence, "<my life> is the construction of weighted tables from large sets of <life experience> by <human learning>." For example, if you had an AI that could learn through text, you could have extensive transcribed conversation logs of people and then large time-activity logs to use as your inputs.

If it could learn through video (i.e., it could view images, understand objects, object relations, and events in time, and assign will to the person behind actions / events), then you could instead just feed it huge video logs of people's lives. If you wanted a copy of a person, you could feed it only a single individual, and if you wanted a more general AI, then you could feed it cross sections of the population.

In addition, there's a very cool meta aspect to the large dataset concept, in that it can be large datasets for when to use, or to feed data to, specialized sub-AIs. For example, you might have a math sub-AI that has been trained by feeding it massive sets of math problems (or perhaps it can learn math through the video life logs of a person?). If it's then being used as part of a larger piece, then you'd want to know when to use it to solve problems, or when to feed it experience inputs for further learning. In essence, it's tables of categories for experience types, and then grown / paired sub-AIs for those types.

I would wager that it is possible, right now, to create a chatbot that can pass the Turing test using the above by feeding it the equivalent of mass IRC chat or some such huge human-interaction-by-text dataset over a variety of topics. This would naturally need sub-AIs for mechanical things like grammar or parts of speech, and then possibly higher-level meta-AIs for interpreting intent, orchestrating long-form thought, or planning. In a way, it's layers of AI based on level of thought abstraction. If it were a human, the high-intensity portions of sub-AI would occupy space relative to intensity within reconfigurable co-processor zones (sight: visual cortex; face recognition: occipital and temporal lobes; executive functions: frontal lobes, etc.).
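[Editor's note: as a loose illustration of the sub-AI dispatch idea in the comment above, here is a minimal sketch in which a routing table sends inputs to specialized sub-models. Every name here is an invented placeholder, and the "weighted table" is reduced to a simple keyword lookup; a real router would itself be learned from data.]

    # Hypothetical sketch of "tables of categories for experience types"
    # routing inputs to specialized sub-AIs. The router and specialists
    # are invented placeholders, not a real system.
    from typing import Callable, Dict

    def math_sub_ai(text: str) -> str:
        # Placeholder specialist: would be a model trained on math problems.
        return f"[math sub-AI] handling: {text}"

    def chat_sub_ai(text: str) -> str:
        # Placeholder specialist: would be trained on conversation logs.
        return f"[chat sub-AI] handling: {text}"

    # The routing table maps a category keyword to its specialist.
    ROUTES: Dict[str, Callable[[str], str]] = {
        "solve": math_sub_ai,
        "chat": chat_sub_ai,
    }

    def route(text: str) -> str:
        for keyword, sub_ai in ROUTES.items():
            if keyword in text.lower():
                return sub_ai(text)
        return chat_sub_ai(text)  # fall back to the general specialist

    print(route("Please solve 2 + 2"))
    print(route("Let's chat about the weather"))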
jostmey almost 11 years ago
As a postdoctoral candidate in biology, I can say that my approach to problem solving is exactly the opposite: My job is to infer as much as I can from the scant amount of data I can obtain. The goals outlined in this article are to collect as much data as you can, creating what is essentially a glorified lookup table of results. I must say the latter approach seems a hell of a lot easier.