
Andrew Ng and the Quest for the New AI

147 points by ivoflipse, about 12 years ago

12 comments

jhartmann, about 12 years ago
I've said this before, but deep learning is terribly powerful precisely because you don't have to spend lots of time doing feature engineering. Multi-layer networks trained in semi-supervised, unsupervised, and supervised fashions can now meet or beat state-of-the-art hand-crafted models for speech, handwriting recognition, OCR, and object recognition. We are only just beginning to see what is possible with these sorts of techniques. I predict that within a few years' time we will see a huge renaissance in AI research, and in neural network research specifically, as these techniques are applied more broadly in industry. My startup is building some cool stuff around this technology, and I know there are hundreds like me out there. This is going to be a fun ride.
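
A minimal sketch of the point above (not from the article; the toy XOR task, the use of NumPy, and all hyperparameters are illustrative choices): a small multi-layer network is given only raw inputs and learns its own intermediate features by gradient descent, with no hand-engineered features.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Raw inputs and targets for XOR; a purely linear model cannot fit this mapping.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 4 sigmoid units; its weights play the role of learned features.
    W1 = rng.normal(size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))
    b2 = np.zeros(1)

    lr = 1.0
    for step in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the mean squared error through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Plain gradient-descent updates.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    # Predictions should approach [[0], [1], [1], [0]].
    print(out.round(2))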
wookietrader, about 12 years ago
If an article says that Andrew Ng is "the man at the center of deep learning", it's just not right. Geoffrey Hinton's and Yoshua Bengio's impact was at least as great as his, if not much greater.
kinofcain, about 12 years ago
There is a very quick reference to the person who inspired him, Jeff Hawkins, whose book is worth a read: http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/0805078533/

Edit: update link
EthanHeilman, about 12 years ago
We should stop trying to claim every new method is "like the brain". We don't have any clear understanding of how the brain works. One can be inspired by a particular (and likely wrong) cognitive theory, but one cannot truthfully say one is building "machines that can process data in much the same way the brain does" without a deeper, and currently unavailable, understanding of how the human brain functions.
hvs, about 12 years ago
OT: His online Machine Learning class last year was great. He is the best professor I've ever had, and explains things so clearly that you understand them the first time. You are lucky if you ever get to work or study under him.
gavanwoolery, about 12 years ago
Hmm... I don't mean to be a skeptic, but I do not see any new theories here. Neural networking has been around for a long time, as have an abundance of theories and implementations around it; some people have gone so far as to build an actual brain replica (a digital version of the bio/analog thing). Neural networking is extremely powerful, but to be of any use, you need a *lot* of computing power. As it turns out, our brains are really good at massively parallel tasks like image and motion processing; these things can be done explicitly on a computer with some ease, but having a computer learn on its own, from scratch, how to do them is not easy.
why-el, about 12 years ago
So I am a little confused. Where are we on the learning part of AI? As I understand it, the current consensus is to throw as much data as you can at your model (millions of cat pictures, in this article's example) to make it pick up patterns, and yet we still claim we are closing in on how the brain works? As far as I can tell, no human brain would need that many pictures to see a pattern. In fact, and this is probably more apparent in language, we humans tend to work with *degenerate* data and still end up with perfect models.
pilooch, about 12 years ago
It seems the relationship between the brain and deep learning has evolved in such a way that the latter can help with insights into how the former works.

In this regard, I thought I would mention the extraordinarily simple and elegant talk by G. Hinton last summer: http://www.youtube.com/watch?v=DleXA5ADG78

It starts from a simple and clever improvement to an existing deep learning method and ends up with beautiful (and simple!) insights on why neurons use simple spikes to communicate.
aespinoza, about 12 years ago
I thought the man behind the Google Brain was Ray Kurzweil (http://www.wired.com/business/2013/04/kurzweil-google-ai/).
cscurmudgeon, about 12 years ago
Ah. The good old AI cycle.

Scientist: X can help us get full AI!

You: Why?

Scientist: Because of reason R.

You: But reason R is a non sequitur...

More seriously, reasoning similar to that offered for deep learning has been repeated multiple times in AI and ended in failure (e.g. Thinking Machines). I would suggest that these folks remain calm and build something on the scale of IBM's Watson using just deep learning.
niklaslogren, about 12 years ago
Very interesting article, it makes me hopeful.

This might be slightly off-topic, but I'll try it here anyway: can anyone recommend any books or other learning resources for someone who wants to grasp neural networks?

I'm a CS student who finds the idea behind them really exciting, but I'm not sure where to get started.
yankoff, about 12 years ago
Great and inspiring professor. I'm taking his ML course on Coursera and trying to follow his talks.