
The Uselessness of Useful Knowledge

83 points, by bainsfather, over 3 years ago

5 comments

ugvgu0oiua · over 3 years ago
One skeptical way of looking at it is that the explosion of "data science" and ML is basically comp sci running into modeling space in a way it never had before, and getting into territory that it wasn't equipped to handle.

It wasn't that long ago that there were posts on here about statisticians giving conference keynotes about how data science was basically old wine in new bottles, and being ridiculed for being behind the times, etc.

Now we see that basically no one actually knows what's going on. My guess is that when the dust settles a lot of things will be explained, but it won't be as different from established statistical and information theory as some would make it out to be. That is, some of this is new discovery and figuring out new territory, and some of it is neglecting basics that have been there all along.

My guess is the next phase of this is basically comp sci ML research rediscovering mathematical statistics and information theory.
YeGoblynQueenne · over 3 years ago
I honestly don't understand all this flowing-uphill and flowing-downhill talk. We advance science when we understand stuff. Until we understand stuff, we don't have science, we just have stuff. Experimentation can come before or after, but science is the knowledge that comes with the understanding that explains observations, not the experiments that generate observations.

People could still float boats before Navier-Stokes? Yes, so people had boats, i.e. stuff. Now we have Navier-Stokes, which is science, not stuff.

Btw, Yann LeCun knows this much better than me, but neural networks are already ancient. The first "artificial neuron", the McCulloch & Pitts neuron, was described in 1943. Frank Rosenblatt created his Perceptron in 1958. Kunihiko Fukushima described the Neocognitron, daddy of the Convolutional Neural Network, in 1979. Hochreiter and Schmidhuber described Long Short-Term Memory networks in 1995. Yann LeCun himself used CNNs to learn to recognise handwritten digits in zip codes in 1989.

That's at least 30 years of research on deep neural nets, almost a human generation. Many of today's postgraduate students studying deep neural nets weren't even born when all this was being done. If this is just the experimentation phase before we pass on to the theorising and understanding phase, *when* are we going to get to the understanding phase? In 100 years?
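(Editor's aside: for readers who haven't seen it, Rosenblatt's 1958 Perceptron learning rule fits in a few lines. This is a hypothetical minimal sketch for illustration, not any particular library's implementation; the function names are invented.)

```python
# Minimal sketch of the perceptron learning rule: nudge the weights
# toward each misclassified example until the data (if linearly
# separable) is classified correctly.

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """samples: list of feature tuples; labels: +1 or -1."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update weights only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Learn logical AND, a linearly separable function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [-1, -1, -1, 1]
```

A single perceptron famously cannot learn XOR, which is part of why the field stalled until multi-layer networks and backpropagation.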
nathias · over 3 years ago
Basing everything in your daily life on unknown/unknowable algorithms seems like a step towards a society where knowledge loses its value.
kangnkodos · over 3 years ago
This is a bad analogy.

The main difference between alchemy and chemistry is that chemistry follows the scientific method.

When an alchemist learned something new, they kept the information to themselves and tried to profit from it. They wanted to turn lead into gold, and then keep the secret method to themselves.

A chemist profits by sharing the new information.
robthebrew · over 3 years ago
tl;dr, but it reminded me of the "big" news yesterday that via ML you can guess bank ATM codes from shielded hand movements. Reading down, it turned out you could guess 30% after the 3-try limit imposed by the machines. Not terrible, but completely impractical unless you steal a LOT of bank cards.