AI is transforming Google search – the rest of the web is next

162 points by nzonbi over 9 years ago

15 comments

lowglow over 9 years ago
I can't be the only one who considers all these AI articles just smoke-and-mirrors puff pieces to prop up a company's value by capitalizing on the hype (hysteria?), can I? I think the first flag is that the journalists don't seem to really understand the technical capabilities or limitations of current ML/AI applications. They accept grandiose claims at face value because there is no way to measure the real potential of AI (the promise of which seems limitless, so anything appears plausible, especially coming from a big company like GOOG).

I think there are a couple of really overhyped areas right now: AR/AI/ML and IoT/IoE. While I don't mind the attention and money being thrown at tech, I can't help but feel we're borrowing more against promises, hopes, and dreams while simultaneously under-delivering, and I think that's going to hurt tech's image and erode investor confidence sooner rather than later.
osmode over 9 years ago
There is a tendency among non-technical admirers of ML to regard deep learning methods as beyond their creators: independent entities that will one day, given refined enough algorithms and enough energy, out-comprehend their human creators and overwhelm humanity with their artificial consciousnesses. The term "neural networks" is itself a misnomer that doesn't at all reflect the complexity of how human neurons represent and acquire information; it's simply a term for nonlinear classification algorithms that began catching on once the computing power to run them emerged.

The question of whether or not deep neural networks are capable of "understanding" is largely a theoretical concern for the ML practitioner, who spends the bulk of his or her time undertaking the hard work of curating manually labeled data, fine-tuning his or her neural classifier with methods (or hacks) such as dropout, stochastic gradient descent, convolution, and recursion to increase its accuracy by a few fractions of a percentage point. Ten or twenty years from now, I imagine we'll be dealing with a novel set of ML tools that will evolve with the rise of quantum computing (the term "machine learning" will probably be ancient history, too), but the essence of these methods will probably remain: to train a mathematical model to perform task X while generalizing its performance to the real world.

As fascinating and exciting as this era of artificial intelligence is, we should also remember that these algorithms are ultimately sophisticated classifiers that don't "understand" anything at all.
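For readers who haven't seen the workflow osmode describes, here is a minimal sketch of that routine: a small nonlinear classifier regularized with dropout and fit with stochastic gradient descent, tuned in the hope of a few extra fractions of a percentage point. It assumes PyTorch and uses synthetic tensors in place of a manually labeled dataset; none of it comes from the article or from Google's pipeline.

```python
# Minimal sketch (assumes PyTorch): a nonlinear classifier regularized with
# dropout and fit with SGD. Synthetic data stands in for a labeled corpus.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fake "labeled data": 1000 examples, 20 features, 3 classes.
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # the kind of regularization "hack" mentioned above
    nn.Linear(64, 3),
)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

model.eval()  # disables dropout for evaluation
acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {acc:.3f}")  # the fractions of a percent being chased
```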
rifung over 9 years ago
> At one point, Google ran a test that pitted its search engineers against RankBrain. Both were asked to look at various web pages and predict which would rank highest on a Google search results page. RankBrain was right 80 percent of the time. The engineers were right 70 percent of the time.

I don't really understand the point of this metric. Why are they predicting what ranks highest on Google search? Wouldn't a better metric be who predicts the page a user was actually looking for?

Is the thinking that if they are using machine learning, then whatever the user is looking for should have bubbled up to the top?
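As a toy illustration of the metric rifung is questioning, the sketch below scores predictors purely on agreement with what actually ranked highest, which is the 80-versus-70-percent comparison quoted above. The data and numbers are made up; whether that agreement tracks what the user was actually looking for is exactly the open question.

```python
# A toy illustration (hypothetical data) of the quoted metric: how often a
# predictor picks the page that actually ranked highest on Google -- not the
# page the user actually wanted.
def pairwise_accuracy(predictions, actual_top):
    """Fraction of page pairs where the predicted winner matches the actual winner."""
    correct = sum(p == a for p, a in zip(predictions, actual_top))
    return correct / len(actual_top)

# Hypothetical outcomes for 10 page pairs ("A" or "B" ranked highest).
actual_top      = ["A", "B", "A", "A", "B", "A", "B", "B", "A", "A"]
rankbrain_picks = ["A", "B", "A", "A", "B", "A", "B", "A", "A", "A"]
engineer_picks  = ["A", "B", "A", "B", "B", "A", "A", "A", "A", "A"]

print(pairwise_accuracy(rankbrain_picks, actual_top))  # 0.9 on this toy data
print(pairwise_accuracy(engineer_picks, actual_top))   # 0.7 on this toy data
```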
Houshalter over 9 years ago
This is very interesting. As late as 2008, Google said they didn't use any machine learning in search. Everything was hand-engineered with tons of heuristics. They said they didn't trust machine learning, and that it created bizarre failure cases.
ohitsdom over 9 years ago
> The truth is that even the experts don't completely understand how neural nets work.

I'm no AI/ML expert, but I can't believe this is true... Is it?
reza_n over 9 years ago
RIP big data. Hello AI. Makes sense, data drives a lot of 'AI' tech. I guess what I find amusing is the push from Google to rebrand themselves as an AI company. My guess is it won't be too long until we see everyone else jumping in the AI branding boat. That will kind of dilute a lot of what is being done.
varelse over 9 years ago
I can understand Amit Singhal's opposition to replacing hand-coded features with machine learning models. He's right that ML models have bizarre failure cases across large sample sizes, but he's apparently career-endingly wrong to believe that one cannot do anything about it. He's also wrong, IMO, not to recognize that hand-crafted signals and features have bizarre failure cases of their own.

IMO this shifts the focus from lovingly hand-crafted signals and features to lovingly hand-crafted loss functions and variants of boosting and training algorithms that address those bizarre failures as they occur. For example, much ado was recently made about minimal changes to the input data of image-recognition convolutional nets that spoof the object ID. The simplest remedy is to augment the training data with these cases and perhaps boost the gradients of outputs that are wrong. It's not perfect, but Google search was never perfect either. Evidence: I was on the Google search team for a bit, and we had all sorts of meetings to address such failures as they happened.

While I agree that the quality of Google technical searches has declined dramatically recently, I believe there's a huge opportunity to fix them by understanding why the ML models are failing (shooting from the hip, I suspect it's a long-tail problem writ large) and changing the loss functions, models, and training algorithms to address these failures as they're detected.

Anything less, IMO, is a failure of imagination in an age of 6.6 TFLOPS for ~$1,000 and the ability to stuff 8 of them into a $20K server and go wild.
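A rough sketch of the remedy varelse gestures at, assuming PyTorch: generate small FGSM-style perturbations that push the classifier toward wrong outputs, then fold those cases back into training with a larger loss weight. The model, constants, and weighting scheme here are illustrative stand-ins, not anything Google actually ships.

```python
# Sketch (assumes PyTorch): augment training with adversarial examples and
# upweight ("boost") their loss, per the remedy described in the comment above.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss(reduction="none")   # per-example losses so we can reweight
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Synthetic stand-in data: 256 examples, 20 features, 3 classes.
X = torch.randn(256, 20)
y = torch.randint(0, 3, (256,))

def fgsm_examples(model, X, y, eps=0.1):
    """Perturb inputs in the direction that increases the loss (FGSM-style)."""
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y).mean()
    loss.backward()
    return (X_adv + eps * X_adv.grad.sign()).detach()

for step in range(50):
    X_adv = fgsm_examples(model, X, y)
    inputs = torch.cat([X, X_adv])
    targets = torch.cat([y, y])
    # "Boost" the augmented (adversarial) half of the batch.
    weights = torch.cat([torch.ones(len(X)), 2.0 * torch.ones(len(X_adv))])
    opt.zero_grad()
    loss = (weights * loss_fn(model(inputs), targets)).mean()
    loss.backward()
    opt.step()
    if step % 10 == 0:
        print(f"step {step}: weighted loss {loss.item():.3f}")
```

The one deliberate design choice here is reduction="none": it exposes per-example losses so the adversarial half of each batch can be reweighted, which is the hand-crafted-loss-function flavor of fix the comment argues for.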
amelius over 9 years ago
The current trend seems to be to put a human behind a web API.

I guess when AI is sufficiently advanced, those humans can be seamlessly replaced by computers.
hyperpallium over 9 years ago
I recall Google engineers complaining that their clever, insightful, carefully engineered code was soundly beaten by a statistical approach.

The current approaches aren't so much AI as having really, really, ridiculously large datasets.
known over 9 years ago
No alternative to AI for Google: http://www.bbc.co.uk/news/technology-23866614
graycat over 9 years ago
Tech hype is a little like the old spontaneous combustion of some oily rags in the corner: no telling just when they might ignite, but when they do, the result can be a big fire, for a short while.

Once the hype gets a flicker, there are good sources of more fuel to make the fire bigger. The situation is old; it goes back at least to the movie Lawrence of Arabia, where a news reporter talking to Prince Faisal says: "You want your story told, and I desperately want a story to tell." So, tech people who want their story told get together with tech journalists who desperately want a story to tell.

One such case doesn't mean very much, but once the fire starts, more techies and more journalists do the same, because the fact that there are already lots of stories gives each new story some automatic credibility.

But fairly soon the stories get to be about the same, with little visible progress (the usual situation in reality), interest falls, the bubble bursts, and it becomes yesterday's news. Then the world moves on to another source of a hype conflagration, bubble, viral storm, whatever.

For AI, by 1985 DARPA funding at the MIT AI Lab had gotten AI going. There were expert systems and more. Lots of hype. In a few years, the fire went out, the bubble burst, and there was AI winter.

For the next bubble, say, System-K (right, doesn't mean anything), print up some labels about System-K. Then order a gross of children's bubble bottles, right, soapy water with a plastic stick with a circle at the end, good for blowing bubbles. Put the labels on the bottles and send them to various departments at Stanford, startup companies in Silicon Valley, VC firms on Sand Hill Road, and tech journalists. Then stand back and watch the media conflagration for System-K! So, get stories:

"System-K -- Next Big Thing"

"System-K Deep Background"

"Ex-Googlers Respond on System-K"

"System-K, Son of AI"

"Leading VC Talks about System-K"

"Silicon Valley Goes All in on System-K"

"System-K, Bigger Than the Internet"

"The First System-K Unicorn?"

"System-K Trending Up"
PaulHoule over 9 years ago
With the Google Knowledge Graph, they don't need the rest of the web.

It's starting to get rare to see organic results pointing to outside web pages at all.
inaudible over 9 years ago
I don't quite understand why people want to dismiss examples of machine learning as valid techniques for understanding the human environment. It's not as if the human brain was built and guided from nothing; many of the same adaptive principles are as present in our minds as they are in other mammals, and equally so from where all the branches divide, even in tiny organisms. And we seem to place the brain at the core of human intelligence, when there's a range of chemical and metabolic coordination going on that might bypass the brain entirely.

It's efficient, failure-resistant models that matter. We're talking about accelerated learning: finding the models that work out of all those many iterations that fail. You can model it, decompile the results, and try to understand and emulate what makes things seem real, but we don't even need to analyze it, because it changes case by case and its circumstances make things very different. 'Many ways to skin a cat.'

I think the challenge of the future is finding the general API that can negotiate all the things and make all the parts communicate, the kernel if you want. We can determine optimum speech algorithms and Babel communication, create seeing eyes that recognize objects, optimize forms that can negotiate physical terrain, and work out what is meant in human expression, but it's not until all these units work together that the 'AI' will seem seamless in human terms.

All of those parts have discrete forms; they generate a lineage of algorithms from iterations based on code, in languages often derived from need. A Lisp might be the best way of interpreting language; a Haskell might work best for defining strict biomechanics and area physics. Different abstractions are better for the results they are designed to intuit. But when we are to create the ultimate neural net, the composite of all these machine languages that are constantly required to optimize beyond humanly intelligible understanding, what will we be using? What structure will state 'this works well enough' to not bother with the computation any more, in the familiar context of why our eyes don't have a faster frame rate, need better detail, or need to see into UV? What regulates such a machine, and how does a machine understand failure without guidance?

I like to think of these questions when I see rough examples posited around potentials in machine learning. Getting one human system sorted is one thing; communicating the results to other sub-systems and optimizing concurrent results is another. The data model is too huge to even comprehend!

I'm just excited that these things exist, and that there are individuals, research groups, and companies looking at what makes us 'us'. It might help us unlock the features of the brain and evolution. Used for commercial gain? Who cares; it's just a small cog, with revenue to continue development.

Just going to add my favourite example of machine learning, not because it's 'best' but because it's so dynamic that you feel the wonder: http://www.goatstream.com/research/papers/SA2013/
jonesb6 over 9 years ago
https://en.wikipedia.org/wiki/List_of_fallacies

If anyone wants to practice their critical thinking skills, see how many fallacies you can spot in this article.
wangii over 9 years ago
One part of me truly hopes Google succeeds in ML/AI, although I consider Google an evil company. AI and the Singularity are the most important things in this century; the implications are simply beyond our imagination. I don't care too much if Skynet takes over the earth and kicks human beings into the dustbin. If that's the destiny, so be it.

Another part of me believes it's a sign that folks don't know what they are doing or writing. How can we achieve AI without understanding? Google will fall apart.