A not-so-subtle reading shows Google is doubling down on ecommerce applications here:<p>> It could also understand that, in the context of hiking, to “prepare” could include things like fitness training as well as <i>finding the right gear</i>.<p>> fall is the rainy season on Mt. Fuji so <i>you might need a waterproof jacket</i>.<p>> MUM could also surface helpful subtopics for deeper exploration — like <i>the top-rated gear</i> or best training exercises<p>> you might see results like where to enjoy the best views of the mountain, onsen in the area and <i>popular souvenir shops</i><p>Or, my favorite line:<p>> MUM would understand the image and connect it with your question to let you know your boots would work just fine. It could then point you to a blog with a list of <i>recommended gear</i>.<p>(in other words: "Thanks for showing you're interested in hiking gear. Here's a lot of hiking gear you can buy.")
Search quality at Google has been decaying over the past decade. The accuracy and quality of search results are compromised to optimize advertising revenue, penalize competitors or neutralize threats, and cater to the various needs of political or regulatory authorities.<p>Google's search was at its peak in 2008, before advertising had fully compromised search quality. Google is an advertising business that supports its otherwise money-losing properties. Why would things change in the future? Being able to synthesize data from multiple sources doesn't help if the realities of Google's business model still compromise that quality.
Content of the article:<p>- 1,000 times more powerful than BERT, but still a Transformer architecture<p>- trained on 75+ languages, can transfer knowledge between languages<p>- can do text and images (not audio or video yet)<p>- can understand context, go deeper into a topic, and generate content<p>Not much apart from their words about how amazing it is. Paper? Demo?
In most sci-fi, you ask the ship computer a question and it can answer using the sum total of all human information.<p>But judging by the comments here, when Captain Picard asks the ship how long to Starbase 17 at Warp 9, rather than answer, you want it to tell the Captain to visit WarpTravelCalculator.com<p>If you publish information in this world, there’s nothing preventing people from learning it and rewriting it in a new way. Humans do it all the time, and they don’t pay the people they learned it from a portion of the proceeds.<p>Future AI will do this too. I want machine learning to read every book and paper ever written and be able to answer queries and summarize things for me.<p>We may need to find a better model for encouraging content contribution to society besides copyright and demanding royalties on every use.
<i>When I tell people I work on Google Search, I’m sometimes asked, "Is there any work left to be done?" The short answer is an emphatic “Yes!” There are countless challenges we're trying to solve so Google Search works better for you.</i><p>Sorry to be off-topic, but it's hard to get excited about blue-sky ventures when the search UI offers no capability for simple things like delivering search results in date order. You can filter results by date, but not sort them.
I really hope Google gets some competition in their NN endeavors, because they are creating an economy that sucks in free information and eventually spews out buying recommendations. In the past they would compensate websites, with advertising, for providing the precious raw material for their results. With DL models, websites don't need to get anything back. This will lead to stale information, or pretty much end the web.
Wasn't Google supposed to have some sort of AI that could make phone calls for you? It looked amazing when they demo'ed it but I haven't heard diddly squat since then. Did they cancel that project?
Their hiking question is an odd example. Technology like this is probably perfectly fine for asking questions with low downside for wrong answers. But if someone asks "I've hiked Mt Pirongia and now I want to hike Mt Taranaki; how do I need to prepare differently?" and Google erroneously answers "nothing", that could get someone killed.
An AI named after the British diminutive for 'mother' is surely a wise choice. I would not trust this AI unless it kissed my forehead and tucked me into bed.
A lot of the knowledge on the internet is just wrong. Also, a lot of scientific progress is driven by folks persisting against the current dogma. So that seems like a big problem. I imagine this is true for almost any subject where there is tribal domain expertise.
...and still, Google Suggestions cannot understand that in Switzerland, part of the population does not speak German (e.g. here in Geneva; we are a multilingual country), and it only shows me search completions in German (from the browser search bar). And there is no way to change the language there. I would prefer English.
This isn't "better search" it's entrenched market domination from the only player with enough smarts, data and (crucially) users to make this work.<p>While Google is building a bigger and "better" Behemoth we should ask if this kind of innovation is really doing anything at all to make the world a better place in a meaningful way. Better monetization of search seems like a way to make the world worse in my opinion.
There is no doubt that, given the current state of AI, these requests would produce bullshit answers. AI is just not capable of constructing the proper conceptual models for now. But it sure can give you some answers.<p>It's sad to see that they'll be spending so much time, effort, and money on this...
>Take this scenario: You’ve hiked Mt. Adams. Now you want to hike Mt. Fuji next fall, and you want to know what to do differently to prepare.<p>Ah yes, that totally common scenario which I'm faced with all the time.<p>I love this. It perfectly illustrates the peril we are in with the current state of AI research. That the author would choose this as a problem to solve shows exactly the socioeconomic class they come from, and how that influences the way they solve problems. It may seem like a trivial and meaningless example, but these subtle biases will creep their way into these systems and be amplified. And you can bet that this kind of work is the foundation for what will become the technology that eventually governs every facet of our lives once AGI is a thing.<p>I, for one, am terrified of the implications that a bougie tech bro AI overlord entails.
A bit off-topic, but I am wondering: are there any open knowledge graphs available to the public?<p>Ignoring AI etc., my kids play a couple of games where there is clearly some backend that "knows" Taylor Swift is a singer, is female, and has acted in movie X.<p>You can go a long way in a Turing test with that, and I was wondering if folks knew where those graphs were built?
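For what it's worth, Wikidata and DBpedia are the big open, public knowledge graphs (Google's own Knowledge Graph grew in part out of Freebase, whose data was later migrated to Wikidata). Under the hood, such graphs store facts as (subject, predicate, object) triples. A minimal sketch of that shape in Python; the entity and predicate names are invented for illustration, not real Wikidata identifiers:

```python
# Minimal sketch of how a knowledge graph stores facts as
# (subject, predicate, object) triples -- the same basic shape used
# by public graphs like Wikidata and DBpedia. Names here are
# illustrative, not real Wikidata identifiers.
triples = {
    ("Taylor Swift", "occupation", "singer"),
    ("Taylor Swift", "gender", "female"),
    ("Taylor Swift", "acted_in", "Valentine's Day"),
}

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Everything the graph "knows" about Taylor Swift:
facts = query(subject="Taylor Swift")
# Just the movies she has acted in:
movies = query(subject="Taylor Swift", predicate="acted_in")
```

Real systems answer pattern queries like these with SPARQL over billions of triples, but the data model is the same.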
Makes sense. I want insights and context. If Google can do that synthesis, that's great. I do wonder about the training data and data quality, though. When I do these targeted searches I have to filter the spam... books are somewhat better, but nothing beats talking to someone who lives it or did it.
I see a lot of people here expressing doubts and confusion. I want to try to clear up some of that.<p>The key notion here is scaling. This is the reason why transformer models have been so, well, transformative. Bigger models are better than smaller models in a predictable, proportional manner. That is, they follow scaling laws. Where is the limit? Where does this break down? We don't know. We haven't found the ceiling yet.<p>Another important notion is multimodality. When you can cross-reference your text-based knowledge of an apple with your image-based knowledge of an apple, you can use this information as leverage. Archimedes said, "Give me a place to stand on, and I'll move the Earth." It might seem ridiculous to say that the same is true when it comes to information, but it is. Informational leverage is powerful. Multimodality allows you to make very accurate predictions. The McGurk effect is a nice demonstration of how we do the exact same thing. We rely on visual information from a speaker's lips to interpret what they're saying. In other words: we make use of multimodal leverage.<p>The twin notions of scaling and multimodality explain what makes MUM possible. As some of you have pointed out, there's another aspect that we can't ignore: utility. Google will be using MUM to make money. Which means that they'll have to train MUM to make you spend it. But if you're uncomfortable with this idea, you are uncomfortable with capitalism in general. Which is fair, but I think it's important to keep it in mind.<p>As I'm sure they've already considered at Google, MUM can be used to revolutionize education. Imagine people all over the world having access to an expert instructor who can answer all of their questions. You might think this sounds like a dream, but we're a mere stone's throw away from achieving it.
That's the true power of scaling + multimodality: we can now make advanced systems that can communicate with us.<p>I appreciate the skeptics and naysayers here: you keep the rest of us sane. For that, I thank you. At the same time, I want you to open your eyes to the possibility that something very important and transformative is happening right now. You don't have to go full Kurzweil, but I think you would benefit from reflecting on the opportunities this new technology might offer.
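The "bigger is proportionally better" observation above is usually expressed as a power law: test loss falls as L(N) = (N_c / N)^alpha as parameter count N grows. A minimal sketch in Python; the constants are roughly those fit for language models by Kaplan et al. (2020), but treat them as illustrative rather than authoritative:

```python
# Illustrative power-law scaling curve: predicted test loss as a
# function of parameter count N, L(N) = (N_c / N) ** alpha.
# Constants are roughly those reported for language models by
# Kaplan et al. (2020); treat them as illustrative.
N_C = 8.8e13   # scale constant
ALPHA = 0.076  # scaling exponent

def predicted_loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

small = predicted_loss(1e8)   # roughly a BERT-sized model
large = predicted_loss(1e11)  # 1,000x the parameters

# The key property: improvement is predictable. Going 1,000x bigger
# shrinks the loss by the fixed factor 1000 ** ALPHA, about 1.7x.
assert large < small
```

That predictability, more than any single result, is why labs keep betting that the next order of magnitude will pay off.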
I find it hard to believe that Google wants search to be easier for the end user. For example, I believe a very long time ago you could set up sites to exclude from all of your searches; I don’t think this is possible any longer.
"Since MUM can surface insights based on its deep knowledge of the world"
Which just means: taken from the millions of websites written by humans, and used without permission or any payment.
<p><pre><code> "Is there any work left to be done?"
</code></pre>
The short answer is an emphatic “Yes! Dismantling your monster of a corporation!”
Any millennial who has been using search for some time would easily know where to find what they need.
This sounds like Google trying hard to squeeze more money out of its search business.
> "Is there any work left to be done?"<p>Google could search the captions on all the YouTube (etc.) videos. Not sure why this doesn't happen. Along with a few other big resources not indexed.<p>I think the big thing with the article (taken as a workable technology) is that it's not search; it's taking other people's information and transforming it into a Google resource.<p>Which does add to humanity's knowledge, but it's owned and profited from by Google.
When the text starts with "Is there any work left to be done?" The short answer is an emphatic “Yes!”, I was sort of hoping they would announce that Pinterest will now be banned from all non-image search results...<p>Instead it's an announcement that Google has made a new, even bigger pile of linear algebra that can sort of answer questions and, they hope, won't end up like Watson.<p>I like that they put in a deadpan bit about how they are <i>very ethical</i> when they make and then exploit their huge collections of data found by their spiders. There sure hasn't been any AI controversy at Google this quarter, no sir-ee!
"When I tell people I work on Google Search, I’m sometimes asked, 'Is there any work left to be done?' The short answer is an emphatic 'Yes!'"<p>Hands up, everyone who is 100% satisfied with Search... ...OK, no one.<p>So now we have an unsolved problem left behind in favour of... chat about mountains...<p>"MUM has the potential to transform how Google helps you with complex tasks. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it."<p>Piss off, and while you are at it, get BERT to explain my response to MUM, or vice versa.<p>If MUM can decipher my immediately prior sentence given this input, then I might start to get interested.
There is nothing in that press release that could not have been done in the 1980s with Prolog.<p>Yeah, it'd have been more code, but you would not have needed to destroy a forest to train the thing.<p>This is the NLP trade-off of the 21st century: the code is easier to write, but the model is completely opaque, and you really need to burn a lot of electricity to make it work.
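For contrast, here's roughly what the 1980s-style approach looks like: hand-written facts and rules plus a trivial forward-chaining inference loop. This is a Python stand-in for the Prolog program the comment has in mind; every fact and rule below is invented for illustration:

```python
# A toy rule-based "hiking preparation" expert system, in the spirit
# of 1980s Prolog programs. Facts and rules are hand-written and
# invented for illustration; inference is simple forward chaining.
facts = {("season", "fall"), ("mountain", "mt_fuji")}

# Each rule: (set of required facts, fact to conclude).
rules = [
    ({("mountain", "mt_fuji")}, ("elevation", "high")),
    ({("season", "fall"), ("mountain", "mt_fuji")}, ("weather", "rainy")),
    ({("weather", "rainy")}, ("gear", "waterproof_jacket")),
    ({("elevation", "high")}, ("prep", "fitness_training")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

answers = forward_chain(facts, rules)
# The derived facts now include ("gear", "waterproof_jacket") and
# ("prep", "fitness_training") -- the "waterproof jacket" advice
# from the press release, minus any machine learning.
```

The trade-off cuts both ways, of course: the rules are transparent and cheap to run, but someone has to write and maintain every one of them by hand.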