Story time: many years ago, user testing of search ranking worked through a process called "side by sides". This is where a group of raters would compare production search results against an experiment and see which they preferred. Google's goal is to make the result the user wants #1 on that list. So you had known searches and could compare the old results and the new.<p>This was actually labor-intensive at Google scale and thus expensive. It also had problems. For example, certain results might be time-sensitive. Searching for "Simone Biles" now should have her Olympic results near the top of the list. Run that same query in 2 years and the desired result will be different.<p>Also, user behaviour changes over time. If you had user search data from 2000, how would you deal with Reddit search results now? Or the SEO content farms that came later?<p>You need ongoing user behaviour data to continually refine search results. It's a constant arms race.<p>So along comes Chrome. Originally, then-CEO Eric Schmidt didn't really see why Google should get into the browser business. This was in the 2000s: we were still in the grip of IE's dominance (although it was waning) and Firefox was floundering. Back then, browser support was a much bigger problem than it is today. It's why we got things like jQuery.<p>But Chrome went ahead, and even by Chrome 2 or 3 it was so much better than anything else. One big innovation was process-per-tab: Firefox was known for freezing the entire browser because each tab was a thread.<p>But why did Google invest so much in this? Search results.<p>Chrome gave Google insights into how users interacted with search.
Side by sides were no longer necessary because Google had direct insight into how users interacted with search results: whether they clicked a link and immediately left (i.e. bouncing, which absolutely hurts how Google ranks your site and which Google uses to downrank SEO content farms) and which link gave the user what they wanted.<p>This accumulated knowledge and insight from the user's browser is something that no one, other than possibly Microsoft, could even theoretically compete with.<p>Why did I tell this story? Because this article talks about Google's search dominance and doesn't mention Chrome. You cannot talk about one without the other. If you don't mention Chrome, I really question your knowledge of the subject.<p>Disclaimer: Xoogler
> But the real question is, given that Google has around 90% of the search market share, and so has around 90% of the available user data, is the advantage it gives them so unfair that it should be illegal?<p>In the U.S. it’s not illegal to be a monopoly; what’s illegal is using your monopoly position to prevent competitors from entering the market. (This is why it’s not illegal for a small town to have a single gas station, for example.) If 90% of users freely choose to search with Google, that’s fine from a strictly legal point of view. The U.S.’s case was that Google was coercing that choice by bidding extremely high amounts to be the default search engine in web browsers, thus hindering competitors from entering the search market.
And they own Youtube...<p>That will be a massive benefit when it comes to training AI-powered robots, I think. Conceivably, an AI that's seen every plumbing-related video on Youtube would be superior to any human plumber you can imagine, assuming a good mapping between the human hand movements in the videos and its own robotic arms. And so on for other domains.
Early in this article, there's this sentence: "Google’s results are really good."<p>Reminds me of the comedian who, when asked, "How's your wife," would say, "Compared to what?"
I can't get behind the article's conclusions.<p>1) <i>Google’s results are really good.</i> Not quite. Google's results <i>used to be really good</i>, but are now a mix of the helpful (to some), the less helpful, and the fully counterproductive.<p>2) <i>and they’re really good because they have a monopoly on the user data.</i> User data may elevate some products, but the worth of <i>search results</i> stems from the crawled index and the search algorithms.<p>2a) <i>every engineer could leave and a start a new search engine with the exact same source code and that search engine would be worse.</i> Only until their crawlers built a sufficient index. If they can filter out results gamed for Google SEO, they'll be better.<p>3) <i>In a landmark case, a US judge has ruled that Google’s monopoly on search is unlawful.</i> Trim the first 3 letters of the last word and this conclusion will be sound.
Whenever this comes up I'm glad to have read "The Age of Surveillance Capitalism".<p>Not an easy read by any means, but its core hypothesis is spot on IMO: Google's (and others') core business is aggregating and processing as much user data as possible. Every product they create and every decision they make aims to increase or protect their access to that data.
I'm sure all that user data was acquired from consenting users who knew what they were doing, and not at all illegally. Right? And certainly not while taking advantage from a monopoly (or duopoly) in many areas?