It seems like a recurring narrative is that this level of AI will disrupt Google and others, but why? Is it just because it will become easier to create “shovelware” articles that undermine the uniqueness of pages, or a personal-assistant type feature (“Book me the best hotel in Phoenix”), or something else?
> It seems like a recurring narrative is that this level of AI will disrupt Google and others, but why? Is it just because it will become easier to create “shovelware” articles that undermine the uniqueness of pages, or a personal-assistant type feature (“Book me the best hotel in Phoenix”), or something else?<p>Yes. And lots more.<p>Part of Google's competitive advantage is a combination of real and perceived barriers to entry. ChatGPT has stretched what people think is possible. How feasible those futures actually are may matter relatively little at this point, since the excitement alone will drive a lot of investment.<p>To horribly mix metaphors: Google wants everyone to think its resources, mindshare, talent pool, and track record create a huge moat. They want everyone to think the stone walls are steep and the risk of storming the castle is just too high. But perhaps the emperor has no clothes? Maybe the castle doesn't need to be stormed? Castles are drafty; who wants to live there anyway? Maybe drafty castles provide nice cooling for server farms? Perhaps castles will just become tourist attractions as new architectures are developed. But I digress. Maybe this gives some ideas?
This snippet from an old interview seems relevant: <a href="https://www.youtube.com/watch?v=XeIIpLqsOe4">https://www.youtube.com/watch?v=XeIIpLqsOe4</a><p>In short: Schmidt is saying that Google returning lots of pages is a "bug," and that the future of search is understanding what the user wants and providing a single best answer.
I'd be more worried about the problems Google outlined in their paper "Machine Learning: The High Interest Credit Card of Technical Debt" (<a href="https://research.google/pubs/pub43146/" rel="nofollow">https://research.google/pubs/pub43146/</a>)<p>Once more content is created with AI, and then AI is trained on that content, we get an irreversible feedback loop, because you can no longer tell who or what wrote your training data. It's that paper, but without the human in the loop any more.
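That feedback loop can be illustrated with a toy simulation (my own sketch, not from the paper): stand in for a generative model with a Gaussian fit, and at each generation retrain on data the previous generation produced. Because each finite sample slightly underestimates the true spread, the corpus's diversity steadily collapses.

```python
# Toy sketch of the train-on-your-own-output loop.
# The "model" is just a Gaussian fitted to its training data;
# each generation's corpus is sampled from the previous fit.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # "human-written" seed corpus
initial_std = data.std()

for generation in range(500):
    mu, sigma = data.mean(), data.std()      # "train" on the current corpus
    data = rng.normal(mu, sigma, size=50)    # next corpus is pure model output

print(f"corpus std: {initial_std:.3f} -> {data.std():.3f}")
```

Running this, the standard deviation of the synthetic corpus shrinks toward zero: each generation preserves only what the previous one could reproduce, which is the no-human-in-the-loop worry in miniature.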
If a really good, dynamic chat-like agent can answer your queries and has been trained on some large, relevant subset of the Web, it kind of short-circuits the whole "search engine" concept. Why would I want to shovel through pages of crap when my ChatGPT AI bro can just tell me the critical things I'm asking about?
Similar to Microsoft being threatened by the Internet -- isn't it logical to throw as many resources as possible at it, to conquer the new frontier? Otherwise it might mutate into something you don't control that threatens your money printer. GPT-4 is rumored to be trained on 1000x as much data as GPT-3, so it's going to get a lot more interesting.