No, it is not just you: the quality of Google results has gotten dramatically worse. BTW, you can ask Google 'is Google in decline' and see for yourself (DISCLAIMER: my answer on Quora, where you can go into more detail, is highly ranked).<p>In short, the Internet as we have known it has fallen apart, because there are hardly any true organic links left. They used to be the lifeblood of the Web: links based SOLELY on the relevance and quality of the linked page. Such a notion is quaint these days.<p>The Web grew exponentially in the early days, from roughly 1994 to 2008. Google launched with 100M webpages in their index and grew it to 1B in 2000, but they were NOT the first to reach that milestone: the Norwegian search company FAST beat them to it, launching alltheweb.com with a 1B-page index earlier in 2000.<p>FAST was acquired by Yahoo in 2003, and you can guess what happened to them. For a few years Google and Yahoo played off against each other on the size of their indexes, with Google always in the lead. But around 2003 the numbers game stopped, as both announced they would no longer publish index sizes. The Web continued to grow, still basically exponentially, and the next big milestone was the announcement of Cuil in 2008. Cuil was a competing search engine created by a top team of Anna Patterson and Tom Costello, along with Luis Monier from AltaVista. Their claim was that they would launch with an index of 120B pages, bigger than Google's.<p>That was widely considered an outrageous claim, as the notion that Google knows practically everything was already firmly entrenched. But they did manage to stir things up a bit, to the point of Google issuing a vague post on their official blog claiming they knew about 1 trillion URLs.
Of course they did not say anything about indexing all of that, but the damage was done.<p>Shortly after launch, the quality of Cuil's results turned out to be far worse than expected, which is really a shame, as their basic premises were spot on, apart from index quality. Cuil then promptly fizzled out.<p>Note that projecting exponential growth (doubling every 18 months, i.e. quadrupling every three years) since 2008, we would expect 4^4 * 120B, i.e. more than 30 trillion(!) pages for the size of the Web index, with Google knowing 8 times that.<p>Such an expectation is plainly silly, especially when simple queries such as 'Novak Djokovic' or 'Roger Federer' on Google return fewer than 100 results.<p>But all this is only a (smaller) part of the story. Indexing is now a LEGACY technology, more than 20 years old. Users expect much more than a bunch of blue links with matched keywords in response to their queries; much of the time they want direct answers to their questions.<p>The technology to do it has been known for 10 years now: dense vectors, also known as word, sentence and other types of embeddings. Direct answers would then be found by nearest neighbor search over those vectors. The scale of the system would of course have to be in the billions. BTW, it is a very interesting open question how many direct answers Google can provide now, in terms of infoboxes/featured snippets. Google has been coy about the issue, but in my professional opinion, as a founder of multiple search engines, the answer is no more than around 20B.
Feel free to shed more light on the subject and challenge this number.<p>In summary, the time has come for a system based on vectors and nearest neighbor search, with billions of vectors, giving direct answers to queries, with no ads or tracking, and hopefully with an API too.<p>One more DISCLAIMER: such systems are online for all to try and play with, at <a href="https://qaagi.com" rel="nofollow">https://qaagi.com</a> (for causal queries about causes and effects of things, with billions of ranked answers) and <a href="https://yottaanswers.com" rel="nofollow">https://yottaanswers.com</a> (for factoid and general questions, what/how/where etc., with billions of answers). Both projects are led and principally funded by me, Borislav Agapiev.
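<p>The vectors-plus-nearest-neighbor idea above can be sketched in a few lines. This is a toy illustration, not a description of how qaagi.com or yottaanswers.com actually work: the answer texts and 4-d "embedding" vectors below are made up, and a real system would use a learned embedding model and an approximate nearest neighbor index to scale to billions of vectors.

```python
from math import sqrt

# Toy "answer store": each direct answer paired with a made-up 4-d
# embedding vector. In a real system these vectors would come from a
# learned sentence-embedding model and number in the billions.
answers = [
    ("Novak Djokovic is a Serbian tennis player.", [0.9, 0.1, 0.0, 0.2]),
    ("Roger Federer is a Swiss tennis player.",    [0.1, 0.9, 0.1, 0.0]),
    ("Some unrelated direct answer.",              [0.0, 0.1, 0.9, 0.3]),
]

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def nearest_answer(query_vec):
    # Brute-force scan over all vectors; fine for a demo, replaced by
    # approximate NN search (e.g. HNSW) at billion-vector scale.
    return max(answers, key=lambda pair: cosine(query_vec, pair[1]))[0]
```

A query vector close to the first embedding, e.g. nearest_answer([0.8, 0.2, 0.1, 0.1]), retrieves the Djokovic answer directly, with no keyword matching involved.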