I noticed that when doing queries to AI-based chatbots like Bing or Bard, they have a fundamental handicap of being out of date with current events … is this going to be the Achilles heel preventing AI from totally displacing existing search?
Good observation.<p>The Bing chatbot can answer questions about a sports game that happened yesterday because it runs a conventional query first and feeds the results to the chatbot.<p>I was interested in things like <a href="https://en.wikipedia.org/wiki/Cyc" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cyc</a> ten years ago and made some progress in using Freebase as a generic database for information. In particular, I produced what I believe was the first and only correct conversion of Freebase to RDF, a process I could rerun on Freebase's weekly data dump.<p>One thing I struggled with was that people would obviously need an updated database sooner or later, but most users could live with an out-of-date one. I was quite frustrated that the companies I contacted, though interested in the data, weren't interested in a repeatable solution and would instead go with whatever half-baked system they already had.<p>It's a tough problem for a model like ChatGPT because (1) the model training process is so resource-intensive and (2) if you retrain the model, the "activation" (embedding) vectors produced by the network change, so systems like LangChain that depend on stored vectors will break.<p>Right now the standard procedure for updating the model is to redo the training process completely with the added documents. There is a research area called "continual learning" where you can add a few documents here and there, but the direct approaches suffer from "catastrophic forgetting": the network loses what it already learned unless you blend your new documents into a uniform sample of the old data.<p>I have a simple model that I retrain every day (or every few days, depending on the mode I run it in), and it takes just one minute, so rebuilding from scratch is fine. I've got another model that does the same task and takes more like 45 minutes to train; for my particular task it doesn't really beat my simple model, but for some similar tasks I'd expect it to do better.
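The "blend into a uniform sample" trick is the basis of rehearsal (replay) methods in continual learning. A minimal sketch, with hypothetical names and ratios, assuming you keep a reservoir of earlier examples and mix them into each update batch:

```python
import random

def build_update_batch(new_docs, replay_buffer, batch_size=32, replay_fraction=0.75):
    """Blend new documents with a uniform sample of older ones.

    Training on new_docs alone risks catastrophic forgetting; mixing in
    replayed examples keeps the gradient signal close to the original
    data distribution. Names and ratios are illustrative, not canonical.
    """
    n_replay = int(batch_size * replay_fraction)
    n_new = batch_size - n_replay
    replayed = random.sample(replay_buffer, min(n_replay, len(replay_buffer)))
    fresh = random.sample(new_docs, min(n_new, len(new_docs)))
    return replayed + fresh

def reservoir_add(buffer, item, seen_count, capacity=10_000):
    """Maintain the buffer with reservoir sampling so it stays a uniform
    sample of everything seen so far, without storing the full history."""
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = random.randrange(seen_count)
        if j < capacity:
            buffer[j] = item
```

This only sketches the data-mixing side; the actual gradient updates happen in whatever training loop consumes these batches.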
That would already have me thinking about tradeoffs between training time and the results I get; people who are training foundation models are facing this problem times a billion.
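The Bing approach mentioned at the top of the comment, run a conventional search first and feed the hits to a frozen model, is what's now called retrieval-augmented generation. A hedged sketch, where `search()` is a toy stand-in for a real index and the final model call is left as a comment:

```python
def search(query, k=3):
    # Placeholder for a conventional search backend (hypothetical).
    # A real system would hit a web index or a document store here.
    corpus = {
        "score": "The final score of yesterday's game was 3-1.",
        "weather": "Yesterday it rained all afternoon.",
    }
    return [text for key, text in corpus.items() if key in query.lower()][:k]

def answer(question):
    """Fetch fresh documents at query time, then prompt the (possibly stale)
    model with them so it can answer about events after its training cutoff."""
    context = "\n".join(search(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return prompt  # in a real system: llm(prompt)
```

This sidesteps retraining entirely: the model stays frozen and the freshness comes from the retrieval step.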