I already did some googling and GPT-ing, but I'm hoping someone here can explain it better. Is it a niche thing, or could it be applied to anything where an LLM may be useful?
Ignoring LLMs for a moment, suppose you could represent any piece of knowledge as a vector. If the vector is an accurate embedding of that knowledge, and every embedding uses the same dimensions and the same scale for each dimension, then you can compare anything to anything. You can compare a picture of a horse to a Wikipedia article about equines. It's all part of the same knowledge space, so searching and comparing become simple mathematical operations, typically a cosine similarity or dot product between vectors. That's why embeddings and vector databases matter: embeddings store all kinds of information in a consistent form, and vector databases let you search across that knowledge efficiently.
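To make that concrete, here's a minimal sketch in plain NumPy. The vectors are made up for illustration; in a real system they would come from an embedding model (e.g. a multimodal model that maps both images and text into the same space), and the "database" would be a proper vector store doing nearest-neighbor search at scale.

```python
import numpy as np

# Toy "embeddings" - in practice these come from an embedding model.
# The values here are invented purely for illustration.
horse_photo    = np.array([0.9, 0.1, 0.3, 0.0])
equine_article = np.array([0.8, 0.2, 0.4, 0.1])
pizza_recipe   = np.array([0.0, 0.9, 0.1, 0.7])

def cosine_similarity(a, b):
    """Compare two embeddings: ~1.0 means very similar, ~0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "vector database" in miniature: just labeled vectors in a dict.
database = {
    "Wikipedia article about equines": equine_article,
    "Pizza recipe": pizza_recipe,
}

# Query with the horse photo's embedding and rank the entries by similarity.
# This ranking is the nearest-neighbor search a vector DB performs at scale.
query = horse_photo
results = sorted(
    database.items(),
    key=lambda item: cosine_similarity(query, item[1]),
    reverse=True,
)

for label, vec in results:
    print(f"{cosine_similarity(query, vec):.3f}  {label}")
```

The horse photo scores highest against the equine article because their vectors point in nearly the same direction, which is the whole trick: once everything lives in one space, "related" just means "geometrically close."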