How are people actually using vector databases?<p>The closest explanation to a use case architecture I've seen recently was <a href="https://mattboegner.com/knowledge-retrieval-architecture-for-llms/" rel="nofollow noreferrer">https://mattboegner.com/knowledge-retrieval-architecture-for...</a> - it basically describes doing knowledge retrieval (keyword parsing) on LLM queries, feeding that to a vector DB to do a similarity search for the top K documents most similar to the parsed keywords, then feeding that list back into the LLM as potentially useful documents it can reference in its response. It's neat but it seems a bit hacky. Is that really the killer app for these things?
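For concreteness, the flow that article describes boils down to something like this sketch. The embed() and complete() helpers are placeholders for whatever embedding model and LLM you actually use, and a real vector DB would replace the brute-force cosine similarity with an ANN index:

```python
# Rough sketch of the retrieve-then-generate flow described above.
# embed() and complete() are placeholders, not a specific vendor API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: call your embedding model and return a vector."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Placeholder: call your LLM with the augmented prompt."""
    raise NotImplementedError

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Cosine similarity against every stored document vector; a vector DB
    swaps this brute-force scan for an approximate-nearest-neighbour index."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return np.argsort(sims)[::-1][:k]

def answer(question: str, docs: list[str], doc_vecs: np.ndarray) -> str:
    hits = [docs[i] for i in top_k(embed(question), doc_vecs)]
    context = "\n---\n".join(hits)
    return complete(
        f"Use these documents if relevant:\n{context}\n\nQuestion: {question}"
    )
```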
Very cool that there's a "preview" downloadable version: <a href="https://cloud.google.com/alloydb/omni" rel="nofollow noreferrer">https://cloud.google.com/alloydb/omni</a>
> AlloyDB AI allows users to easily transform their data into vector embeddings with a simple SQL function for in-database embeddings generation<p>Slick!
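If it works the way the announcement reads, using it from a client would presumably look something like the sketch below. The embedding() function name, the model id, and the cast to a pgvector column are my guesses from the announcement's wording, not a verified API:

```python
# Sketch only: assumes an embedding() SQL function as described in the
# announcement; the function name, model id, schema, and ::vector cast
# are assumptions, not documented AlloyDB AI behavior.
import psycopg2

conn = psycopg2.connect("host=... dbname=mydb user=...")  # your AlloyDB instance
with conn, conn.cursor() as cur:
    # Generate an embedding for each row in-database and store it in a
    # pgvector column, without round-tripping the text through the app layer.
    cur.execute("""
        UPDATE documents
        SET body_embedding = embedding('textembedding-gecko', body)::vector
        WHERE body_embedding IS NULL
    """)
```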
> AlloyDB AI allows users to easily transform their data into vector embeddings with a simple SQL function for in-database embeddings generation, and runs vector queries up to 10 times faster than standard PostgreSQL. Integrations with the open source AI ecosystem and Google Cloud’s Vertex AI platform provide an end-to-end solution for building gen AI applications.<p>- Embrace [X]<p>- Extend [X]<p>- Extinguish [?]<p>Will they allow it to use custom embeddings?
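Even if the built-in embedding generation turns out to be tied to Vertex AI models, AlloyDB supports the pgvector extension, so storing and querying your own precomputed embeddings should still work. A rough sketch, with made-up table and column names:

```python
# Sketch: storing and querying custom (precomputed) embeddings via pgvector.
# Table/column names are made up; the vector here is just a stand-in for
# output from whatever embedding model you prefer.
import psycopg2

vec = [0.12, -0.03, 0.88]
vec_literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector text format

conn = psycopg2.connect("host=... dbname=mydb user=...")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute(
        "CREATE TABLE IF NOT EXISTS docs "
        "(id serial PRIMARY KEY, body text, emb vector(3))"
    )
    cur.execute("INSERT INTO docs (body, emb) VALUES (%s, %s)", ("hello", vec_literal))
    # Nearest-neighbour search by cosine distance against your own vectors.
    cur.execute(
        "SELECT body FROM docs ORDER BY emb <=> %s::vector LIMIT 5", (vec_literal,)
    )
    print(cur.fetchall())
```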