Marqo is an end-to-end vector search engine. It contains everything required to integrate vector search into an application in a single API. Here is a code snippet for a minimal example of vector search with Marqo:<p>import marqo<p>mq = marqo.Client()<p>mq.create_index("my-first-index")<p>mq.index("my-first-index").add_documents([{"title": "The Travels of Marco Polo"}])<p>results = mq.index("my-first-index").search(q="Marqo Polo")<p>Why Marqo?
Vector similarity alone is not enough for vector search. Vector search requires more than a vector database - it also requires machine learning (ML) deployment and management, preprocessing and transformations of inputs as well as the ability to modify search behavior without retraining a model. Marqo contains all these pieces, enabling developers to build vector search into their application with minimal effort.<p>Why not X, Y, Z vector database?
Vector databases are specialized components for vector similarity. They are “vectors in - vectors out”. They still require the production of vectors, management of the ML models, associated orchestration and processing of the inputs. Marqo makes this easy by being “documents in, documents out”. Preprocessing of text and images, embedding the content, storing meta-data and deployment of inference and storage is all taken care of by Marqo. We have been running Marqo for production workloads with both low-latency and large index requirements.<p>Marqo features:<p>- Low-latency (10’s ms - configuration dependent), large scale (10’s - 100’s M vectors).
- Easily integrates with LLMs and other generative AI - retrieval-augmented generation using a knowledge base.
- Pre-configured open source embedding models - SBERT, Huggingface, CLIP/OpenCLIP.
- Pre-filtering and lexical search.
- Multimodal model support - search text and/or images.
- Custom models - load models fine tuned from your own data.
- Ranking with document meta data - bias the similarity with properties like popularity.
- Multi-term multi-modal queries - allows per query personalization and topic avoidance.
- Multi-modal representations - search over documents that have both text and images.
- GPU/CPU/ONNX/PyTorch inference support.<p>See some examples here:<p>Multimodal search:
[1] <a href="https://www.marqo.ai/blog/context-is-all-you-need-multimodal-vector-search-with-personalization" rel="nofollow noreferrer">https://www.marqo.ai/blog/context-is-all-you-need-multimodal...</a><p>Refining image quality and identifying unwanted content:
[2] <a href="https://www.marqo.ai/blog/refining-image-quality-and-eliminating-nsfw-content-with-marqo" rel="nofollow noreferrer">https://www.marqo.ai/blog/refining-image-quality-and-elimina...</a><p>Question answering over transcripts of speech:
[3] <a href="https://www.marqo.ai/blog/speech-processing" rel="nofollow noreferrer">https://www.marqo.ai/blog/speech-processing</a><p>Question answering over technical documents and augmenting NPCs with a backstory:
[4] <a href="https://www.marqo.ai/blog/from-iron-manual-to-ironman-augmenting-gpt-with-marqo-for-fast-editable-memory-to-enable-context-aware-question-answering" rel="nofollow noreferrer">https://www.marqo.ai/blog/from-iron-manual-to-ironman-augmen...</a>
I get your larger point, but the errors and phrasing are a bit off-putting.<p>Vector similarity alone _IS_ enough for vector search. That's literally what "search" means in this context! Finding another vector within an epsilon bound given a metric. After the 3rd read, I think I understand the point you're trying to make, and you might be right.<p>There might be room in the market for an integrator, an all-in-one platform. It won't have the best performance or functionality - I doubt it would win in _any_ category. But if you can get the business model right, I could imagine such a product having sizeable market share. Hm...<p>Edit:
I'm also curious about the dimension and metric used. Any numbers about latency or size are kinda pointless without those :).<p>1 point in 1536-D space (what OpenAI uses), 4-byte floats == ~6KB, so even 100 million points is only ~600G...
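The arithmetic above checks out for raw float32 vectors; a quick sanity check in Python (note this ignores index overhead like HNSW graph links, metadata, and replicas, which can add a large multiple):

```python
# Back-of-envelope storage for raw float32 vectors at 1536 dimensions
# (the OpenAI embedding size mentioned above).
dims = 1536
bytes_per_float = 4                        # float32
bytes_per_vector = dims * bytes_per_float  # 6144 bytes, i.e. ~6 KB

n_vectors = 100_000_000                    # 100 million points
total_gb = n_vectors * bytes_per_vector / 1e9
print(bytes_per_vector, total_gb)          # 6144 bytes/vector, ~614 GB raw
```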
I guess if you wanted to do decompounding and stemming you'd have to create fields with the stemmed values and the decompounded values yourself and ... then implement it for the queries as well? Or is there a way to do that kind of thing somewhere in there?
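As far as I can tell the docs don't show built-in stemming/decompounding, so yes - one sketch of doing it client-side is below. The stemmer here is a deliberately naive suffix-stripper standing in for a real one (e.g. Snowball), and the `title_stemmed` field name is made up:

```python
# Toy client-side preprocessing: precompute a stemmed field per document and
# apply the SAME function to query strings before searching that field.

def naive_stem(word: str) -> str:
    # Deliberately crude suffix stripping - illustration only, not a real stemmer.
    for suffix in ("ing", "edly", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text: str) -> str:
    return " ".join(naive_stem(w) for w in text.lower().split())

doc = {
    "title": "The Travels of Marco Polo",
    "title_stemmed": preprocess("The Travels of Marco Polo"),
}
# At query time you'd search the stemmed field with the preprocessed query,
# e.g. mq.index("my-first-index").search(q=preprocess(user_query)).
```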
Probably a stupid question - is there a way to use this to search over graph data, i.e. some way to do graph embeddings here to map a graph to vectors?
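Not a stupid question - one route is graph -> per-node embedding vectors -> vector index. Below is a toy embedding (each node represented by its k-step random-walk distribution); real systems would use node2vec, GraphSAGE, etc. Whether Marqo can ingest precomputed vectors directly is worth checking in its docs, since the API shown above embeds text/images for you:

```python
from math import sqrt

def walk_embedding(edges, n, k=3):
    # Row-normalized adjacency matrix = random-walk transition matrix.
    P = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        P[u][v] = P[v][u] = 1.0
    for row in P:
        s = sum(row)
        if s:
            for j in range(n):
                row[j] /= s
    # Node i's embedding = probability distribution of a k-step walk from i.
    emb = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        emb = [[sum(e[a] * P[a][j] for a in range(n)) for j in range(n)]
               for e in emb]
    return emb

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx, ny = sqrt(sum(a * a for a in x)), sqrt(sum(b * b for b in y))
    return dot / (nx * ny) if nx and ny else 0.0

# Tiny example: two triangles (0-1-2) and (3-4-5) joined by the edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
emb = walk_embedding(edges, 6)
```

With these vectors, nodes in the same triangle come out more similar than nodes across the bridge, which is the kind of structural signal a vector search over graphs would rely on.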