Hey everyone!<p>I’m excited to announce the release of my latest project, MiniSearch.<p>I admire Perplexity.ai, Phind.com, You.com, Bing, Bard and all these search engines integrated with AI chatbots. And as a curious developer, I took the chance to create my own version.<p>Using Web-LLM and Transformers.js to run browser-based text-generation models on desktop and mobile, I built a minimalist self-hosted search app in which an AI analyses the results, comments on them and responds to your query by summarising the info. In the backend, it still queries a real search engine, but besides that, no other remote connection happens.<p>Running in the browser and on mobiles requires lightweight models, so we can't expect them to give stellar answers, but using this over the services mentioned earlier has a few advantages:<p>- Availability: The AI is always available and responds at the maximum speed your device allows.
- Privacy: Besides the queries that go anonymously to the actual search engine, nothing else leaves your device.
- No ads/trackers: Get the relevant links clean and fast without being tracked.
- Customization: As it's open-source, you can fork it and re-style it any way you want.<p>You can get started with MiniSearch by cloning the repository from GitHub (<a href="https://github.com/felladrin/MiniSearch">https://github.com/felladrin/MiniSearch</a>) and running it locally, or by using it online on this Hugging Face Space: <a href="https://felladrin-minisearch.hf.space" rel="nofollow noreferrer">https://felladrin-minisearch.hf.space</a>
(Alternative Space address: <a href="https://huggingface.co/spaces/Felladrin/MiniSearch" rel="nofollow noreferrer">https://huggingface.co/spaces/Felladrin/MiniSearch</a>)<p>You can even set it as your browser's address-bar search engine using the query pattern `<a href="https://felladrin-minisearch.hf.space/?q=%s" rel="nofollow noreferrer">https://felladrin-minisearch.hf.space/?q=%s</a>` (where your query replaces %s).<p>At the time of writing, the app uses the TinyLlama and LaMini-Flan-T5 models, but there's an option to try larger models like Mistral 7B (not recommended, though, as it could be slow and break the fast-search experience).<p>That's what I had to share. Thanks for reading!<p>Your feedback means the world to me! Please don't hesitate to reach out if you have any questions or suggestions, or want to learn more.
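<p>In case it helps to picture how the %s pattern works: the browser simply substitutes your URL-encoded search terms for the placeholder before navigating. Here's a minimal sketch (the buildSearchUrl helper is just an illustration, not part of MiniSearch's code):

```javascript
// Hypothetical helper mimicking what the browser does with an
// address-bar search-engine template containing a %s placeholder.
function buildSearchUrl(template, query) {
  // URL-encode the query so spaces and special characters survive,
  // then substitute it for the %s placeholder in the template.
  return template.replace("%s", encodeURIComponent(query));
}

const template = "https://felladrin-minisearch.hf.space/?q=%s";
console.log(buildSearchUrl(template, "self-hosted search"));
// → https://felladrin-minisearch.hf.space/?q=self-hosted%20search
```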