Hey HN,

I built an open-source Perplexity clone that can run both local and cloud LLMs.

It's fully self-hostable through Docker and uses Ollama to support local LLMs.

The demo video in the repository shows it running locally with llama3 on my M1 MacBook Pro.

I'm open to any suggestions or feedback, thanks!
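For anyone curious about the local-model path: here's a minimal sketch of querying a locally running Ollama server over its REST API (the model name llama3 and the default port 11434 are assumptions; the app's actual client code may differ).

    import json
    import urllib.request

    # Ask a locally pulled model (e.g. via `ollama pull llama3`) a question
    # through Ollama's /api/generate endpoint on its default port.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Why is the sky blue?",
            "stream": False,  # return one JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])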