Little demo that searches Hacker News comments for a topic (using <a href="https://hn.algolia.com/api" rel="nofollow">https://hn.algolia.com/api</a>), extracts sentiment and other metadata, then generates a research summary.
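The comment-search step uses the Algolia HN Search API, which filters by item type through its `tags` parameter (`tags=comment` restricts results to comments). A minimal sketch of building such a query with the Python standard library (the `build_comment_search_url` helper is illustrative, not part of the demo):

```python
from urllib.parse import urlencode

ALGOLIA_SEARCH = "https://hn.algolia.com/api/v1/search"

def build_comment_search_url(topic: str, hits_per_page: int = 50) -> str:
    """Build a search URL for HN comments mentioning `topic`.

    The Algolia HN Search API restricts results to comments
    via `tags=comment`.
    """
    params = urlencode({
        "query": topic,
        "tags": "comment",
        "hitsPerPage": hits_per_page,
    })
    return f"{ALGOLIA_SEARCH}?{params}"

# Fetching the results (requires network); each hit carries the
# comment body in its "comment_text" field:
# import json, urllib.request
# with urllib.request.urlopen(build_comment_search_url("RAG")) as resp:
#     hits = json.load(resp)["hits"]
# comments = [h["comment_text"] for h in hits if h.get("comment_text")]
```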
Really proud of the API we've built at <a href="https://substrate.run" rel="nofollow">https://substrate.run</a> – you don't have to think about graphs, but you implicitly create a DAG by relating tasks to each other. Because you submit the entire workflow to our inference service, you get automatic parallelization of dozens of LLM calls for free, zero roundtrips, and much faster execution of multi-step workflows (often running on the same machine).
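The implicit-DAG idea above can be sketched in plain Python: tasks declare only which other tasks they depend on, the graph is never built explicitly, and each "wave" of ready tasks runs in parallel. This is a hypothetical stand-in for the Substrate SDK, not its actual API; `Task` and `run` are invented names.

```python
import concurrent.futures as cf

class Task:
    """Hypothetical task node: relating tasks via `depends_on`
    implicitly forms the DAG; no graph object is ever constructed."""
    def __init__(self, name, fn, depends_on=()):
        self.name, self.fn, self.depends_on = name, fn, tuple(depends_on)

def run(tasks):
    """Execute the implicit DAG: every task whose dependencies are
    satisfied runs in parallel (a stand-in for batched LLM calls)."""
    results, pending = {}, list(tasks)
    with cf.ThreadPoolExecutor() as pool:
        while pending:
            ready = [t for t in pending
                     if all(d.name in results for d in t.depends_on)]
            if not ready:
                raise ValueError("cycle detected in task graph")
            futures = {t.name: pool.submit(
                           t.fn, *(results[d.name] for d in t.depends_on))
                       for t in ready}
            results.update({n: f.result() for n, f in futures.items()})
            pending = [t for t in pending if t not in ready]
    return results

# Usage: two independent "LLM calls" run in the same wave, then a
# summary task consumes both results.
a = Task("sentiment", lambda: 2)
b = Task("metadata", lambda: 3)
s = Task("summary", lambda x, y: x + y, depends_on=(a, b))
print(run([a, b, s])["summary"])  # 5
```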
I’m new to RAG, but I’ve been learning about it lately for fun, and it’s pretty incredible as a concept.<p>What are your thoughts on frameworks like LlamaIndex and LangChain? As a seasoned engineer, I see a ridiculous amount of fluff around an already simple process.