RAG, fine-tuning, API calling and gptscript for Llama 3 running locally

30 points by lewq 12 months ago

4 comments

lewq 12 months ago
But what I think is really interesting is the ability to define a helix app yaml like:

https://github.com/helixml/example-helix-app/blob/main/helix.yaml

Then version control it and deploy your updated LLM app with a single git push. LLMGitOps?
Comment #40467444 not loaded.
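To make the "LLM app as a versioned config" idea concrete, here is a minimal hypothetical sketch of such a yaml. The field names below are illustrative assumptions for a RAG-plus-tool-calling app on a local Llama 3, not the actual Helix schema; the real format is in the helix.yaml linked above.

    # Hypothetical app config -- field names are illustrative, not the real Helix schema.
    name: support-bot
    model: llama3:instruct            # local Llama 3 served by the runner
    rag:
      sources:
        - ./docs/**/*.md              # documents to index for retrieval
    tools:
      - name: get_order_status        # example API-calling tool (hypothetical endpoint)
        url: https://api.example.com/orders/{id}
    prompts:
      system: |
        You are a helpful support assistant. Answer from the indexed docs
        and call tools when you need live data.

Committed to a repo, a config like this could be rolled out on every git push, which is what makes the "LLMGitOps" framing apt.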
lewq 12 months ago
Deck: https://docs.google.com/presentation/d/11bBUP8gBekmI7GkwvGdrw2j5L3gek4CBNz47SORjuTk/edit
pavelstoev 12 months ago
Very interesting project and good progress on making private LLM use cases more accessible and usable. Please keep going!
NocodeWorks 12 months ago
I've been looking for something that lets me do all of this locally without a bunch of wiring. Will check it out.