TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Devs looking to implement LLMs are turning to retrieval augmented generation

4 points by bobvanluijt, almost 2 years ago

1 comment

ilaksh, almost 2 years ago
Decent article.

I feel that "implement LLMs" is an inaccurate description; "apply LLMs" is more accurate.

Now that OpenAI offers fine-tuning, there are probably a lot of applications where it makes more sense to use that with little or no retrieval (at least not for core knowledge/skills), because 3.5-turbo is much faster than 4 and can be sufficiently smart for many things after fine-tuning.

Once CodeLlama or any actually smart open model (comparable to OpenAI's) comes out, it will probably be a different situation again. If it's smart enough, then you won't have to let a single company have so much control over your data and business. The ultimate would be to be able to generate training examples for fine-tuning with an open model as well.
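To make the retrieval-augmented generation pattern from the article's title concrete, here is a minimal sketch: retrieve the documents most relevant to a query, then prepend them to the prompt sent to the model. The toy corpus, the word-overlap (Jaccard) scoring standing in for a real embedding model, and the prompt template are all illustrative assumptions, not any vendor's actual API.

```python
import re


def tokenize(text):
    """Lowercase word tokenization -- a crude stand-in for an embedding model."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def retrieve(query, corpus, k=2):
    """Rank documents by Jaccard similarity (word overlap) with the query."""
    q = tokenize(query)

    def score(doc):
        d = tokenize(doc)
        return len(q & d) / len(q | d) if q | d else 0.0

    return sorted(corpus, key=score, reverse=True)[:k]


def build_prompt(query, corpus):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


corpus = [
    "Weaviate is an open-source vector database.",
    "Fine-tuning adapts a model's weights to a task.",
    "Retrieval augmented generation grounds answers in external documents.",
]
print(build_prompt("What is retrieval augmented generation?", corpus))
```

In a production system the overlap scoring would be replaced by vector similarity search over real embeddings; the structure (retrieve, then augment the prompt) stays the same.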