Ask HN: Have you reduced costs by caching LLM responses?
2 points by KennyFromIT almost 2 years ago
Providing a chatbot to a large user base means spending a lot of money on similar or repeated requests. I'm looking for best practices or lessons learned from implementing LLM-enabled apps at scale. Thanks in advance.
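
For context, the naive approach I have in mind is an exact-match cache keyed on a hash of the full request. A rough sketch in Python, with an in-memory dict standing in for a shared store like Redis and a hypothetical `call_llm` wrapper standing in for the provider SDK (semantic caching via embeddings would be the next step for near-duplicate prompts):

    import hashlib
    import json

    # In-memory cache; a shared store (e.g. Redis) would replace this in production.
    _cache: dict[str, str] = {}

    def _cache_key(model: str, prompt: str, temperature: float) -> str:
        # Hash the full request so identical requests map to the same entry.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "temperature": temperature},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def cached_completion(model: str, prompt: str, temperature: float = 0.0) -> str:
        key = _cache_key(model, prompt, temperature)
        if key in _cache:
            return _cache[key]  # cache hit: no API cost
        response = call_llm(model, prompt, temperature)  # hypothetical provider call
        _cache[key] = response
        return response

    def call_llm(model: str, prompt: str, temperature: float) -> str:
        # Placeholder for whatever provider SDK is in use.
        raise NotImplementedError

This only helps when prompts repeat verbatim (and with temperature 0), which is why I'm curious how well it holds up in practice versus fuzzier matching.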
no comments