
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Observability for LLM apps with structlog and DuckDB

24 points, by edublancas, 12 months ago

3 comments

bbor, 12 months ago
Still reading the post, but for anyone who is more interested in Structlog than this particular tutorial, here's their docs: https://www.structlog.org/en/stable/

Looks preeeetty cool. I rolled my own Python logging system when I was using Textual and RichText, but the methods all revolved around style and flagging content. It never occurred to me to structure *logs* in a way other than the (obviously, super noisy) call stack...

EDIT: Philosophy/"Why Structlog?" doc: https://www.structlog.org/en/stable/why.html
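The "structured logs" idea the comment gestures at can be sketched with only the standard library: instead of formatting prose strings, each log record carries key-value fields that are emitted as machine-parseable JSON. This is an illustrative stand-in, not structlog's actual API; the `JSONFormatter` class and the `fields` key are hypothetical names invented for this sketch.

```python
import json
import logging
import sys

class JSONFormatter(logging.Formatter):
    """Emit each record as one JSON object: the event name plus any extra fields."""
    def format(self, record):
        event = {"event": record.getMessage(), "level": record.levelname}
        # `fields` is attached via logging's `extra` mechanism below.
        event.update(getattr(record, "fields", {}))
        return json.dumps(event)

logger = logging.getLogger("llm_app")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Log an LLM call as a structured event: keys and values, not free-form prose.
logger.info("llm_request", extra={"fields": {"model": "example-model", "tokens": 512}})
```

Each line this produces can be loaded straight into a column store (like the DuckDB approach the post covers) without regex parsing.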
verdverm, 12 months ago
This seems limited compared to the observability frameworks that integrate with the popular LLM frameworks like LangChain and LlamaIndex. When you use those, you want automatic traces for the nested calls hidden down the function call chain.

This really seems more like a Flask API tracing example with an LLM call as an endpoint. Dare I call it a shallow developer marketing piece?

re: LLM observability more generally, the vendors mostly seem to be building their own full-stack suites, whereas I'd like them to ship to my existing Grafana LGTM stack. I'm keen to check out https://docs.openlit.io/latest/introduction and migrate away from LangFuse, but LangFuse's model response feedback -> training dataset loop is really nice.
OutOfHere, 12 months ago
The stronger need for LLM apps is for persistent response caching and reuse. Once this is available at scale with well-defined cache expiration policies, printing logs selectively is easy. The solution ought to also support individual key invalidation as needed.
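The requirements in that comment (response caching with a well-defined expiration policy plus individual key invalidation) can be sketched in a few lines. This is an illustrative in-memory version under assumed semantics; `ResponseCache` and its methods are hypothetical names, and a production cache for LLM responses would persist to disk or a database rather than a dict.

```python
import time

class ResponseCache:
    """LLM response cache with TTL-based expiration and per-key invalidation."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, response)

    def set(self, key, response):
        # Each entry expires `ttl` seconds after it is written.
        self._store[key] = (time.monotonic() + self.ttl, response)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, response = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: evict lazily and report a miss
            return None
        return response

    def invalidate(self, key):
        # Individual key invalidation, e.g. when a prompt template changes.
        self._store.pop(key, None)

cache = ResponseCache(ttl_seconds=60)
cache.set("prompt-hash-abc", "cached completion")
```

Keying on a hash of (model, prompt, parameters) would make reuse safe across identical requests; only hits that miss the cache need to reach the LLM or the logs.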