Still reading the post, but for anyone who is more interested in Structlog than this particular tutorial, here's their docs: <a href="https://www.structlog.org/en/stable/" rel="nofollow">https://www.structlog.org/en/stable/</a><p>Looks preeeetty cool. I rolled my own python logging system when I was using Textual and Rich, but the methods all revolved around style and flagging content. It never occurred to me to structure <i>logs</i> in a way other than the (obviously, super noisy) call stack...<p>EDIT: Philosophy/"Why Structlog?" doc: <a href="https://www.structlog.org/en/stable/why.html" rel="nofollow">https://www.structlog.org/en/stable/why.html</a>
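<p>To illustrate the idea (this is a toy stdlib-only sketch of the bind-context pattern structlog popularized, not structlog's actual implementation; all names here are made up):

```python
import json


class BoundLogger:
    """Toy structured logger: carries key-value context and emits
    one JSON object per event instead of a free-text line."""

    def __init__(self, context=None):
        self.context = dict(context or {})

    def bind(self, **kwargs):
        # Return a NEW logger with extra context attached, so request-scoped
        # fields ride along with every later log call (structlog-style bind).
        return BoundLogger({**self.context, **kwargs})

    def info(self, event, **kwargs):
        # Merge the event name, bound context, and call-site fields
        # into a single flat record and emit it as one JSON line.
        record = {"event": event, **self.context, **kwargs}
        print(json.dumps(record))
        return record


log = BoundLogger().bind(request_id="abc123", user="alice")
log.info("cache_miss", key="profile:42")
# emits: {"event": "cache_miss", "request_id": "abc123", "user": "alice", "key": "profile:42"}
```

The payoff over stack-trace-shaped logs is that every line is machine-parseable, so you can filter by request_id or user instead of grepping noise.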
This seems limited compared to the observability frameworks that integrate with the popular LLM frameworks like LangChain and LlamaIndex. When you use those, you get automatic traces for the nested calls hidden down the function call chain.<p>This really seems more like a Flask API tracing example with an LLM call as an endpoint. Dare I call it a shallow developer marketing piece?<p>re: LLM observability more generally, these tools mostly seem to be building their own full-stack suites, whereas I'd like them to ship to my existing Grafana LGTM stack. I'm keen to check out <a href="https://docs.openlit.io/latest/introduction" rel="nofollow">https://docs.openlit.io/latest/introduction</a> and migrate away from LangFuse, though LangFuse's loop of feeding feedback on model responses back into a training dataset is really nice.
The stronger need for LLM apps is for persistent response caching and reuse. Once this is available at scale with well-defined cache expiration policies, printing logs selectively is easy. The solution ought to also support individual key invalidation as needed.
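<p>Something like this, say (a minimal sketch of the idea, not any existing library; the class and key names are invented for illustration):

```python
import time


class ResponseCache:
    """Tiny LLM-response cache with a per-entry TTL expiration
    policy and explicit individual-key invalidation."""

    def __init__(self, ttl_seconds=3600.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (response, stored_at)

    def put(self, key, response):
        self._store[key] = (response, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        response, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            # Expired under the cache policy: evict and report a miss.
            del self._store[key]
            return None
        return response

    def invalidate(self, key):
        # Individual key invalidation, e.g. after a prompt template change.
        self._store.pop(key, None)


cache = ResponseCache(ttl_seconds=60)
cache.put("prompt:summarize:v1", "cached model output")
cache.get("prompt:summarize:v1")   # hit -> "cached model output"
cache.invalidate("prompt:summarize:v1")
cache.get("prompt:summarize:v1")   # miss -> None
```

At scale you'd want this behind a shared store rather than an in-process dict, but the interface (TTL policy plus per-key invalidation) is the part that matters.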