
Launch HN: Helicone.ai (YC W23) – Open-source logging for OpenAI

166 points | by justintorre75 | about 2 years ago
Hi HN - Justin, Scott, and Barak here. We're excited to introduce Helicone (https://www.helicone.ai), an open-source logging solution for OpenAI applications. Helicone's one-line integration logs the prompts, completions, latencies, and costs of your OpenAI requests. It currently works with GPT, and can be integrated with one line of code. There's a demo at https://www.helicone.ai/video.

Helicone's core technology is a proxy that routes all your OpenAI requests through our edge-deployed Cloudflare Workers. These workers are highly reliable and add no discernible latency in production environments. As a proxy, we offer more than just observability: we provide caching and prompt formatting, and we'll soon add user rate limiting and model-provider backoff to make sure your app stays up when OpenAI is down.

Our web application then provides insights into key metrics, such as which users are disproportionately driving costs and what the token usage is, broken down by prompt. You can filter this data with custom logic and export it to other destinations.

Getting started with Helicone is quick and easy, regardless of the OpenAI SDK you use. Our proxy-based solution does not require a third-party package: simply change your request's base URL from https://api.openai.com/v1 to https://oai.hconeai.com/v1. Helicone can be integrated with LangChain, LlamaIndex, and all other OpenAI-native libraries. (https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code)

We have exciting new features coming up, one of which is an API to log user feedback. For instance, if you're developing a tool like GitHub Copilot, you can log when a user accepted or rejected a suggestion. Helicone will then aggregate your result quality into metrics and suggest when fine-tuning could save costs or improve performance.

Before launching Helicone, we developed several projects with GPT-3, including airapbattle.com, tabletalk.ai, and dreamsubmarine.com. For each project, we used a beta version of Helicone, which gave us instant visibility into user engagement and result-quality issues. As we talked to more builders and companies, we realized they were spending too much time building in-house solutions like this, and that existing analytics products were not tailored to inference endpoints like GPT-3.

Helicone is developed under the Commons Clause v1.0 with the Apache 2.0 license, so you can run Helicone within your own infrastructure. If you do not want to self-host, we provide a hosted solution with 1k free requests per month to try the product. Beyond that we offer a paid subscription, and you can view our pricing at https://www.helicone.ai/pricing.

We're thrilled to introduce Helicone to the HackerNews community and would love to hear your thoughts, ideas, and experiences related to LLM logging and analytics. We're eager to engage in meaningful discussions, so please don't hesitate to share your insights and feedback with us!
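To make the base-URL swap described above concrete, here is a minimal sketch using the pre-1.0 openai Python package. The key, model, and prompt are placeholders, and only the api_base line is Helicone-specific; the hosted service may additionally expect a Helicone API key sent as a request header (see their docs).

    # Minimal sketch: route OpenAI calls through the Helicone proxy (pre-1.0 SDK).
    import openai

    openai.api_key = "sk-..."                          # your usual OpenAI key
    openai.api_base = "https://oai.hconeai.com/v1"     # was https://api.openai.com/v1

    # The request below now passes through Helicone's Cloudflare Worker, which
    # logs the prompt, completion, latency, and cost before forwarding to OpenAI.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello to HN."}],
    )
    print(resp["choices"][0]["message"]["content"])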

27 comments

ianbicking about 2 years ago
I have _specifically_ thought of writing something just like this, so it's awesome to see it!

One thing I would really like to store with my requests is the template and parameters that created the concrete prompt. (This gets a little confusing with ChatGPT APIs, since the prompt is a sequence of messages.) Custom Properties allow a little metadata, but not a big blob like a template. I see there's a way to have Helicone do the template substitution, but I don't want that; I have very particular templating desires. But I _do_ want to be able to understand how the prompt was constructed. There's some risk on the client side that I would send data that did not inform the prompt construction, ballooning storage or causing privacy issues, so the feature isn't without danger.

Backoffs and other rate limiting sound great. It would also be great to set a maximum cost for a user or app and have the proxy block the user once that is reached, as a kind of firewall against overuse.

Note your homepage doesn't have a <title> tag.
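One way to get part of what's described here without shipping the whole template blob is to log a small template identifier as a custom property and keep the template text elsewhere. The Helicone-Property-<Name> header convention below is an assumption based on Helicone's docs rather than anything stated in this thread, and all names and values are illustrative.

    # Hedged sketch: tag a proxied request with a template id via custom-property
    # headers, calling the OpenAI-compatible endpoint directly with `requests`.
    import requests

    headers = {
        "Authorization": "Bearer sk-...",              # your OpenAI key, as usual
        "Content-Type": "application/json",
        # Assumed Helicone convention: arbitrary metadata as Helicone-Property-* headers.
        "Helicone-Property-Template-Id": "chat-summary-v3",  # small id, not the full template
        "Helicone-Property-App": "example-app",
    }
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Summarize today's notes."}],
    }
    resp = requests.post("https://oai.hconeai.com/v1/chat/completions",
                         headers=headers, json=body)
    print(resp.json()["choices"][0]["message"]["content"])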
smithclay about 2 years ago
Congrats on the launch. Like a lot of people right now, I'm doing a side project with OpenAI + LangChain and immediately got some value out of this, specifically:

* When your chains get long/complex enough in LangChain, it's really hard to tell from debug output what final prompt is actually being sent, how much it costs, or when an agent is running away. This pretty much solves that for me.

As a "prompt developer", one thing that'd be incredibly useful is a way to see/export all of my prompts and responses over time to help me tune them (basically a "Prompt" button in the left nav).

Congrats on the launch. So nice to see a tool in this space that lets you get up and running in 4 minutes.
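For the LangChain case mentioned here, pointing the model wrapper at the proxy is enough for every intermediate call in a chain to show up with its final rendered prompt, token counts, and cost. A rough sketch under the LangChain API of that era follows; the openai_api_base field, model name, and prompt are assumptions/placeholders, not details from the thread.

    # Rough sketch: run a LangChain chain through the Helicone proxy.
    from langchain.chat_models import ChatOpenAI
    from langchain.prompts import ChatPromptTemplate
    from langchain.chains import LLMChain

    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        openai_api_key="sk-...",
        openai_api_base="https://oai.hconeai.com/v1",  # instead of api.openai.com/v1
    )
    prompt = ChatPromptTemplate.from_template("Summarize: {text}")
    chain = LLMChain(llm=llm, prompt=prompt)

    # Each LLM call the chain makes is now logged by the proxy.
    print(chain.run(text="Chains get hard to debug once they grow."))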
ssddanbrown about 2 years ago
Congrats on the launch! I noticed you are referring to the project as open source while using the Commons Clause, which isn't typically considered an open-source license, so I raised this via the discussions on the GitHub repo [1].

[1]: https://github.com/Helicone/helicone/discussions/165
transitivebs about 2 years ago
Here's another open-source alternative: https://github.com/6/openai-caching-proxy-worker
NetOpWibby about 2 years ago
Y'all committed an env file: https://github.com/Helicone/helicone/pull/136/files O___O
StablePunFusion about 2 years ago
The entire company is based around the idea of providing metrics for one closed-source platform's API? Or is the "OpenAI application observability" just one part of what the company does? Otherwise it seems like taking the "all eggs in one basket" approach to the extreme.
VWWHFSfQ about 2 years ago
Does this run afoul of OpenAI's terms of service in any way? That is, using a commercial proxy/broker like this to access their API services instead of calling them directly.
olliepop about 2 years ago
The reality is that virtually all of the public AI tech available right now is from OpenAI. Over time, as more models are commercialised and become generally available, it's probable that Helicone will serve them too.

Congrats Justin and team! Excited for you.
social_quotient about 2 years ago
It could be cool if you offered quota enforcement. Then, if something goes rogue on the application side, your layer could offer not only observability but also a ceiling to protect against unexpected cost overruns.
yawnxyz about 2 years ago
Wow, so cool! Does this act kind of like a logger, and would we be able to have access to the logs later on, or should we bring our own logger as well?

(Also just curious, are you guys just using D1 or KV under the hood?)
otterley about 2 years ago
What is the market for this solution? Whose pain point(s) are you solving? And what stops OpenAI from "Sherlocking"* you, i.e., making whatever you're building a free included feature and extinguishing the market?

*Or, to use a more modern analogy, "Evernoting"
samstave about 2 years ago
This is great. However, I'm concerned that the whole AI 'ecosystem' of bolt-ons, add-ons, plug-ins, etc. will end up as a billion services all looking for payment, and for any complex startup that needs a bunch of these services to build its own service/product/platform, it will be like the current state of streaming services.

So you'll be trying to manage a ton of SLAs, contracts, payment requirements, and limits on service access that may be out of your budget across all the various services, API calls, etc.

This is going to be an interesting cluster...

So we need a company that's a single service to access all available AI connections and the multiple billing channels.

However, then you have that as a single point of failure.
dcreater about 2 years ago
So route traffic through you so that you can monetize the insights from the data??
killthebuddha about 2 years ago
LLM infrastructure is *the spot to be right now* for startups.
curo about 2 years ago
Happy Helicone customer here. It's a dead-simple setup. It's great to have the extra charts and logging to debug issues and make sure all is running well.

Congrats to the team!
antonok about 2 years ago
Congrats! Helicone provides one of the biggest missing pieces of the AI tool-dev experience today. Thanks for building this and sharing it with the rest of us!
Hansenq about 2 years ago
Congrats! We've been happy users of Helicone for the past few months--it literally helped us solve a bug with OpenAI's API where we didn't know why requests were failing and we had failed to log some of their responses. Helicone helped us figure out really quickly that it was a token-limit issue, especially since the logging around the API hasn't been great.

Love how easy it was to integrate too--just one line to swap out the OpenAI API with theirs.
CGamesPlay about 2 years ago
Perfect! I was just today thinking about how I need to build up my own data set and should be logging all of my transcripts. This is exactly what I wanted.

I want to gather up my chat transcripts, then identify poor experiences in the chat, and then use that to guide fine-tuning. I don't believe that OpenAI actually provides anything to enable this as part of their platform, right?
nico about 2 years ago
Awesome.

Is there a consumer version of this?

Like an alternative ChatGPT client or Chrome extension that will save my prompts/conversations, tell me which ones I liked more, and let me search through them?
haolez about 2 years ago
From a VC perspective, this sounds like easy money, since it'll probably be acquired by OpenAI or Microsoft if it succeeds just a little.
speculator about 2 years ago
This is exactly what we needed. We were one day away from an internal build-out, so the timing couldn't be better.
Kkoala about 2 years ago
Hmm, so to integrate, I basically have to send my API key to you on every request? Not great.
jacquesm about 2 years ago
Are you going to be peeking at the data for your own use or is that off limits somehow?
cphoover about 2 years ago
Is the data encrypted in such a way that your company cannot read conversations?
zekone about 2 years ago
onboarded and started using it in literally 2 minutes. nice
jacobpedd about 2 years ago
This is awesome guys!
ninjaa about 2 years ago
Congrats on the launch!