Ask HN: Have LLM API Updates or Deprecations Impacted You?

4 points by fatso784 | almost 2 years ago
Have you encountered issues around LLM model or API updates? Are you concerned about model updates or deprecations?

We'd love to hear about your experiences. We are academic researchers at Harvard CS investigating user experiences with LLM APIs like OpenAI's, particularly in light of recent or 'silent' changes. For instance, the deprecation of Codex and certain updates to ChatGPT have posed challenges for some, making research irreproducible or affecting software projects. Hearing from those impacted will help us develop guidelines on model updates and deprecations, so that companies that provide LLM APIs can improve their rollout processes and avoid pitfalls in the future.

If you are interested and affected by model updates, please consider signing up for a virtual interview by filling out this form: https://forms.gle/zDhm1A16kZie4dPJ8 We will pay you a $30 Amazon gift card for an hour of your time. Of course, if you prefer, you may also reply to this post and share your experiences here.

Thank you!

1 comment

ilaksh | almost 2 years ago

I was using Codex for a website builder service, and was really freaking out because of the lack of response from OpenAI about increasing the rate limit. But after the ChatGPT API came out, I came to the conclusion that ChatGPT was just as good as or better than Codex anyway, so I immediately switched to the Chat API.

I also wanted to use OpenAI's Edit endpoint, but now that the models are faster that is less of an issue, because it's not a big deal to rewrite most files that fit in context. Also, with something like the new function call support, you can make something like a find-and-replace function call, or find and replace between a start and end marker.

I think the biggest issue people are going to have is that within a few months OpenAI will release fine-tuning for the ChatGPT models, and that will probably work significantly better than the "in-context training" (i.e., adding relevant help to prompts per user query) that people are using now, at least for some use cases. So there will be a lot of projects that just finished getting vector search to enhance prompts working, and then that will immediately be kind of obsolete. Although it's probably going to be expensive.

But overall I am pleased with the rate of progress and updates from OpenAI.

I am still hopeful that within not too many more months we will finally have really strong code generation from open models. There are definitely some better reasoning open models coming out lately, but they're not quite there yet.
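
[Editor's note] As an illustration of the find-and-replace function call the commenter describes, here is a minimal Python sketch. The schema follows the JSON-Schema style used by the 2023-era function-calling feature of the Chat Completions API; the function name, file path, and dispatch helper are hypothetical, and the request that actually sends the schema to the model is omitted because client syntax varies by SDK version.

    import json

    # Tool schema advertised to the model (illustrative name and fields).
    FIND_AND_REPLACE_FN = {
        "name": "find_and_replace",
        "description": "Replace every occurrence of `find` with `replace` in the given file.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File to edit"},
                "find": {"type": "string", "description": "Exact text to search for"},
                "replace": {"type": "string", "description": "Replacement text"},
            },
            "required": ["path", "find", "replace"],
        },
    }

    def apply_find_and_replace(path: str, find: str, replace: str) -> str:
        """Apply the edit locally once the model requests it."""
        with open(path, "r", encoding="utf-8") as f:
            text = f.read()
        with open(path, "w", encoding="utf-8") as f:
            f.write(text.replace(find, replace))
        return f"Replaced {text.count(find)} occurrence(s) in {path}"

    def dispatch(function_call: dict) -> str:
        """Route a model-produced function call, e.g.
        {"name": "find_and_replace",
         "arguments": "{\"path\": \"app.py\", \"find\": \"old\", \"replace\": \"new\"}"}.
        """
        args = json.loads(function_call["arguments"])
        if function_call["name"] == "find_and_replace":
            return apply_find_and_replace(**args)
        raise ValueError(f"Unknown function: {function_call['name']}")

The same pattern extends to the "between start and end" variant the commenter mentions: add start/end marker parameters to the schema and restrict the replacement to the matched span.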
I was using Codex for a website builder service. And really freaking out because of the lack of response from OpenAI about increasing the rate limit. But after the ChatGPT API came out, I came to the conclusion that ChatGPT was just as good or better than Codex anyway. So I immediately switched to the Chat API.<p>I also wanted to use OpenAI&#x27;s Edit endpoint but now that the models are faster that is less of an issue because it&#x27;s not a big deal to rewrite most files that fit in context. Also with something like the new function call support, you can make something like a find and replace function call, or find and replace between start and end.<p>I think the biggest issue people are going to have is that within a few months OpenAI will release fine-tuning for the ChatGPT models, and that probably will work significantly better than the &quot;in context training&quot; (i.e., adding relevant help to prompts per user query) that people are using now, at least for some use cases. So there will be a lot of projects that just finished getting vector search to enhance prompts working and then that will immediately be kind of obsolete. Although it&#x27;s probably going to be expensive.<p>But overall I am pleased with the rate of progress and updates from OpenAI.<p>I am still hopeful that within not too many more months we will finally have really strong code generation from open models. There are definitely some better reasoning open models coming out lately but not quite there yet.