They announced this at DevDay at the beginning of October.

It's effectively a layer of (much-needed) sugar on top of their existing fine-tuning mechanism.

The challenge with fine-tuning is collecting a representative dataset to tune against. The tooling they added makes it easy to persist your prompts and responses within the OpenAI platform, and then later select those persisted pairs (created with e.g. GPT-4o) and use them to fine-tune a cheaper model (like GPT-4o mini) - such that the more expensive model is effectively "teaching" the cheaper model what to do.

You could do this before, but it was a LOT of work. The new "distillation" features make it easier.
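Roughly, the workflow looks like this - a minimal sketch assuming the OpenAI Python SDK, where the `store`/`metadata` parameters are the DevDay additions for persisting prompt/response pairs, and the `support-triage` tag is purely illustrative:

```python
from openai import OpenAI

client = OpenAI()

# 1. Run traffic through the expensive "teacher" model and persist each
#    prompt/response pair on the OpenAI platform.
response = client.chat.completions.create(
    model="gpt-4o",
    store=True,                           # persist this completion
    metadata={"task": "support-triage"},  # tag it so it can be filtered later
    messages=[
        {"role": "system", "content": "Classify the support ticket."},
        {"role": "user", "content": "My invoice is wrong for the third month running."},
    ],
)

# 2. Later, select the stored completions tagged "support-triage" in the
#    platform and use them as training data to fine-tune the cheaper
#    "student" model, e.g. gpt-4o-mini.
```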
With each announcement, OpenAI kills yet another class of startups. I wonder whether there are areas that OpenAI (and other AI companies) can't enter, because those seem to be the only viable startup ideas in the long term.

Currently, OAI does all of the following:

- offers flagship models
- offers lite models
- offers easy fine-tuning of their models
- offers structured output and guaranteed JSON output
- offers parallel tool/function calling which remains unmatched
- has low API costs
- offers a nice UI for their models
- offers Mac, iOS, Android, and Windows app clients
- offers image generation capabilities INTEGRATED with their language models
- offers two-tier subscription plans for ordinary/pro (team) users
- offers custom GPTs which ordinary people can use to create GPT experiences tailored to specific tasks (no need to build a website on your own)
- allows users to easily share chats!! (it took Anthropic a long time to add this feature, and even now it's not as good as OpenAI's solution)
- offers prompt caching and task scheduling to further save costs
- offers unrivaled voice-to-text models at different sizes (Whisper)
- offers text-to-voice models that feel much more natural than the competition's
- has outstanding documentation
- sets the standard for APIs (all other companies have to follow their conventions, such as `messages` and `.choices[0].message.content` - see the sketch below)
- has the most capable team to, idk, build AGI/ASI...
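For instance, the request/response shape referenced in that last API point has become the de facto convention most other providers mirror - a minimal sketch with the OpenAI Python SDK, model name illustrative:

```python
from openai import OpenAI

client = OpenAI()

# The `messages` list on the way in...
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)

# ...and `.choices[0].message.content` on the way out - the shape that
# "OpenAI-compatible" endpoints from other vendors now copy.
print(response.choices[0].message.content)
```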
From today's ChatGPT search announcement:
"The search model is a fine-tuned version of GPT-4o, post-trained using novel synthetic data generation techniques, including distilling outputs from OpenAI o1-preview."
We've been working on a Python framework where one of the use cases is easy distillation from larger models to smaller open-source models and smaller closed-source models (where you no longer have to use / pay for the larger closed-source API service): https://datadreamer.dev/docs/latest/

Here's a (now slightly outdated) example of OpenAI GPT-4 => OpenAI GPT-3.5: https://datadreamer.dev/docs/latest/pages/get_started/quick_tour/openai_distillation.html

But you can also distill GPT-4 to any model on Hugging Face, or something like Llama-70B to Llama-1B.

For some tasks, this kind of distillation works extremely well given even a few hundred examples of the larger model performing the task.
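The underlying teacher => student pattern, sketched with the raw OpenAI SDK rather than DataDreamer's own API - prompts, file names, and model choices here are illustrative, not the framework's actual interface:

```python
import json
from openai import OpenAI

client = OpenAI()

# 1. Have the larger "teacher" model perform the task a few hundred times.
prompts = ["Summarize: ...", "Summarize: ..."]  # your task inputs
rows = []
for prompt in prompts:
    teacher = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    rows.append({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher.choices[0].message.content},
        ]
    })

# 2. Write the pairs in fine-tuning JSONL format and train the smaller "student".
with open("distill.jsonl", "w") as f:
    f.write("\n".join(json.dumps(r) for r in rows))

training_file = client.files.create(file=open("distill.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
```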
To anyone who thinks models are going to be commoditized: it seems like it's going to be exceedingly difficult to compete with OpenAI. The developer experience of working with them is just too good.

Sure, you could use a different provider, but you're going to be stuck with an incredibly fragmented ops stack. My experience with Google has been shockingly bad, and Anthropic has a good amount of catching up to do. No one else is remotely competitive. Honestly, I would love to see something from Meta long term.
Part of the point of distilling from their models is that I control the resulting model, its availability, and its cost to me. So while this may be a convenient feature, unless I can download the weights it wouldn't replace my workflow.

This does raise the bar for any future startups, though. If your plan was to distill GPT-4 outputs and lease the weights to me through a REST API, I probably won't be interested.