Having worked with AI and LLMs for quite a while now on the "wrapper" side, I think the real key is that doing well (fast, accurate, relevant) requires a really, really good ETL process in front of the actual LLM.<p>A "wrapper" will always be better than the foundation models so long as it can do the domain-specific pre-generation ETL and data aggregation better; that is the true moat for any startup delivering solutions using AI.<p>Your moat as a startup is really how good your domain-specific ETL is (ease of use and integration, comprehensiveness, speed, etc.)
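For illustration, here's a minimal sketch of what that pre-generation ETL step could look like. Every name here is hypothetical, and <code>llm_complete</code> stands in for any text-in/text-out model call; the point is the shape of the pipeline, not a real API.
<pre><code>
# Illustrative sketch of domain-specific ETL in front of an LLM call.
# All names (fetch_domain_records, llm_complete, field names) are invented.

def fetch_domain_records(customer_id: str) -> list[dict]:
    # The proprietary integration (CRM, ticketing, logs, ...) lives here;
    # stubbed with sample data so the sketch runs on its own.
    return [{"date": "2025-01-02", "summary": "Asked about invoice 481"},
            {"date": "2025-01-05", "summary": "Reported a login failure"}]

def normalize(records: list[dict]) -> str:
    # Extract, clean, and rank only the fields the task needs, so the
    # context window stays small and relevant.
    lines = [f"{r['date']}: {r['summary']}" for r in records if r.get("summary")]
    return "\n".join(lines[:50])

def answer(customer_id: str, question: str, llm_complete) -> str:
    # llm_complete is any text-in/text-out model call: the commodity part.
    context = normalize(fetch_domain_records(customer_id))
    return llm_complete(f"Context:\n{context}\n\nQuestion: {question}")
</code></pre>
The LLM call at the end is the interchangeable piece; everything above it is where the differentiation lives.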
'In recent years, innovative AI products that didn’t build their own models were derided as low-tech “GPT wrappers.” '<p>The ones derided were those claiming to be 'open-source XY' while being a standard Tailwind template over an OpenAI call, or those claiming a revolutionary XY while 90% of the product was the proprietary model underneath.
I am not sure how many were truly innovative rather than cloneable in a very short time. Using models to empower your app is great; having the model be all of your app while pitching it as something else deserves derision.
Contrary take: "AI founders will learn the bitter lesson" (263 comments): <a href="https://news.ycombinator.com/item?id=42672790">https://news.ycombinator.com/item?id=42672790</a> The gist: "Better AI models will enable general purpose AI applications. At the same time, the added value of the software around the AI model will diminish."<p>Both essays make convincing points; I guess we'll have to see. I like the Uber analogy here: maybe the winners will be those who use the tech in innovative ways while merely leveraging the underlying models rather than building them.
Is this not consensus yet that people in the model layer are fighting commoditization and so-called wrappers have all the moats? I'd written something similar back in Nov of last year, and I thought I was late in writing it down.<p><a href="https://interjectedfuture.com/the-moats-are-in-the-gpt-wrappers/" rel="nofollow">https://interjectedfuture.com/the-moats-are-in-the-gpt-wrapp...</a>
> <i>Imagine it becomes truly trivial to copy cat another product — something as simple as, “hey AI, build me an app that does what productxyz.com does, and host it at productabc.com!” In the past, a new product might have taken a few months to copy, and enjoyed a bit of time to build its lead. But soon, perhaps it will be fast-followed nearly instantly. How will products hold onto their users?</i><p>It's actually not that easy to copy/paste AI agents: prompts take quite a lot of tweaking, and tuning them is a slow, manual process because it's hard to verify that they work for all possible inputs.
This gets even more complicated when you have several agents in the same application that need to interact with each other.
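To make the verification point concrete, a toy eval harness might look like the sketch below. The cases, the substring check, and <code>llm_complete</code> are all stand-ins; real suites are far larger and stricter, which is exactly why copying a tuned agent is slow.
<pre><code>
# Toy eval harness: every prompt tweak has to be re-verified against many
# inputs. CASES and the substring check are illustrative stand-ins.

CASES = [
    ("refund for order 12, please", "refund"),
    ("where is my package?", "tracking"),
    ("cancel my subscription", "cancel"),
]

def evaluate(prompt_template: str, llm_complete) -> float:
    # Fraction of cases where the agent's reply contains the expected
    # intent keyword; a production check would be much more rigorous.
    passed = sum(
        expected in llm_complete(prompt_template.format(msg=msg)).lower()
        for msg, expected in CASES
    )
    return passed / len(CASES)
</code></pre>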
Business as usual. While electricity is remarkable, no one gets extremely rich selling it.
End-user value is the only value that can be sold at a profit.
If everyone has incredibly good AI, then perhaps the unique asset will be training data.<p>Not everyone will have the training data that demonstrates precisely the behavior that your customers want. As you grow, you'll generate more training data. Others can clone your product immediately... but the clone just won't work as well. In your internal evals, you'll see why. It misses a lot of stuff. But they won't understand, because their evals don't cover this case.<p>(This is quite similar to why Bing had trouble surpassing Google in search quality. Bing had great engineers, but they never had the same data, because they never had the same userbase.)
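One way to picture that data advantage in code: the eval set itself is distilled from real usage, so a clone can't even measure what it's missing. The log schema and names below are invented for the sketch.
<pre><code>
# Sketch: the moat is the eval set mined from real usage logs. The log
# schema ("input", "accepted_output") is invented for illustration.

def mine_eval_cases(usage_logs: list[dict]) -> list[tuple[str, str]]:
    # Turn real user interactions into (input, expected) pairs.
    return [(log["input"], log["accepted_output"])
            for log in usage_logs if log.get("accepted_output")]

def coverage(product_answers, cases: list[tuple[str, str]]) -> float:
    # Fraction of real-world cases a product handles. A clone without the
    # underlying logs can't compute this, let alone optimize for it.
    hits = sum(1 for inp, expected in cases
               if expected.lower() in product_answers(inp).lower())
    return hits / max(len(cases), 1)
</code></pre>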
I predict this article will prove embarrassingly wrong. The moat of models is compute; wrappers are just software engineering, one of the first things AI in general will commoditize.
Probably worth thinking more about what we mean by "wrapper". A year or so ago, it often meant a prompt builder UI. There's no moat for that. But if in 2025 a "wrapper" means a proprietary data source with a pipeline to deliver it along with some proprietary orchestration along with the UI (and the LLM API being called), then it likely warrants looking at it differently.
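In code, that 2025-style "wrapper" is roughly the shape below. Everything named here is illustrative, not any particular product's architecture; <code>data_source</code> is assumed to expose some search method over the proprietary data.
<pre><code>
# Rough shape of a 2025 "wrapper": proprietary data pipeline + orchestration
# + UI around a commodity LLM API. All names are illustrative.

class WrapperApp:
    def __init__(self, data_source, llm_complete):
        self.data_source = data_source  # proprietary pipeline: the moat
        self.llm = llm_complete         # interchangeable model API

    def handle(self, query: str) -> str:
        # Orchestration: retrieve proprietary context, then ground the
        # model's answer in it.
        docs = self.data_source.search(query)
        context = "\n".join(d["text"] for d in docs)
        return self.llm(f"Context:\n{context}\n\nAnswer the question: {query}")
</code></pre>
Swap out the model API and this app barely changes; swap out the data source and it stops working, which is the whole point.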
If you believe a prompt of the form "hey, GPT A, make yourself behave like GPT B" can be articulated as a Chinese room, I put it to you that the amount of missing information between what informs A and what informs B makes this a mountain of work.<p>Do you think it's less work than just making GPT B, and why? What quality of the system (induction aside) makes this simply additive?<p>My strawman reads as "wishing for fairytales", basically. But that strawman is, to me, the reductive intent inside the article: "ask a GPT to perform like another, later, different GPT" epitomizes magical thinking.<p>Why bother training at all if the recursive application were that simple? Because it's not that simple.
It's truly interesting to see this come full circle from 2023, when I started writing about the role of the AI Engineer, to now this: <a href="https://www.latent.space/p/gpt-wrappers" rel="nofollow">https://www.latent.space/p/gpt-wrappers</a>