I agree with the sentiment, but if ChatGPT plugins make anything clear, it's that standardizing a protocol is unnecessary when the tooling is designed to be consumed by LLMs. They can just figure it out!

Even if you have ten different plugin-description formats from ten different teams, you're not writing declarative glue code for each one; you hand them all to the LLM the same way, say "you figure it out," and it does.

Fine-tuning a model for specific schemas (as the author suggests at one point) should be entirely unnecessary, in my experience. You just need a model on par with gpt-3.5-turbo, which we'll surely see in open source in no time!
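To make that concrete, here's a minimal sketch of the "you figure it out" approach, using the pre-1.0 openai Python library. The three plugin formats and the JSON tool-call convention are made up for illustration, not real schemas:

  import json
  import openai

  # Three plugins, each described in a deliberately different, ad-hoc
  # format. These are hypothetical stand-ins, not real plugin schemas.
  PLUGIN_DESCRIPTIONS = """
  Plugin A (OpenAPI-ish YAML):
    paths:
      /weather:
        get:
          params: {city: string}

  Plugin B (plain prose):
    Call stock_quote(ticker) to get the latest price for a ticker symbol.

  Plugin C (ad-hoc JSON):
    {"name": "translate", "args": ["text", "target_lang"]}
  """

  SYSTEM_PROMPT = f"""You can use these tools. Their descriptions use
  different formats -- figure each one out. To call a tool, reply with
  a single JSON object: {{"tool": <name>, "args": {{...}}}}.

  {PLUGIN_DESCRIPTIONS}"""

  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[
          {"role": "system", "content": SYSTEM_PROMPT},
          {"role": "user", "content": "What's the weather in Oslo?"},
      ],
  )

  # The model typically replies with something like
  # {"tool": "weather", "args": {"city": "Oslo"}} despite never having
  # seen a standardized plugin spec. Real code would validate this
  # before dispatching, since the model can occasionally answer in prose.
  call = json.loads(response.choices[0].message.content)
  print(call["tool"], call["args"])

No per-plugin integration code, no shared schema: the model reads three unrelated formats and still produces a dispatchable call.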