Over the past few days we investigated the main LLM providers and observed up to a 40% difference in average speed (tokens/second) among leading offerings like GPT-4.
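For anyone wanting to reproduce this kind of comparison: the tokens/second figure is just tokens emitted divided by elapsed wall-clock time. Here is a minimal sketch of the metric, assuming you record the stream's start and end times and count the emitted tokens yourself (the numbers in the example are illustrative, not from our measurements):

```python
def tokens_per_second(token_count: int, start: float, end: float) -> float:
    """Average generation speed: tokens emitted over elapsed wall-clock seconds."""
    elapsed = end - start
    if elapsed <= 0:
        raise ValueError("end time must be after start time")
    return token_count / elapsed

# e.g. 512 tokens streamed over 8 seconds of wall-clock time
print(tokens_per_second(512, 0.0, 8.0))  # -> 64.0
```

In practice you would take `start` and `end` from `time.monotonic()` around the streaming response, and average over many requests, since per-request speed varies with load.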
Looking ahead, I suspect that as AI becomes even more ubiquitous and mainstream, AI service providers will offer various levels of analysis at different price points. E.g. the cheapest tier would provide reliably accurate answers, but only for simple queries that consume little compute.

I also envision the all-too-common race-to-the-bottom scenario, where services simply tune their responses to use the least compute possible while harvesting and capitalizing on their users' data.