Our team operates several real-time AI applications where both latency (TTFT, time to first token) and throughput (TPS, tokens per second) are critical to most of our users. Unfortunately, nearly all of the major LLM APIs suffer from inconsistent performance.<p>To address this, I built YPerf, a simple webpage that monitors the performance of inference APIs. I hope it helps you pick better-performing models and discover new trending ones.<p>The data is sourced from OpenRouter, an excellent provider that aggregates LLM API services.
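For anyone unfamiliar with the two metrics: TTFT is the delay between sending a request and receiving the first streamed token, and TPS is the rate at which tokens arrive after that. A minimal sketch of how they could be computed from token arrival timestamps (illustrative only, not YPerf's actual measurement code):

```python
def compute_metrics(request_time, token_times):
    """Compute (TTFT, TPS) from a request timestamp and token arrival times.

    request_time: when the request was sent, in seconds.
    token_times: monotonically increasing arrival times of streamed tokens.
    """
    if not token_times:
        raise ValueError("no tokens received")
    ttft = token_times[0] - request_time  # time to first token, seconds
    gen_window = token_times[-1] - token_times[0]  # generation time after first token
    # tokens generated per second over the generation window; 0 if only one token
    tps = (len(token_times) - 1) / gen_window if gen_window > 0 else 0.0
    return ttft, tps

# example: request at t=0, first token at 0.5s, then one token every 0.1s
ttft, tps = compute_metrics(0.0, [0.5, 0.6, 0.7, 0.8])
```

In practice the timestamps would come from a streaming API client recording `time.monotonic()` on each received chunk.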