
Show HN: YPerf – Monitor LLM Inference API Performance

2 points | by xjconlyme | 5 months ago
Our team operates several real-time AI applications, where both latency (TTFT) and throughput (TPS) are critical to most of our users. Unfortunately, nearly all of the major LLM APIs lack consistent stability.

To address this, I developed YPerf, a simple webpage designed to monitor the performance of inference APIs. I hope it helps you select better models and discover new trending ones as well.

The data is sourced from OpenRouter, an excellent provider that aggregates LLM API services.
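The post doesn't describe how YPerf computes its metrics, but the two quantities it tracks are commonly derived from the timing of a streaming response: TTFT is the delay from sending the request to receiving the first chunk, and TPS is tokens generated per unit time after streaming begins. A minimal sketch (function and variable names are hypothetical, not from YPerf):

```python
from typing import Iterable, Tuple

def measure_stream(
    chunks: Iterable[Tuple[float, int]], request_start: float
) -> Tuple[float, float]:
    """Compute (TTFT, TPS) from a streamed LLM response.

    `chunks` is a sequence of (arrival_time_seconds, token_count) pairs,
    e.g. recorded with time.monotonic() as each SSE chunk arrives.
    TTFT = time from request start to the first chunk.
    TPS  = tokens per second over the generation window
           (first chunk to last chunk).
    """
    first_time = None
    last_time = None
    total_tokens = 0
    for arrival, n_tokens in chunks:
        if first_time is None:
            first_time = arrival
        last_time = arrival
        total_tokens += n_tokens
    if first_time is None:
        raise ValueError("empty stream: no chunks received")

    ttft = first_time - request_start
    window = last_time - first_time
    # If everything arrived in one chunk, the window is zero;
    # fall back to reporting the raw token count.
    tps = total_tokens / window if window > 0 else float(total_tokens)
    return ttft, tps
```

In practice the chunk timestamps would come from iterating over a streaming API response (for example, OpenRouter's SSE stream) and counting tokens per chunk; the arithmetic above is independent of the provider.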

1 comment

Oras, 5 months ago
Nice one. It would be great to have filtering. For example, I want to check the TPS of Llama 3.3 across multiple providers.