I've been playing around with deploying different large models on various platforms (HF, AWS, etc.) for testing and have been underwhelmed by the inference speeds I've been able to achieve. They're fine (though considerably slower than OpenAI), but nothing like the frighteningly fast self-hosted speeds other people describe.<p>For reference, I get responses in:
~1200ms from gpt-3.5-turbo,
~1600ms from gpt-4o
~5000ms from llama-70b-instruct on a dedicated HF endpoint<p>I've been using a standard 4x Nvidia A100 (320 GB) instance for these deployments, so I'm now wondering: am I missing something, or were my expectations just unreasonable? Curious to hear your thoughts, experiences, and tips/tricks, thanks.
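For context, a minimal sketch of the kind of end-to-end timing I'm talking about (not my exact script; the HF endpoint URL and token are placeholders read from the environment, same prompt and max tokens for both APIs, one warm-up call each):
<pre><code>import os
import time

import requests
from openai import OpenAI

PROMPT = "Summarize the plot of Hamlet in two sentences."
MAX_TOKENS = 64

# OpenAI: reads OPENAI_API_KEY from the environment.
client = OpenAI()

def time_openai(model: str) -> float:
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=MAX_TOKENS,
    )
    return time.perf_counter() - start

# Hugging Face dedicated endpoint (TGI-style text-generation API).
HF_ENDPOINT_URL = os.environ["HF_ENDPOINT_URL"]  # placeholder: your endpoint URL
HF_TOKEN = os.environ["HF_TOKEN"]                # placeholder: your HF token

def time_hf_endpoint() -> float:
    start = time.perf_counter()
    resp = requests.post(
        HF_ENDPOINT_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": PROMPT, "parameters": {"max_new_tokens": MAX_TOKENS}},
        timeout=60,
    )
    resp.raise_for_status()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Warm-up calls so cold starts and connection setup don't dominate.
    time_openai("gpt-3.5-turbo")
    time_hf_endpoint()

    print(f"gpt-3.5-turbo: {time_openai('gpt-3.5-turbo') * 1000:.0f} ms")
    print(f"gpt-4o:        {time_openai('gpt-4o') * 1000:.0f} ms")
    print(f"HF endpoint:   {time_hf_endpoint() * 1000:.0f} ms")
</code></pre>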
You can try the Groq API for faster inference. They use custom hardware to speed up inference. Supported open models can be found here: <a href="https://console.groq.com/docs/models" rel="nofollow">https://console.groq.com/docs/models</a> (includes llama-70b)
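Their endpoint is OpenAI-compatible, so trying it is mostly a base-URL swap. A minimal sketch; the model id below is a placeholder, check the models page linked above for the current llama-70b name:
<pre><code>import os

from openai import OpenAI

# Groq exposes an OpenAI-compatible API, so the standard client works
# when pointed at their base URL.
client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

resp = client.chat.completions.create(
    model="llama3-70b-8192",  # placeholder: use whatever llama-70b id the models page lists
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
</code></pre>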
We are getting a forward pass time of ~100ms on Meta's original Llama2 70B (float16, batch size 8) PyTorch implementation on 8xA100. Those results are very underwhelming in terms of fully utilizing the GPU FLOPs. If we are doing something wrong, let me know.<p>The vLLM implementation is much faster, I think 50ms or better on either 4 or 8 A100s; I forget the exact number.
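For anyone wanting to reproduce the vLLM side, a minimal sketch; it assumes the Llama-2 70B chat weights from the HF hub, and it times batched end-to-end generation rather than a single forward pass, so the numbers aren't directly comparable to the ~100ms figure above. Set tensor_parallel_size to however many A100s you're splitting across:
<pre><code>import time

from vllm import LLM, SamplingParams

# Assumes the Llama-2 70B chat weights from the HF hub; tensor_parallel_size
# should match the number of GPUs the model is sharded across.
llm = LLM(
    model="meta-llama/Llama-2-70b-chat-hf",
    tensor_parallel_size=8,
    dtype="float16",
)
sampling = SamplingParams(max_tokens=64, temperature=0.0)

prompts = ["Summarize the plot of Hamlet in two sentences."] * 8  # batch of 8

llm.generate(prompts, sampling)  # warm-up

start = time.perf_counter()
outputs = llm.generate(prompts, sampling)
elapsed = time.perf_counter() - start

print(f"batch of {len(prompts)} prompts generated in {elapsed * 1000:.0f} ms total")
for out in outputs:
    print(out.outputs[0].text[:80])
</code></pre>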
TensorRT-LLM with Triton Inference Server is the fastest in Nvidia land.<p><a href="https://github.com/triton-inference-server/tensorrtllm_backend">https://github.com/triton-inference-server/tensorrtllm_backe...</a>
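Once an engine is built and served through Triton, querying it is a plain HTTP call to the generate endpoint. A minimal sketch assuming the example "ensemble" model from that repo's quickstart and a local server on port 8000; the field names ("text_input", "max_tokens", "text_output") follow that example config and may differ if your model repository is set up differently:
<pre><code>import requests

# Placeholder host and model name; adjust to your Triton deployment.
TRITON_URL = "http://localhost:8000/v2/models/ensemble/generate"

resp = requests.post(
    TRITON_URL,
    json={
        "text_input": "Summarize the plot of Hamlet in two sentences.",
        "max_tokens": 64,
        "bad_words": "",
        "stop_words": "",
    },
    timeout=60,
)
resp.raise_for_status()

# The example ensemble returns the generated text under "text_output".
print(resp.json()["text_output"])
</code></pre>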