TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


CTranslate2: An efficient inference engine for Transformer models

2 points by wsxiaoys almost 2 years ago

1 comment

wsxiaoys almost 2 years ago
A less hyped inference engine with INT8/FP16 inference support on both CPU and GPU (CUDA).

Supported models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, LLaMA, T5, Whisper

(Found this library during my research on alternatives to Triton/FasterTransformer in Tabby: https://github.com/TabbyML/tabby)
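A minimal sketch of what quantized generation with CTranslate2 looks like, assuming a model has already been converted to the CTranslate2 format (e.g. with `ct2-transformers-converter --model gpt2 --output_dir ct2_gpt2 --quantization int8`); the `ct2_gpt2` directory name and the start token are illustrative assumptions:

```python
import os

def generate(model_dir: str, start_tokens: list, max_length: int = 32):
    """Run INT8 generation with CTranslate2 on a converted decoder model.

    Sketch only: assumes `model_dir` holds a model converted with
    ct2-transformers-converter, and that `start_tokens` are already
    tokenized (CTranslate2 works on token strings, not raw text).
    """
    import ctranslate2  # pip install ctranslate2

    # compute_type="int8" enables quantized inference on CPU;
    # pass device="cuda" (and e.g. compute_type="float16") for GPU.
    generator = ctranslate2.Generator(
        model_dir, device="cpu", compute_type="int8"
    )
    results = generator.generate_batch([start_tokens], max_length=max_length)
    return results[0].sequences[0]

# Only attempt generation if a converted model directory is present.
if os.path.isdir("ct2_gpt2"):
    print(generate("ct2_gpt2", ["<|endoftext|>"]))
```

Conversion is a one-time offline step; at serving time the quantized weights are loaded directly, which is what keeps the engine lightweight compared to Triton/FasterTransformer deployments.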