
Zero-Shot Text Classification on a low-end CPU-only machine?

8 points | by backend-dev-33 | 8 months ago

I want to do zero-shot text classification, either with the model [1] (711 MB) or with something similar, and I want to achieve high throughput in classification requests per second. Classification will run on low-end hardware: a Hetzner [2] machine without a GPU (Hetzner is great, reliable, and cheap; they just do not have GPU machines), something like:

* CCX13: Dedicated vCPU, 2 vCPU, 8 GB RAM

* CX32: Shared vCPU, 4 vCPU, 8 GB RAM

Now there are multiple options for deploying and serving LLMs:

* lmdeploy

* text-generation-inference

* TensorRT-LLM

* vllm

There are more and more new frameworks for this, and I am a bit lost. What would you suggest as the best option for deploying the above-listed model on no-GPU hardware?

[1] https://huggingface.co/MoritzLaurer/roberta-large-zeroshot-v2.0-c

[2] https://www.hetzner.com/cloud/
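For context, the linked model is an NLI-based zero-shot classifier: each candidate label is turned into a hypothesis sentence, the model scores entailment of that hypothesis against the input text, and the entailment scores are normalized across labels. The sketch below shows only that mechanism; the model forward pass is stubbed as an `entailment_score` callable and the hypothesis template is illustrative, not the model's exact prompt. Note this implies one forward pass per candidate label, which is why label count matters as much as hardware for requests/s.

```python
import math

def build_pairs(text, labels, template="This example is about {}."):
    # One (premise, hypothesis) pair per candidate label.
    return [(text, template.format(label)) for label in labels]

def classify(text, labels, entailment_score):
    """Rank labels by softmax over per-label entailment logits.

    `entailment_score(premise, hypothesis) -> float` stands in for the
    NLI model's entailment logit; swap in a real model call in practice.
    """
    pairs = build_pairs(text, labels)
    logits = [entailment_score(p, h) for p, h in pairs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    total = sum(exps)
    return sorted(zip(labels, (e / total for e in exps)),
                  key=lambda pair: -pair[1])
```

Because each label costs one forward pass of a 711 MB model, 20 labels on a 2-vCPU box means 20 large-model inferences per request.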

5 comments

kkielhofner | 8 months ago

The model you linked is not an LLM, either by architecture or by size.

A few thoughts:

1) Anything TensorRT isn't an option because it requires Nvidia GPUs.

2) The serving frameworks you listed likely don't support this model's architecture, and even if they did, they have varying levels of CPU support.

3) I'm not terribly familiar with Hetzner, but those instance types seem very low-end.

The model you linked has already been converted to ONNX. Your best bet (probably) is to take the ONNX model and load it in Triton Inference Server. Triton is focused on Nvidia/CUDA, but if it doesn't find an Nvidia GPU it will load the model(s) on CPU. You can then do some performance testing in terms of requests/s, but prepare not to be impressed...

Then you could look at (probably) int8 quantization of the model via the variety of available approaches (ONNX itself, Intel Neural Compressor, etc.). With Triton specifically you should also look at the OpenVINO CPU execution accelerator support. You will need to check whether any of these dramatically impact the quality of the model.

Overall I think "good, fast, cheap: pick two" definitely applies here, and even implementing what I've described is a fairly significant amount of development effort.
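Whichever serving path comes out of this (Triton, plain ONNX Runtime, a quantized variant), the requests/s testing mentioned above can be done with a small harness. This is a minimal sketch that assumes only that the classifier is a callable taking a single string; it takes the best of several timed passes to reduce noise:

```python
import time

def measure_throughput(classify_fn, texts, repeats=3):
    """Return requests per second for `classify_fn` over `texts`.

    `classify_fn` is whatever serving path you end up with (an ONNX
    Runtime session wrapper, a Triton client, a quantized model); the
    harness only assumes it accepts one string per call. Takes the
    fastest of `repeats` passes to reduce scheduling noise.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for text in texts:
            classify_fn(text)
        best = min(best, time.perf_counter() - start)
    return len(texts) / best
```

Run it once on the unquantized ONNX model and once on the int8 variant to see whether quantization actually buys you throughput before you start measuring its accuracy cost.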
pilotneko | 8 months ago

Hugging Face does maintain a package named Text Embeddings Inference (TEI) with GPU/CPU-optimized container images. While I have only used it for hosting embedding models, it does appear to support RoBERTa-architecture classifiers (specifically sentiment analysis).

https://github.com/huggingface/text-embeddings-inference

You can always run a zero-shot pipeline in HF behind a simple Flask/FastAPI application.
Terretta | 8 months ago

Have you considered doing it *off* machine?

https://github.com/GoogleCloudPlatform/cloud-shell-tutorials/blob/master/ml/cloud-nl-text-classification/tutorial.md

https://cloud.google.com/natural-language/docs/samples/language-classify-text-tutorial-classify?hl=en

I'd suggest v2:

https://cloud.google.com/natural-language/docs/classifying-text

Here are the built-in content categories (which feel consumer-advertising oriented, natch), but it handles other classifications as well:

https://cloud.google.com/natural-language/docs/categories#categories_version_2
backend-dev-33 | 8 months ago

UPDATE: how to do the same classification task using some hosting provider with a GPU? Let us discuss it here -> https://news.ycombinator.com/item?id=41768088
leeeeeepw | 8 months ago
Setfit