I want to do zero-shot text classification with this model [1] or with something similar (model size: 711 MB "model.safetensors" file, 1.42 GB "model.onnx" file).
It works on my dev machine with a 4 GB GPU, and will probably work on a 2 GB GPU too.

Is there some hosting provider for this?

My app does batch processing, so I will only need access to this model a few times per day.
Something like this:
start processing
do some text classification
stop processing
Imagine I do this procedure roughly 3 times per day; I don't need the model the rest of the time. I could probably start/stop a machine via API to save costs...

[1] https://huggingface.co/MoritzLaurer/roberta-large-zeroshot-v2.0-c
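For context, the classification step itself is only a few lines with the standard transformers zero-shot pipeline. This is just a rough sketch; the example texts, candidate labels, and device setting are placeholders:

    from transformers import pipeline

    # Load the zero-shot classifier; device=0 uses the first GPU, device=-1 falls back to CPU.
    classifier = pipeline(
        "zero-shot-classification",
        model="MoritzLaurer/roberta-large-zeroshot-v2.0-c",
        device=0,
    )

    # Placeholder batch; in my app these come from the batch job.
    texts = [
        "The invoice was sent to the wrong address.",
        "The app crashes whenever I open the settings page.",
    ]
    labels = ["billing", "shipping", "technical issue"]

    for result in classifier(texts, candidate_labels=labels, multi_label=False):
        # labels/scores are sorted by score, so index 0 is the top prediction.
        print(result["sequence"], "->", result["labels"][0], round(result["scores"][0], 3))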
I was just told about this thing:
https://aws.amazon.com/ec2/instance-types/g4/

One NVIDIA T4 GPU, 16 GB RAM, and since this is an EC2 instance, I can install anything.
All this for $0.526/hour.

Do you see any hidden gotchas?
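My plan for the start/stop part would be something like this with boto3 (a rough sketch: the instance ID and region are placeholders for an already-provisioned g4dn box with the model baked in, and the actual classification call is left out):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
    INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: the pre-built GPU instance

    def run_batch():
        # Boot the GPU instance only for the duration of the batch job.
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
        try:
            pass  # SSH in / call the inference service and run the classification batch here
        finally:
            # Stop (not terminate) so the EBS volume with the model stays around;
            # you keep paying for storage, but not for instance hours.
            ec2.stop_instances(InstanceIds=[INSTANCE_ID])
            ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    if __name__ == "__main__":
        run_batch()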