Question behind the question: I'm building a startup offering an "AI Compute Box" — essentially hardware preloaded with a fine-tuned LLM, deployed on-premises at enterprises.
I am looking to understand how enterprises typically run such models today: (a) installing models on their own racks, (b) deploying on AWS/Azure cloud, or (c) using private/dedicated instances offered by AWS or Azure.