AI infrastructure moves quickly and best practices are constantly evolving, so I'm wondering: what is HN's opinion on the best way to host custom AI models for inference in Q1 2024?
A hard question to answer, because we don't know your use case, what kind of models you're using, whether privacy is a consideration, and so on. It's one thing to do super low-power inference for wake words or the like on the edge, another to run a model on a customer's phone or PC, and another again to serve a huge model that needs a cluster.