Hey HN! I'm working on a talk/blog post on a related topic and was curious what the situation is for people here. I think each case is a bit different, so I don't want to constrain people too much.

If you deploy ML to production in your group/team/company – what does "production" mean for you?

Examples:
- "We run a model once a week that predicts some stuff and stores it in a table, then the customer queries it"
- "We create an inference endpoint on some cloud resource, which our product/users use to predict poses in videos"
- "I wish I knew, we're still figuring it out"
- "We deploy a model as part of a larger pipeline in a system of microservices (and other buzzwords)"<p>Also, if you are in an extra-sharing mood – in your version of production, were there any counter-intuitive things you learned when you first set up the pipeline?<p>Cheers! Enjoy the picture Dall-E2 made for you of a cat asking for upvotes in return.
https://labs.openai.com/s/2enTplV9c9OxU7lyqhyIjXlN
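
P.S. To make the first example a bit more concrete, here's a minimal sketch of that "weekly batch scoring" pattern – load a trained model, score new rows, write predictions back to a table for the customer to query. The connection string, table/column names, and model file are placeholders for illustration, not anything in particular:

    # Weekly batch job: score new rows and publish predictions to a table.
    import joblib
    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection string and model path.
    engine = create_engine("postgresql://user:pass@db-host/analytics")
    model = joblib.load("models/churn_model.pkl")  # trained and versioned offline

    # Pull the rows that need fresh predictions.
    features = pd.read_sql(
        "SELECT customer_id, f1, f2, f3 FROM customer_features", engine
    )

    # Score and keep only what downstream consumers actually query.
    features["churn_score"] = model.predict_proba(features[["f1", "f2", "f3"]])[:, 1]
    features[["customer_id", "churn_score"]].to_sql(
        "churn_predictions", engine, if_exists="replace", index=False
    )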