This is very confusing and meandering. It gives flow charts and lists of steps that don't map to my experience building deep learning models at scale, and it spends a strange amount of time passive-aggressively dismissing Lua Torch and extolling virtues of TensorFlow that aren't very important.

As with all of these purported pipelining systems, I'm skeptical, and I'm happy to let a bunch of other people deal with the headaches of making it adequately general for a few years before I'll even start caring about grokking it for my use cases.

In the meantime, building my own build tooling, data-pretreatment tooling, and deployment tooling is pretty valuable: it forces me to understand the business considerations and makes sure all my modeling & experimentation aren't just time-wasting ivory tower projects, particularly when it comes to customizing performance characteristics on a case-by-case basis, free to design the deployed system without being constrained to a particular serving architecture.

It also makes me very uninterested in applying to work for the Cortex team. Even though the article talks about DeepBird v2 as a way to free ML engineers to do more research, it seems pretty obvious that the platform carries a huge surface area of maintenance and feature management. Your job is probably going to be *less* about research, which is scarce work that people compete over anyway.

It's possibly attractive for people who just like deep C++ platform building, but that's an internal drive not often found in people who want to solve business problems with ML models.