This is fun. I managed a data entry project at a previous employer that was kind of like this.<p>We had non-technical subject-matter experts that I sneakily got to write several thousand unit tests. We had some quantitative and qualitative data we wanted annotated on a substantial dataset of scanned documents. I built a tool that let them type natural-language descriptors for everything, then converted those to JavaScript unit tests by way of some regex and CoffeeScript.<p>It was possible to bootstrap classifiers that way! Seems like kind of old tech in an era of unsupervised megaML, but I've not been using that part of my resume for a year or so.
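The descriptor-to-test trick above can be sketched in a few lines. The original pipeline emitted JavaScript unit tests via CoffeeScript; this is a minimal Python sketch of the same idea, with made-up descriptor formats (the real rule grammar isn't described in the comment): each regex pattern maps a natural-language rule to a predicate over a record.

```python
import re

# Hypothetical rule formats the subject-matter experts might type.
# Each entry pairs a regex with a builder that turns the captured
# groups into a predicate over a record (a dict here).
RULES = [
    # "FIELD must be greater than N" -> numeric lower-bound check
    (re.compile(r"(\w+) must be greater than (\d+)"),
     lambda field, n: lambda record: record[field] > int(n)),
    # "FIELD must not be empty" -> presence check
    (re.compile(r"(\w+) must not be empty"),
     lambda field: lambda record: bool(record[field])),
]

def compile_descriptor(text):
    """Turn one natural-language descriptor into a test predicate."""
    for pattern, build in RULES:
        m = pattern.fullmatch(text.strip())
        if m:
            return build(*m.groups())
    raise ValueError(f"unrecognized descriptor: {text!r}")

check = compile_descriptor("amount must be greater than 100")
print(check({"amount": 150}))  # True
print(check({"amount": 50}))   # False
```

Each compiled predicate can then be run against every annotated record, which is what makes a few thousand of these usable as labeled training signal for bootstrapping a classifier.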
> Testing model implementation details such as statistical and mathematical soundness are not part of the TFML strategy. Such details should be tested separately and are specific to the family of the model under consideration.<p>For anyone who clicked this thinking this would be related to model validation, it ain't. The article is promoting test-driven development for systems that have an ML component.
I think you missed that TDD here stands for Test-Driven Development.<p>The article mentions nothing about specific methodologies for testing in an ML context.<p>Maybe in the next article? :)
Testing model code tends to be very difficult unless you design your training loop with lots of abstractions and dependency injection, which makes the code less explicit and harder to understand. For example, look at the TensorFlow Estimator framework: absolutely awful to use, but well tested.
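The trade-off above can be made concrete with a minimal sketch (names and structure are my own, not from any particular framework): the loop receives its data source, train step, and logging hook as parameters, so a test can substitute trivial fakes, but a reader can no longer see from the loop itself what actually runs inside it.

```python
from typing import Callable, Iterable

def train(
    batches: Iterable,                # injected data source
    step: Callable[[object], float],  # injected forward/backward step, returns loss
    on_loss: Callable[[float], None], # injected logging hook
) -> float:
    """Run one pass over the batches; return the final loss."""
    last_loss = 0.0
    for batch in batches:
        last_loss = step(batch)
        on_loss(last_loss)
    return last_loss

# In a test, every dependency is a trivial fake:
losses = []
final = train(
    batches=[1, 2, 3],
    step=lambda b: 1.0 / b,  # fake "loss" that shrinks each batch
    on_loss=losses.append,
)
```

The loop is now testable without a model, an optimizer, or a filesystem, but the price is exactly the indirection the comment complains about: to learn what a real training run does, you have to chase down whatever was injected.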