Hi, I am having trouble explaining the performance of my ML models to my team members and employer.

People in software companies tend to think that since machines can perform perfect logical and mathematical operations, anything that runs on them will inherit this property, including AI.

Currently, no matter how hard I try to improve my model (generating more data, applying different augmentations, changing the model architecture, etc.), they keep trying to find input data that proves the model is not "generalized" enough.
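(Not the OP's method, just one way to frame the conversation.) One concrete tactic is to report held-out error with an uncertainty estimate, so a handful of cherry-picked failure cases can be weighed against the measured rate. A minimal sketch, assuming NumPy and using dummy arrays as placeholders for your real test labels and predictions:

    # Sketch: held-out error rate with a 95% confidence interval (assumes NumPy).
    # y_true / y_pred below are dummy placeholders for real test-set output.
    import numpy as np

    def error_rate_with_ci(y_true, y_pred):
        """Return the test error rate and a 95% normal-approximation margin."""
        errors = (y_true != y_pred).astype(float)
        n = len(errors)
        p = errors.mean()                          # observed error rate
        margin = 1.96 * np.sqrt(p * (1 - p) / n)   # binomial normal approximation
        return p, margin

    # Dummy data standing in for a model with roughly 10% error:
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)

    p, margin = error_rate_with_ci(y_true, y_pred)
    print(f"test error: {p:.1%} +/- {margin:.1%} (95% CI, n={len(y_true)})")

A single adversarial counterexample doesn't move that interval much, which is exactly the point you are trying to make.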
This is a very good visual introduction to machine learning created by the team at R2D3, and it helps even non-technical people understand how ML works and that training a model means balancing bias and variance error rates. Part 1 [0] covers what decision trees are, and Part 2 [1] covers the bias-variance tradeoff.

[0] https://r2d3.us/visual-intro-to-machine-learning-part-1/

[1] https://r2d3.us/visual-intro-to-machine-learning-part-2/
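To make the tradeoff concrete in code rather than visuals, here is a minimal sketch (assuming scikit-learn and its make_moons toy dataset, which are not part of the R2D3 articles) that sweeps decision tree depth and prints train vs. held-out error. Shallow trees underfit (high bias, both errors high); very deep trees overfit (train error near zero, test error rising again):

    # Sketch: bias-variance tradeoff via decision tree depth (assumes scikit-learn).
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Noisy toy dataset so a deep tree can overfit.
    X, y = make_moons(n_samples=2000, noise=0.35, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for depth in (1, 2, 4, 8, 16, None):
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        tree.fit(X_train, y_train)
        train_err = 1 - tree.score(X_train, y_train)
        test_err = 1 - tree.score(X_test, y_test)
        # Low depth: both errors high (bias). Unlimited depth: train error ~0,
        # test error climbs back up (variance).
        print(f"depth={depth}: train error {train_err:.3f}, test error {test_err:.3f}")

The gap between the two curves is the thing your colleagues keep rediscovering by hand when they hunt for failing inputs.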
What's there to explain? Tell people that it isn't perfect; it sounds like they already know this.

You could propose sharing a proportional number of correctly predicted examples for every incorrect example they come up with. (Okay, don't actually do this.)

Who is criticising your work? A manager, colleague, stakeholder, or C-level executive? Does it matter that they are criticising it? What happens if you just shrug and keep doing what you're doing?

The whole thing looks like a communication/political problem, not a technical one, and it's hard to give advice without knowing the specifics.