> Naive Bayes classifiers, a family of classifiers that are based on the popular Bayes’ probability theorem, are known for creating simple yet well performing models, especially in the fields of document classification and disease prediction.

But this is simply not true! They _don't_ perform well. There's really no reason to teach people Naive Bayes anymore, except as a footnote when explaining log-linear/MaxEnt models.

MaxEnt is not so complicated, and it makes Naive Bayes fully obsolete. And if MaxEnt is in some way too complicated or expensive, the Averaged Perceptron is generally much better than NB, can be implemented in 50 lines of Python, and has far fewer hyper-parameters.

A common way for machine learning courses to suck is to teach students a bunch of crap, obsolete algorithms they should never use, simply for historical reasons: they used to be in the course, so they stay in the course.
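To back up the "50 lines of Python" claim, here is one minimal sketch of an averaged perceptron for binary classification (all names are illustrative, not from any particular library). Features are dicts mapping feature name to value, labels are +1/-1, and the averaging is done lazily so each update only touches the features that fired:

```python
from collections import defaultdict

class AveragedPerceptron:
    # Binary averaged perceptron; features are {name: value} dicts, labels are +1/-1.
    def __init__(self):
        self.weights = defaultdict(float)   # current weight vector
        self.totals = defaultdict(float)    # running sum of weights, for averaging
        self.timestamps = defaultdict(int)  # step at which each weight last changed
        self.step = 0

    def score(self, features, weights=None):
        w = self.weights if weights is None else weights
        return sum(w.get(f, 0.0) * v for f, v in features.items())

    def _update(self, features, direction):
        # Lazy averaging: credit each weight for the steps it sat unchanged,
        # then apply the perceptron update.
        for f, v in features.items():
            self.totals[f] += (self.step - self.timestamps[f]) * self.weights[f]
            self.timestamps[f] = self.step
            self.weights[f] += direction * v

    def train(self, data, epochs=10):
        for _ in range(epochs):
            for features, label in data:
                self.step += 1
                pred = 1 if self.score(features) > 0 else -1
                if pred != label:
                    self._update(features, label)
        # Finalize: flush remaining credit and divide by total steps.
        self.avg_weights = {}
        for f, w in self.weights.items():
            total = self.totals[f] + (self.step - self.timestamps[f]) * w
            self.avg_weights[f] = total / self.step

    def predict(self, features):
        return 1 if self.score(features, self.avg_weights) > 0 else -1
```

The averaging is the part that matters: predicting with the mean of all intermediate weight vectors, rather than the final one, smooths out the noise from late updates and is what makes this competitive. Note the only hyper-parameter is the epoch count.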