
Do We Need Hundreds of Classifiers to Solve Real World Classification Problems? [pdf]

108 points by msherry over 10 years ago

3 comments

benhamner over 10 years ago
This is consistent with our experience running hundreds of Kaggle competitions: for most classification problems, some variation on ensembled decision trees (random forests, gradient boosted machines, etc.) performs the best. This is typically in conjunction with clever data processing, feature selection, and internal validation.

One key exception is where the data is richly and hierarchically structured. Text, speech, and visual data fall under this category. In many cases here, variations of neural networks (deep neural nets/CNNs/RNNs/etc.) provide very dramatic improvements.

This study does have a couple of limitations. The datasets used are very small and form a very biased selection of real-world applications of machine learning. It also doesn't consider ensembles of different model types (which I'd expect to provide a consistent but marginal improvement over the results here).
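A minimal sketch of the kind of tree-ensemble baseline with internal validation that this comment describes, assuming scikit-learn is available; the dataset, model, and hyperparameters are illustrative, not from the paper:

```python
# Illustrative baseline: a random forest evaluated with cross-validation,
# the "ensembled decision trees + internal validation" recipe described above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Small built-in dataset, standing in for a typical tabular problem.
X, y = load_breast_cancer(return_X_y=True)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```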
smu3l over 10 years ago
Here's some commentary on the article: http://www.win-vector.com/blog/2014/12/a-comment-on-preparing-data-for-classifiers/
Comment #8720779 not loaded
Comment #8721656 not loaded
hooande over 10 years ago
I think the lesson here has been evident for some time: there is no one best classifier, only classifiers that perform better on particular problems. Random forests work well because, as ensemble methods, they can be optimized to explore many different aspects of a data set, but they don't perform significantly better than SVMs or most of the other methods.

In practice the solution is often to *use a combination of multiple methods*: trees, support vector machines, multilayer perceptrons, Gaussian kernels, bagging, and boosting. In most applications you don't have to choose. Combining the results of all of them with a weighted average will outperform any of them individually, and in most cases the whole is greater than the sum of its parts. Each classifier fits a given data set differently and provides its own perspective on the prediction problem. The goal isn't to choose the best one, but to find an ensemble of methods that best explains the patterns and relationships in the data (a minimal sketch of such an ensemble follows below).

There are many cases where resource and speed limitations dictate that only one classifier can be tuned and implemented, and in those situations it's good to know which one is 'best'. But when it's possible to build an ensemble out of many different methods, it's almost always the best way to go.
Comment #8721180 not loaded
Comment #8721751 not loaded
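A minimal sketch of the weighted-average ensemble hooande describes, assuming scikit-learn; the base models, weights, and dataset are illustrative choices, not from the thread:

```python
# Illustrative heterogeneous ensemble: trees, boosting, and an SVM with a
# Gaussian (RBF) kernel, combined by a weighted average of class probabilities.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

forest = RandomForestClassifier(n_estimators=300, random_state=0)
boosted = GradientBoostingClassifier(random_state=0)
# probability=True is required so the SVM can contribute to soft voting.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))

# Soft voting averages each model's predicted probabilities, weighted per model.
ensemble = VotingClassifier(
    estimators=[("rf", forest), ("gbm", boosted), ("svm", svm)],
    voting="soft",
    weights=[2, 2, 1],  # illustrative weights; in practice, tune on a validation set
)

scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Here the weights are fixed by hand; the comment's point is that even a crude weighted average of diverse models tends to beat any single one of them.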