科技回声

A tech news platform built with Next.js, providing global technology news and discussion.


© 2025 科技回声. All rights reserved.

Ask HN: Rules of thumb to test feasibility of Machine learning applications?

29 points, by adamwi, over 8 years ago
With recent developments in the field of machine learning, there are plenty of problems that can now be solved that were not possible a couple of years ago. As someone with only a basic understanding of the field, I find it hard to judge the feasibility of product ideas involving machine learning.

Given that unique and relevant data sets are often hard to come by (for obvious reasons), I'm wondering if there are any good rules of thumb for judging the feasibility of different ideas.

Let me give you a concrete example: build a system that looks at medical records and approximates the risk of a certain illness. It is fairly easy to get overall data on how common the illness is, which symptoms are relevant, and even whether those symptoms are commonly recorded in medical records. But the granular data in the actual medical records is fairly hard to come by and would require a significant effort to collect. In this situation it would be preferable to do some approximations on, e.g., how many medical records are needed to reach a certain precision before pursuing the idea and starting to collect data.

A less well-defined example would be: build an application that identifies whether a picture contains a golden retriever with a red scarf around its neck. Here, too, it would be relevant to have rough numbers on the number of data points needed, etc. (even if the actual data in this case is probably much easier to come by).

In the first case I could probably get OK approximations using statistics and assuming normal distributions, but it is less straightforward in the second example.
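For the first kind of back-of-the-envelope estimate the question asks about — how many records are needed to estimate a prevalence to a given precision — the standard normal-approximation formula for a proportion is a minimal starting point. A hedged sketch (the prevalence and margin below are illustrative, not from the post):

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Approximate number of records needed to estimate a prevalence p
    to within +/- margin at ~95% confidence (normal approximation):
    n = z^2 * p * (1 - p) / margin^2."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# e.g. an illness with ~10% prevalence, estimated to within +/-2 points:
print(sample_size_for_proportion(0.10, 0.02))  # → 865
```

Note this only sizes a prevalence estimate; the number of records needed to *train* a predictive model is a separate (and usually larger) question, better answered empirically with a learning curve.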

2 comments

PaulHoule, over 8 years ago
If you're interested in commercialization you should start from day one with some estimate of the value the application creates. That is, "saves $X" or "creates $X in revenue".

I work in the natural language and item matching areas, and in those cases I do what I call "preliminary evaluation": working a small number of cases (say 10-20) in depth and putting together some story about what kind of outputs would be expected, what the actual requirements are, and what a decision process is going to have to take into account. You've got to put together a plausible story that the decision process exists.

For your case I would say the dog example is more feasible than the health care one. The caveat is what the negatives are like for the dog: are we looking at photos that have a lot of yellow and red? Are we looking at photos of dogs, etc.? As for health care, prediction just adds to the health care boondoggle unless you can make the case that it improves outcomes and cost, as opposed to just getting a better score at Kaggle.

In the case of text examples, I'd say you want 10,000 examples of items in the class, and at least that many outside it, if you are doing a problem that bag-of-words can handle, to get results you'd really be proud of. You might get that down to as little as 1,000 if some dimensionality reduction is in use.

The center of my approach, when precision matters, is case-based reasoning: you find one simple strategy that works, say, 70% of the time, then a patch that gets you to 80%, and then you keep adding exceptional cases to work up the asymptote. In a lot of cases like that you can establish a proof of a lower bound on how accurate the results are and work up to handling more and more cases.

A core issue, though, is evaluating what matters, which is why I say follow the money. There is no better way to destroy evaluators than making them split hairs that don't matter.
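The "one simple strategy plus patches" cascade described above can be sketched as an ordered rule list, where narrower exception rules are tried before the general fallback. Everything here — the rule conditions, the toy photo record, the field names — is hypothetical, just to show the shape of the approach:

```python
# Hypothetical sketch of a case-based cascade: try each exception rule
# in order; fall through to the simple default strategy if none match.
def classify(item, rules, default):
    """Return the label of the first matching rule, else the default."""
    for condition, label in rules:
        if condition(item):
            return label
    return default(item)

# Toy task from the thread: "golden retriever with a red scarf?"
rules = [
    # patches: reject obvious negatives before the general rule runs
    (lambda x: "dog" not in x["tags"], "no"),
    (lambda x: "red" not in x["colors"], "no"),
]
default = lambda x: "yes" if "retriever" in x["tags"] else "no"

photo = {"tags": ["dog", "retriever"], "colors": ["red", "gold"]}
print(classify(photo, rules, default))  # → "yes"
```

Each new patch is a new `(condition, label)` pair prepended or appended to `rules`, which is what makes the accuracy climb toward the asymptote incrementally — and each rule can be checked against the labeled cases it was written for.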
Comment #12987607 not loaded.
bioboy, over 8 years ago
A lot of what machine learning offers goes beyond correlation; it's about interactions between variables producing a result. So think multivariate analyses. If you can do a multivariate analysis that gets you to something statistically significant for a certain disease, then it would probably be worth checking out.

Think of it this way: machine learning is all about grabbing the features where we would normally say "duh, it's right there, that's what's causing it" — but in an automated manner. So how do we make the rules for it?

We need many, MANY examples. If you can provide CONCRETE examples for each occurrence, then you MAY have a chance at giving it some sort of predictive capability.

The more important issue is HOW you plan to extract these features, the things that make you go "duh, that's what's causing it." So focus on this last part, and the rest will come easier.
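The point about interactions mattering beyond simple correlation can be shown with a toy synthetic dataset (entirely made up for illustration): neither symptom alone correlates with the outcome, but a feature built from their interaction predicts it perfectly.

```python
# Pearson correlation, pure Python, no dependencies.
def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic toy data: the "disease" occurs only when exactly one of the
# two symptoms is present (an XOR-style interaction pattern).
a = [0, 0, 1, 1]
b = [0, 1, 0, 1]
disease = [0, 1, 1, 0]
inter = [x + y - 2 * x * y for x, y in zip(a, b)]  # interaction feature

print(corr(a, disease), corr(b, disease))  # → 0.0 0.0
print(corr(inter, disease))                # → 1.0
```

This is why a univariate screen can miss what a multivariate model finds: the signal lives in the combination of variables, not in either one alone.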