Is there a concept of safety trademark in ML black box models?

1 point by aptrishu over 7 years ago
How would someone trust that a black box model is not biased (or has other such problems), especially when it comes to deployment in mission-critical applications such as health care? How do we manage the tradeoff between accuracy and intelligibility?

1 comment

PaulHoule over 7 years ago
I think that the quest for interpretability is often a red herring. What people need is the ability to tell a model what to do when it is straightforward to do so.

For instance, at the convenience store near me there is a sign that says "To buy alcohol you must (A) be 21 years of age and (B) not visibly intoxicated."

The first of those is easily addressed as a rule; the latter is a statistical kind of thing -- different raters are going to disagree about whether certain people are "visibly intoxicated," and (in real life) it is contextual. It is one thing to send somebody off to drive home drunk; it is another if they are not driving.

An interpretable model might tell you that it learned it is OK to serve people who are 20.95 years of age, but that is for chumps: you should just punch in 21 from the beginning.

Similarly, a model that is biased against black people might not have an explicit reference to "X is black" but might instead learn to discriminate based on geography or other characteristics. On the other hand, in health care there are some cases where you do want to take race into account: Blacks react differently to drugs for heart failure, while Naltrexone seems to be highly effective for alcoholism in Asians, moderately effective in Whites, and barely effective in Blacks.
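
A minimal sketch, in Python, of the split described above: the hard age rule is encoded directly, while the "visibly intoxicated" judgment is delegated to a statistical score with an explicit threshold. All names here (intoxication_score, may_buy_alcohol, the feature keys) are hypothetical placeholders for illustration, not any real library's API.

    # Combine a hard-coded rule with a statistical model.
    # The intoxication model below is a toy stand-in so the sketch runs on its own.

    LEGAL_DRINKING_AGE = 21  # encode the known rule directly; don't let a model learn "20.95"

    def intoxication_score(features: dict) -> float:
        """Stand-in for a trained model returning a probability of visible intoxication."""
        return min(1.0, 0.3 * features.get("slurred_speech", 0.0)
                        + 0.4 * features.get("unsteady_gait", 0.0)
                        + 0.3 * features.get("smell_of_alcohol", 0.0))

    def may_buy_alcohol(age: int, features: dict, threshold: float = 0.5) -> bool:
        # (A) Hard rule: handled symbolically, fully interpretable.
        if age < LEGAL_DRINKING_AGE:
            return False
        # (B) Statistical judgment: handled by the model, gated by an explicit threshold.
        return intoxication_score(features) < threshold

    if __name__ == "__main__":
        print(may_buy_alcohol(20, {}))                      # False: fails the hard age rule
        print(may_buy_alcohol(30, {"unsteady_gait": 1.0}))  # True: score 0.4 is under the 0.5 threshold

The point of the split is that the rule part never needs "interpreting" after the fact, while the threshold on the statistical part is the single knob you audit and argue about.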