
There is a blind spot in AI research

98 points by _lm_ over 8 years ago

12 comments

hyperpallium over 8 years ago
I wish HN discouraged clickbait titles, even when the article itself uses that title. One solution is to append the answer to the original title, separated by a "|", as used by: https://www.reddit.com/r/savedyouaclick/

For example:

There is a blind spot in AI research | Autonomous systems are already ubiquitous, but there are no agreed methods to assess their effects

If it's interesting, I'll still click.
Comment #12716740 not loaded
Comment #12716834 not loaded
Comment #12716640 not loaded
Comment #12717231 not loaded
Houshalter over 8 years ago
This is vastly overblown. People already vastly distrust algorithms. Psychologists have studied it and call it "algorithm aversion": when given a choice between a computer and a human, people distrust the computer even when it makes much better predictions.

In almost every domain where there is data and a simple prediction task, even really crude statistical methods outperform "experts". This has been known for decades. Yet in almost every domain algorithms are resisted, because people distrust them so much, or fear losing their jobs, or all of the above.

But humans are vastly more biased. Unattractive people get sentences twice as long. People heavily discriminate based on political affiliation, not to mention race or gender. Judges give far harsher sentences when they are hungry. Interviews negatively correlate with job performance.

Humans are The Worst. Anywhere they can be replaced with an algorithm, they should be.

The referenced ProPublica result has been criticized here: https://www.chrisstucchio.com/blog/2016/propublica_is_lying.html ("almost statistically significant")
Comment #12717341 not loaded
Comment #12716733 not loaded
Comment #12716471 not loaded
Itsdijital over 8 years ago
Look at Tesla with its "autopilot" feature. It's not really a true autopilot, more a driving assist, but people treat it like one. I think it's easy for people to fall into the trap of relying really hard on something that is shiny, new, and works well despite being imperfect, even if it is explicitly stated to be.

Nanotech has a similar problem at hand. There are indications that nanoparticles could have serious health effects. Despite this, researchers are pushing ahead full steam with bringing nanotech to market. The money going into development far exceeds what goes into testing safety. In an AMA with a nano-materials researcher I asked if he ever has concerns about the safety of what he is making. His response was along the lines of "Sure I do, but it's not my job to deal with that. I just get paid to develop the tech."

Tech development has always had a shoot-first, ask-questions-later approach.
Comment #12715884 not loaded
Comment #12717446 not loaded
ggchappell over 8 years ago
Key quote:

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

-- Pedro Domingos, in The Master Algorithm (2015)
Comment #12715610 not loaded
Comment #12715661 not loaded
Comment #12715611 not loaded
Comment #12716791 not loaded
matco11 over 8 years ago
Key thought: AI's social and cultural impact; in other words, AI's implications for "social mobility" (your rags-to-riches stories, the American dream).

The authors are asking: in a hypothetical world where many decisions are AI-assisted, what is the risk that AI systems slow social change because they are too dumb to understand exceptions, peculiarities, positive externalities? And what can we do to establish parameters that let us know when a given AI system is trained well enough to be used in the real world, with minimal risk of undesired social and cultural implications?
visarga over 8 years ago
Systematic analysis of AI biases is certainly needed. We train models on data, but how is the data collected, and how biased is it? At least in AI we can compensate for biases, but in human society they are much harder to counter. There's hope for a better future if we can make fair AI.
Comment #12715733 not loaded
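An illustrative aside on what "compensating for biases" can mean mechanically: the sketch below shows inverse-frequency reweighting, one deliberately simple technique for keeping an under-represented group from being drowned out during training. It is a minimal sketch under assumed, hypothetical group labels, not anything described in the article or the comment.

```python
# Minimal sketch (hypothetical data): inverse-frequency reweighting,
# one simple way to compensate when one group is under-represented
# in the training set.
from collections import Counter

def inverse_frequency_weights(groups):
    """Give each sample a weight proportional to 1 / (size of its group),
    normalized so the average weight is 1.0. Under-represented groups then
    contribute as much to the total loss as over-represented ones."""
    counts = Counter(groups)
    n_groups = len(counts)
    n_total = len(groups)
    return [n_total / (n_groups * counts[g]) for g in groups]

if __name__ == "__main__":
    # 8 samples from group "A", 2 from group "B" (synthetic, for illustration).
    groups = ["A"] * 8 + ["B"] * 2
    print(inverse_frequency_weights(groups))  # A-samples get 0.625, B-samples get 2.5
```

The weights can then be passed to any loss function or estimator that accepts per-sample weights; the point is only that the compensation the comment mentions can be made explicit and auditable.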
Animats over 8 years ago
Scary robot video of the month.[1] This is not just a blind, repetitive operation; previous X-rays and laser scans tell it what to do.

[1] https://youtu.be/MZIv6WtSF9I?t=245
Comment #12716815 not loaded
SFJulie over 8 years ago
With or without AI we already have issues with too much automation/assistance, and it gets bad when the automation fails or isn't maintained.

Basically, if you can drive a manual car it is easy to drive an automatic one, but the opposite is not true.

When I was a mover, GPS units old and new got the address wrong about 15% of the time. And when the GPS fails, what do you do if you have no usable maps?

Well, we have fired the people who made the maps; they are hardly updated at the pace at which mayors and real-estate promoters are changing the territory. If you have an awesome GPS with no updated maps, your GPS is useless, no?

We are forgetting to do the heavy, costly, underlying maintenance of maps and directions, and to train drivers to read signs, figuring GPS made all that obsolete. Now we have to maintain maps, satellites, and computers, and live with people who cannot use a map and a compass, who are distracted while driving by potentially wrong information, and who are too oblivious to read the sign saying they are entering a one-way street against traffic, because they are relying on their GPS.

Then too, automation at Airbus, Tesla, and Boeing has proven less valuable than pilots' experience when computers fail due to false negatives (frozen Pitot probes) or false positives (sun blinding cameras). I think civil and military accident records are a good source of information about the "right level of automation".

The problem is that keeping workers up to date requires constant, heavy practice without too much automation. And human time nowadays is expensive.

That is one of the reasons France (unlike Japan) kept automation in nuclear plants rudimentary: when a system is critical, you really prefer a human who can handle things 99.999% of the time over a computer that does great 100% of the time if and only if its sensors work and nothing too catastrophic happens (flood, tsunami, earthquake).

The problem is that industry wants to skimp on costly training and education (not the university kind, I mean the useful kind). Knowledge you have not yet acquired because circumstances changed (I will be delighted to see how self-driving cars behave in massive congestion with deadlocks) will be hard to program if we lose the common sense that comes from doing the work ourselves. How do you correct a malfunctioning machine at a task you have forgotten how to do correctly yourself? You may not even know when it will fail, not because of the machine, but because you have lost your frame of reference.
cs2818 over 8 years ago
I'm not too sure how practical the suggested "social-systems analysis" approach is. It is summarized as:

"A practical and broadly applicable social-systems analysis thinks through all the possible effects of AI systems on all parties."

which seems incredibly difficult to do completely. Hopefully the authors will further describe their approach in future publications.

Also, somewhat of a nitpick, but the article states:

"The company has also proposed introducing a 'red button' into its AI systems that researchers could press should the system seem to be getting out of control."

in reference to Google, but cites a paper which discusses mitigating the effects of interrupting reinforcement learning [0]. The paper makes a passing reference to a "big red button" as this is a common method for interrupting physically situated agents, but that is certainly not the contribution or focus of the work.

[0] https://intelligence.org/files/Interruptibility.pdf
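An illustrative aside on the distinction being drawn: the sketch below is only the naive, mechanical reading of a "red button", that is, an operator override wrapped around an ordinary epsilon-greedy policy, with hypothetical names and values throughout. It is not the safe-interruptibility construction from the cited paper, whose concern is how to interrupt a learning agent without the agent adapting its behaviour around those interruptions.

```python
# Toy sketch (hypothetical names): a naive operator override around a
# Q-learning action choice. This only illustrates what a "big red button"
# means mechanically; it is not the cited paper's contribution.
import random

def choose_action(q_table, state, actions, epsilon=0.1):
    """Standard epsilon-greedy selection over a dict-based Q-table."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def step(q_table, state, actions, interrupted, safe_action="STOP"):
    """If the operator has pressed the button, override the agent.
    The subtle problem the paper studies: a learning agent can come to treat
    such overrides as ordinary feedback and learn to avoid (or seek) the
    situations in which it gets interrupted, unless the update rule is
    designed to discount interrupted transitions."""
    if interrupted:
        return safe_action
    return choose_action(q_table, state, actions)

if __name__ == "__main__":
    q = {(0, "LEFT"): 0.2, (0, "RIGHT"): 0.5}
    print(step(q, state=0, actions=["LEFT", "RIGHT"], interrupted=False))  # usually "RIGHT"
    print(step(q, state=0, actions=["LEFT", "RIGHT"], interrupted=True))   # "STOP"
```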
ccvannorman over 8 years ago
This is another angle of the "Weapons of Math Destruction" argument, and it looks very relevant. Those who work on big data (esp. public sector) would be wise to consider the implications.
Animats over 8 years ago
For a good analysis of the problems of living with AIs, read the web comic Freefall.[1] This long-running comic has addressed most of the moral issues over the last two decades. Of course, that's about 3000 comics to go through.

[1] http://freefall.purrsia.com/
frozenport over 8 years ago
The problem is that the AI community is treating itself like non-experts. Explaining that AI needs to be controlled by telling horror stories of robot domination is a good way to motivate research to lay people, but it is a distraction for professionals.
Comment #12715890 not loaded
Comment #12715666 not loaded