
Courts reshape the rules around AI

43 points by morisy over 5 years ago

4 comments

akersten over 5 years ago
This is a pattern we see time and time again: companies hiding behind "we can't tell you how it works, it's a trade secret, you just have to trust us that it does." And these companies land exclusive government contracts. Police drug tests, electronic voting machines, and now police face recognition.

We need to demand transparency of any company that receives this kind of special treatment, and require them to disclose statistical analysis of their solution at a minimum. How often is it wrong? How was it verified? If they can't do that, then no deal, and no taxpayer funded boondoggle.
anon1m0us over 5 years ago
Three of the cases cited are incidents where state organizations are buying AI, which then benefits that state organization at the expense of its citizens: people were arrested; others got fewer benefits.

These systems are black boxes. The software companies have a financial incentive to sell them. The programmers have a financial incentive to get the customer what they *want*, not what is honest and true. If this software meant the customer would have to pay out *more* benefits, how many states would buy it?

The same thing happens when the product is *not* AI. AI is a product. Manufacturers of the product are liable. The product should be open to investigation.

When Ford Pintos were killing people, the Ford Pinto could be examined. The 737MAX can be examined.

AI can't be examined. The decisions it makes often can't even be *explained*.

Companies are using AI as a shield. Someone here said the other day that they think people are actually making a lot of the decisions Google makes, and saying it was algorithmic absolves them of the responsibility to explain the decision.

This is not a good way forward. You can't say, "I don't know *why* the machine is hurting people."

It's hurting people. Shut it off.
YeGoblynQueenne over 5 years ago
"Algorithmic Decision System" is not a good term. Some of the systems that are described this way do not make decisions, strictly speaking. For example, facial recognition systems don't make decisions; they make *identifications*. They are classifiers, yes? Planners, game-playing algorithms, decision trees and decision lists, etc., are systems that are commonly thought of as making "decisions", but those are very rarely the subject of scrutiny of AI systems these days.

Take for instance a system that is used to determine whether a person is at risk of recidivism. The system will cough up some number, probably a float from 0 to 1. The number will be _interpreted_ as a probability that the person will recidivate. Then, based on this _interpretation_, a decision will be made by the person or persons using the system: whether to treat the person as having a high risk of recidivism or not. The system hasn't decided anything at that point; it's the person using the system that has made a decision.

The matter is complicated somewhat by the existence of systems that *incorporate* AI algorithms in a more general automated decision process. For example, self-driving cars use image recognition algos to identify objects in their path, but navigation decisions are not taken by the image recognition algos! However, I'd wager that those kinds of integrated systems are not what most people think of when they speak of "algorithms" making "decisions". But I may well be wrong.
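(A minimal, purely illustrative sketch of the distinction this comment draws; the scoring function, its weights, and the threshold below are invented for illustration, not taken from any real risk-assessment product. The point is that the model only emits a number, while turning that number into a decision is a separate threshold chosen by the people deploying it.)

    # Illustrative sketch only: hypothetical scoring model and threshold,
    # not any real risk-assessment system.

    def risk_score(prior_offenses: int, age: int) -> float:
        """Stand-in 'AI' component: emits a score in [0, 1], nothing more."""
        raw = 0.5 + 0.1 * prior_offenses - 0.005 * (age - 18)
        return max(0.0, min(1.0, raw))

    def operator_decision(score: float, threshold: float = 0.7) -> str:
        """The actual decision: a policy threshold applied by the human operator."""
        return "treat as high risk" if score >= threshold else "treat as low risk"

    score = risk_score(prior_offenses=2, age=25)
    print(f"score={score:.2f} -> {operator_decision(score)}")

Changing the threshold changes who gets flagged, and that choice sits with the deploying agency, not with the scoring code.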
AlanYx over 5 years ago
The title is misleading -- few or none of the examples given in this article, as far as I can tell, use AI or machine learning. They're just "automated systems" in the sense of computer systems that execute regular business rules. For example, the system at issue in the K.W. v. Armstrong case discussed in the article wasn't an AI system; it was just a pretty amateurish ad-hoc Excel spreadsheet.

The report quoted in the article (the 2019 AI Now "Litigating Algorithms" report) also shares the same basic problem, making no serious attempt to distinguish between AI and non-AI systems.