
Human-AI Partnership for Mixed-Initiative Fact-Checking [ACM UIST]

1 point by 100ideas, over 6 years ago

1 comment

100ideas, over 6 years ago
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking
An T. Nguyen, Aditya Kharosekar, Saumyaa Krishnan, Siddhesh Krishnan, Elizabeth Tate, Byron C. Wallace, Matthew Lease
(UIST '18: ACM User Interface Software and Technology Symposium, Session: Crowds and Human-AI Partnership)

Abstract: Fact-checking, the task of assessing the veracity of claims, is an important, timely, and challenging problem. While many automated fact-checking systems have been recently proposed, the human side of the partnership has been largely neglected: how might people understand, interact with, and establish trust with an AI fact-checking system? Does such a system actually help people better assess the factuality of claims? In this paper, we present the design and evaluation of a mixed-initiative approach to fact-checking, blending human knowledge and experience with the efficiency and scalability of automated information retrieval and ML. In a user study in which participants used our system to aid their own assessment of claims, our results suggest that individuals tend to trust the system: participant accuracy in assessing claims improved when exposed to correct model predictions. However, this trust perhaps goes too far: when the model was wrong, exposure to its predictions often degraded human accuracy. Participants given the option to interact with these incorrect predictions were often able to improve their own performance. This suggests that transparent models are key to facilitating effective human interaction with fallible AI models.

Video: https://www.youtube.com/watch?v=3GDA3jSzRgs

DOI: https://doi.org/10.1145/3242587.3242666

PDF: http://www.cs.utexas.edu/~atn/nguyen-uist18.pdf

Code: https://github.com/thanhan/uist18

Author: https://www.cs.utexas.edu/~atn/
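
To make the "mixed-initiative" idea concrete, here is a toy Python sketch of the general pattern the abstract describes, not the authors' actual system (their code is at the GitHub link above). The Evidence class, the predict_veracity function, and the source names are invented for illustration: an automated model scores retrieved evidence for stance and source reliability, and the user can override the reliability weights to see how the aggregate veracity estimate shifts.

    # Toy illustration (not the paper's implementation) of a transparent,
    # user-adjustable veracity estimate over retrieved evidence.
    from dataclasses import dataclass

    @dataclass
    class Evidence:
        source: str
        stance: float       # model-estimated stance: +1 supports the claim, -1 refutes it
        reliability: float  # model-estimated source reliability in [0, 1]

    def predict_veracity(evidence, overrides=None):
        """Weighted average of stances; `overrides` maps source -> user-set reliability."""
        overrides = overrides or {}
        weights = [overrides.get(e.source, e.reliability) for e in evidence]
        total = sum(weights)
        if total == 0:
            return 0.0
        return sum(w * e.stance for w, e in zip(weights, evidence)) / total  # >0 leans true

    evidence = [
        Evidence("site-a.example", stance=+0.8, reliability=0.9),
        Evidence("site-b.example", stance=-0.6, reliability=0.4),
    ]
    print(predict_veracity(evidence))                           # model-only estimate
    print(predict_veracity(evidence, {"site-a.example": 0.2}))  # user distrusts site-a

Exposing and letting users adjust such weights is one way a transparent model can invite the kind of interaction the study found helpful when the model's prediction is wrong.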