科技回声 (Tech Echo)

Why the Military Can't Trust [LLM] AI

2 points, by temporarely, about 1 year ago

1 comment

temporarely, about 1 year ago
https://archive.is/3av3I

"LLMs develop most of their skills during pretraining—but success depends on the quality, size, and variety of the data they consume. So much text is needed that it is practically impossible for an LLM to be taught solely on vetted high-quality data. This means accepting lower quality data, too. For the armed forces, an LLM cannot be trained on military data alone; it still needs more generic forms of information, including recipes, romance novels, and the day-to-day digital exchanges that populate the Internet."

Being reminded of that fact, it occurs to me that LLMs are to decision-making systems what Mortgage-Backed Securities (MBS) were to investment: a "AAA" rating on tranches full of crap.

That didn't end well, did it?
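The quoted passage's point — that a model cannot be pretrained on vetted data alone, so generic web text dominates the mix — can be illustrated with a toy sampler. This is a minimal sketch, not any real training recipe: the source names, weights, and helper functions below are all hypothetical.

```python
import random

# Hypothetical corpus mixture: (source, sampling weight, vetted?).
# Weights are illustrative only; real pretraining mixes differ.
MIXTURE = [
    ("vetted_military_docs", 0.05, True),
    ("curated_reference",    0.15, True),
    ("generic_web_text",     0.60, False),
    ("fiction_and_recipes",  0.20, False),
]

def sample_sources(n, rng=None):
    """Draw the sources of n training documents according to the mixture weights."""
    rng = rng or random.Random(0)
    names = [name for name, _, _ in MIXTURE]
    weights = [w for _, w, _ in MIXTURE]
    return rng.choices(names, weights=weights, k=n)

def vetted_fraction(samples):
    """Fraction of sampled documents that came from vetted sources."""
    vetted = {name for name, _, ok in MIXTURE if ok}
    return sum(1 for s in samples if s in vetted) / len(samples)

draws = sample_sources(10_000)
# With these illustrative weights, only about one fifth of sampled
# documents come from vetted sources.
print(f"vetted share: {vetted_fraction(draws):.2%}")
```

The point of the sketch is only that weighted sampling makes the unvetted majority of the mixture dominate what the model actually sees during pretraining.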