TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Why the Military Can't Trust [LLM] AI

2 points by temporarely, about 1 year ago

1 comment

temporarely, about 1 year ago
https://archive.is/3av3I

"LLMs develop most of their skills during pretraining—but success depends on the quality, size, and variety of the data they consume. So much text is needed that it is practically impossible for an LLM to be taught solely on vetted high-quality data. This means accepting lower quality data, too. For the armed forces, an LLM cannot be trained on military data alone; it still needs more generic forms of information, including recipes, romance novels, and the day-to-day digital exchanges that populate the Internet."

Being reminded of that fact, it occurs to me that LLMs are to decision-making systems what Mortgage-Backed Securities (MBS) were to investment: a "AAA" rating on tranches full of crap.

That didn't end well, did it?