
Anthropic: Expanding Access to Claude for Government

113 points, by Luuucas, 11 months ago

10 comments

alach11, 11 months ago
There's no doubt that LLMs massively expand the ability of agencies like the NSA to perform large-scale surveillance at a higher quality. I wonder if Anthropic (or other LLM providers) ever push back or restrict these kinds of use cases? Or is that too risky for them?
noodlesUK, 11 months ago
I can imagine that for many government tasks, there would be a need for a reduced-censorship version of the AI model. It's pretty easy to run into the guardrails on ChatGPT and friends when you talk about violence or other spicy topics.

This then begs the question of what level of censorship reduction to apply. Should government employees be allowed to, e.g., war-game a mass murder with an AI? What about discussing how to erode civil rights?
ryanackley, 11 months ago
I find all of the virtue signalling from AI companies exhausting.
nameless101, 11 months ago
So, basically all "confidential" information, if you are a subject "of interest", will be in the cloud and used to train models that can spit it out again. And the models will confabulate stories about you.

They can call themselves "sonnet", "bard", "open", and a whole plethora of other positive things. What remains is that they are going in the direction of Palantir, and the rest is just marketing.
andrepd, 11 months ago
> Claude offers a wide range of potential applications for government agencies, both in the present and looking toward the future. Government agencies can use Claude to provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios. In the near future, AI could assist in disaster response coordination, enhance public health initiatives, or optimize energy grids for sustainability. Used responsibly, AI has the potential to transform how elected governments serve their constituents and promote peace and security.

> For example, we have crafted a set of contractual exceptions to our general Usage Policy that are carefully calibrated to enable beneficial uses by carefully selected government agencies. These allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them.

Sometimes I wonder if this is cynicism or if they *actually* drank their own Kool-Aid.
tootie, 11 months ago
Is the announcement just that they're on the AWS Marketplace for GovCloud? Do people ever actually make use of the AWS Marketplace? It just seems like a way to skirt procurement.
potwinkle, 11 months ago
I wonder if they really intend to control the ethics of Sonnet's use in government or if it's just a nice thing to say.
bionhoward, 11 months ago
Meanwhile, the best models with sensible OSI-approved licenses are from China.

What are the security implications if American corpos like Google DeepMind, Microsoft GitHub, Anthropic and "Open"AI have explicitly anticompetitive / noncommercial licenses out of greed/fear, so the only models people can use without fear of legal repercussions are Chinese?

Surely, Capitalism wouldn't lead us to make a tremendous unforced error at societal scale?

Every AI is a sleeper-agent risk if nobody has the balls and/or capacity to verify its inputs. Guess who wrote about that? https://arxiv.org/abs/2401.05566
danlitt, 11 months ago
Is there really anyone who thinks this is a good idea? AI systems routinely spit out false information. Why would a system like that be anywhere near a government?

Perhaps (optimistically) this is just a credibility grab from Anthropic, with no basis in fact.
localfirst, 11 months ago
Going forward, be very, very wary of inputting sensitive information into Anthropic or OpenAI products, especially if you work for a foreign government or corporation.

Listen to Edward Snowden. This guy is not fucking around.