There's no doubt that LLMs massively expand the ability of agencies like the NSA to perform large-scale surveillance at higher quality. I wonder if Anthropic (or other LLM providers) ever push back on or restrict these kinds of use cases? Or is that too risky for them?
I can imagine that for many government tasks, there would be a need for a reduced-censorship version of the AI model. It's pretty easy to run into the guardrails on ChatGPT and friends when you talk about violence or other spicy topics.

This then raises the question of what level of censorship reduction to apply. Should government employees be allowed to, e.g., war-game a mass murder with an AI? What about discussing how to erode civil rights?
So, basically all "confidential" information, if you are a subject "of interest", will end up in the cloud and be used to train models that can spit it out again. And the models will confabulate stories about you.

They can call themselves "sonnet", "bard", "open" and a whole plethora of other positive things. What remains is that they are heading in the direction of Palantir, and the rest is just marketing.
> Claude offers a wide range of potential applications for government agencies, both in the present and looking toward the future. Government agencies can use Claude to provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios. In the near future, AI could assist in disaster response coordination, enhance public health initiatives, or optimize energy grids for sustainability. Used responsibly, AI has the potential to transform how elected governments serve their constituents and promote peace and security.

> For example, we have crafted a set of contractual exceptions to our general Usage Policy that are carefully calibrated to enable beneficial uses by carefully selected government agencies. These allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them.

Sometimes I wonder if this is cynicism or if they *actually* drank their own Kool-Aid.
Is the announcement just that they're on the AWS Marketplace for GovCloud? Do people ever actually make use of the AWS Marketplace? It just seems like a way to skirt procurement.
Meanwhile, the best models with sensible OSI-approved licenses are from China.

What are the security implications if American corpos like Google DeepMind, Microsoft GitHub, Anthropic, and “Open”AI have explicitly anticompetitive or noncommercial licenses out of greed or fear, so the only models people can use without fear of legal repercussions are Chinese?

Surely Capitalism wouldn’t lead us to make a tremendous unforced error at societal scale?

Every AI is a sleeper-agent risk if nobody has the balls and/or capacity to verify its training inputs. Guess who wrote about that? https://arxiv.org/abs/2401.05566
Is there really anyone who thinks this is a good idea? AI systems routinely spit out false information. Why would a system like that be anywhere near a government?

Perhaps (optimistically) this is just a credibility grab by Anthropic, with no basis in fact.
Going forward, be very, very wary of inputting sensitive information into Anthropic or OpenAI products, especially if you work for a foreign government or corporation.

Listen to Edward Snowden. This guy is not fucking around.