Seems relatively reasonable, certainly better than hiring Deloitte for more money and worse results. I doubt it's his only source of information.<p>I'm very surprised this is subject to FOIA. I'll bet he learns to disable memory and use incognito chats now, though.
It sounds bad, but the alternative is policy makers pulling decisions out of their arses, so maybe this really is a tool that can improve public management.<p>The problem is that the owner of the AI service has power over the model and can thus influence government decisions.<p>Maybe we should require that government use of AI be restricted to open models only.
I'm curious, did he not clean out old chats that were no longer needed? If UK FOI applies to this now, does that introduce recordkeeping/retention requirements?
I think there's something to be said for an LLM for government policy. It should be trained for the country in question and open sourced. Then, as well as the politicians asking what we should do about the NHS or what have you, the public could do so as well and see where the ideas were coming from.<p>Given the current state of LLMs it would only be giving advice, but even so it would probably give better advice than some of the idiots that get elected. Also, maybe more importantly, it would be a known quantity, whereas humans can be deceptive.
Here is the list of assigned /8 IPv4 blocks (25.0.0.0/8 belongs to the UK Ministry of Defence):<p><a href="https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_address_blocks" rel="nofollow">https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_addre...</a><p>Here is the nginx rule so that requests from the UK government go to the right place (nginx's if can't match CIDR ranges and has no else, so the geo module does the job):<p><pre><code> geo $backend_pool {
     default grok_for_poor_and_stupid_uk_people;
     25.0.0.0/8 grok_with_special_information_for_really_smart_uk_tech_ministers;
 }
</code></pre>
I don't have actual information that Grok names the backend servers this way, btw. Nor do I know whether Peter Kyle is using Tor.<p>This was generated with help from Claude, so please verify against your own sources.
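For anyone curious how that CIDR check works outside nginx, here's a minimal sketch using Python's stdlib ipaddress module (the backend names are of course the same joke, not real infrastructure):<p><pre><code> import ipaddress

 # 25.0.0.0/8 is the /8 block assigned to the UK Ministry of Defence
 MOD_BLOCK = ipaddress.ip_network("25.0.0.0/8")

 def backend_for(addr: str) -> str:
     """Pick a (joke) backend pool based on the client IP."""
     if ipaddress.ip_address(addr) in MOD_BLOCK:
         return "grok_with_special_information_for_really_smart_uk_tech_ministers"
     return "grok_for_poor_and_stupid_uk_people"

 print(backend_for("25.1.2.3"))  # an address inside the MoD /8
 print(backend_for("8.8.8.8"))   # everyone else
 </code></pre>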
Very interesting point about whether AI chats are personal data, like email or a WhatsApp message, or whether they should be treated more like a web search. Certainly the AI isn't a person....
I think all of the comments here are missing the point. We're beginning to see the outsourcing of critical thinking to a few technology companies. We've stopped valuing leaders who understand issues and can make sane decisions. Instead we pick leaders who present good sound bites or appear strong. They in turn don't understand the issues and can get quick and easy answers from AI that are, for the most part, pretty damn good.<p>The most insidious part is that it gives us the feeling that we are still in control, when in reality a few extremely powerful and extremely wealthy individuals are in a position to dramatically shape and shift policy.<p>15 years ago Google was still pretty damn good. Today most people agree that it's departed from its original goals and, in its efforts to make money, actively censors and shapes the results it serves.<p>The AI systems people use to make decisions will make the same shift once they have a lock on the market.<p>It's wild to watch the creation of a technological equivalent of the medieval Catholic Church: an entity that presents itself as benevolent to the masses but in reality exploits them and maintains a stranglehold over the political ruling class.
Those seem like non-idiotic, reasonable uses of ChatGPT. Reassuringly disappointing. My bigger concern is that a company like OpenAI has the resources to identify requests by bigwigs of interest, and a unique window into their head. With US friendship on dubious terms, I'd have thought GCHQ would have had a few stern talks with the government.
Wow, the UK is in some deep shit, governance-wise.<p>Sure, I already suspected as much given what has been happening for the past few decades, but they still manage to surprise me.