Governments around the world are salivating at the thought of using AI as a tool to control their people.<p>I hope I'll be around to see the surprise and shock on their faces when they realize the "tool" is controlling <i>them</i>.<p>It doesn't even have to be conscious in order for that to happen. Decision making will become so utterly reliant on AI at every step that the people who (on paper) make the decisions will be little more than robots who do whatever the AI advises them to. Anyone who fails to adhere to that rule will be quickly outcompeted and deposed by those that do.
I can't imagine what would happen if we let the government use AI when they have been so careless with our personal information. For years they have been using LINE, a messenger app operated by a Japanese subsidiary of NAVER, a Korean company, and they didn't notice that Japanese users' data was being stored on servers in South Korea and was accessible from China. How can we trust them with something as powerful and dangerous as AI when they can't even protect our privacy from foreign eyes? It's a scary thought.
It’s a bit of a dirty secret that Japan actually has abysmal data privacy and enforcement. The budget of the regulator in charge of these matters all but ensures that it will do too little, too late. Japanese companies need to work proactively to prevent OpenAI from grabbing data sets that put it in a position in Japan similar to the one Palantir holds with the NHS.<p>Those of you in the VC / startup space in Japan should lobby to keep this data within the country's borders, or you're going to lose the opportunity before you realize what you've lost. You'll spend the rest of your careers begging OpenAI for API access to models built on top of your own data.<p>Edit: By the way, it's worth noting that OpenAI already has a ton of data on Japanese people through its data-sharing deal with Twitter, which was semi-secret until Elon Musk exposed it. Because Twitter was pushed so aggressively onto the Japanese public, it became the country's default "open forum." That data set is, by itself, all you'd need for an LLM to generate text that appears to come from a real Japanese person. It's telling that the government is more focused on riding the buzz wave than on asking questions about the permissions involved in training LLMs on such deeply personal data.
The need for continually improving AI is inevitable. We are facing an aging-population bomb in which retirees will soon outnumber working adults. This is no longer just a problem in Japan, South Korea, and Italy; it's a growing problem nearly everywhere outside central Africa, and even the majority of Africa is affected.<p>Its cause is not primarily socioeconomic either: the more industrialized the nation, the more pronounced the effect.
The Japanese government has a feverish obsession with My Number, a universal ID system that encompasses everything from identity proof to health insurance and tax verification. Think of the possibilities if they paired it with AI. They would wield a formidable weapon to regulate the Japanese populace like never before.
> measures to remedy privacy breach concerns to the Italian regulator.<p>Nothing about IP theft, though. I suppose it's about time we updated open source licenses to include a clause banning use of the code for AI training.