If GPT emulations of social experiments are not correct, policy decisions based on them will make them so.

“GPT said people would hate buses, so we halved their number and slashed the transportation budget… Wow, our people really do hate buses with a passion!”

“A year ago GPT said people would not be worried about climate change, so we stopped giving it coverage and removed the related social adverts and initiatives. It turns out people really don’t give a flying duck about climate change. GPT was so right!”

This is an oversimplification, of course. To put it with more nuance: anything socio- and psycho- is a minefield of self-fulfilling prophecies, and ML seems nicely positioned to wreak havoc in it. (But the small “this is not a replacement for a human experiment” notice is going to be heeded by all, right?)

As someone once wrote, all you need for a machine dictatorship is an LLM and a critical number of human accomplices. No need for superintelligence or robots.