Absolutely. When my mom's Instagram got hacked, it was instantly apparent that the automated message it sent to me wasn't from her.
Train an AI on our conversation history instead, and I'd likely click any link it sent me.
I tend to agree. Maybe I'm getting old, but I feel a lot of new technology is mainly used for manipulating us instead of helping us. This is especially noticeable when it comes to information retrieval. I think things like ChatGPT are fantastic if used the right way. The world could become much more productive, with less repetitive work, if the benefits were shared by all. But in reality I suspect the benefits will go to only a few and the rest will only see negative effects.
It's sad how far we've strayed from Bellamy's Looking Backward style utopias and straight into Orwellian dystopias.

I predict the rise of more seasteaders, Oneida-style communes, digital retreats, maybe even extreme cosplay like *The Village* ... a ton of people are just going to nope out of this hellscape.

Maybe the metaverse is onto something...
Just when we thought that "AI, write me a speech to convince citizens that raising taxes/invading a country/restricting liberties/etc. is a good thing" was the worst possible outcome, we'll have "AI, write a speech to convince citizens that raising taxes/invading a country/restricting liberties/etc. is a good thing, then use your generated public images to flood the media, including social, with the message".
Yeah. You think social media is bad now? Just wait until the bot farms have full personas with personalities that can argue with everyone constantly to convince them of whatever nonsense someone is trying to push.

Marketing has already forced people to become defensive about communication. We don't answer phone calls, we throw out half or more of our mail without ever reading it, we scrutinize our email, we ignore the first 20 results of a search query, etc.

Now marketing has a new tool that will be much more difficult to distinguish from an actual human being. Society will suffer greatly and trust will erode even further.

What a bright future we've built for ourselves with technology.
Did anyone else miss the sleight of hand from OpenAI in all this? I thought their goal was safety in AI usage? https://openai.com/charter/

But then they just sold to Microsoft and the race began. How is that not a violation of their charter?
AIs don't manipulate people; people manipulate people.

Every AI has a legal owner, an author, and an agenda set by its human creators. Or at least there is always somebody to blame.

This is the basis for future legislation.
The movie Wolf of Wall Street shows Jordan's "straight line persuasion" technique: keep the human on a straight line toward your end goal of a sale, and when they deviate from that line, correct them back onto it. AI will be fantastic at this, with no moral dilemma!
I agree with the article that regulation will work in the near term, while it is still expensive to train and operate such systems, but in the long run we will need the AI equivalent of ad blockers that individuals can deploy to defend themselves against this type of exploitation. Arming everyone with their own personal advocate and security-auditor AI is the only way to defend against such exploits. Unfortunately this opens another can of worms, similar to how some ad blockers allow certain "good" ads through.
Even if you had pure provenance for each response, that still would not stop large tech companies from abusing the public's desire to "not know".

Consider search engines as an old example. It took multiple decades to even provide the concept of a "why am I seeing this?" affordance. That only became a topic recently, after the majority of people started caring about the legitimacy of the news as their values changed. Even then, it isn't particularly helpful, nor does it provide actual context based on your search history or advertisement profile.

Why? Because keeping people in the dark makes for great advertisement targets. When people become aware of why they were targeted, it's like holding up a mirror revealing our major flaws. We don't want to identify with certain things, even if an algorithm slaps the label on us. We are ashamed of such labels and want to immediately distance ourselves from them.

I believe the exact same thing will be injected into these models, to the point where you will be steered toward consumerist practices when you're just trying to accomplish something else entirely. So in a sense, you'd be manipulated and hijacked away from your original intent. Search engines and social media already do a great job of this today.

Now that there's a strategic shift across the industry, it is especially unlikely that any provenance will be provided for trained models. How exactly will you know that the model you're using isn't biased toward certain leanings?

> Regulators must consider this an urgent danger.

How will regulators even know, or care, if they don't see it already happening today?
As bad as the later seasons of Westworld are, the part where Aaron Paul's character gets a rejection phone call from a job "recruiter" was a prime warning about this sort of thing.

He asked the recruiter if there was anything he could do to be a better fit for the job. But the AI recruiter was designed to let him down easy about not being good enough, not to help him improve.
"Just tell me, are you a real person?"
Maybe this could be solved by teaching philosophy and logic in our schools. No matter how smart a conversational AI is, you can still frame its argument and decide on an outcome yourself.

But I suspect that governments won't want that. At least in Brazil, where I live, this kind of teaching was aggressively pushed back against a few decades ago. I guess they want voters who can't think straight :)
I don't see how to solve this. It's the same root cause that leads people to become 9/11 truthers or QAnon believers, or to vote for clearly nonsensical candidates.

It's a lack of critical thinking, and it's an epidemic.

When people believe whatever they want on Facebook because someone made a minimal effort to photoshop something to look like a news post, of course that's something AI can take advantage of.

But it isn't a new problem, and the solution is the same.

How do you educate a population that is aggressively resistant to acquiring knowledge?

I'm just here to watch the band play as the ship goes down.