科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion content.


Conversational AI will be used to Manipulate Us in real-time

92 points · by bonkerbits · over 2 years ago

16 comments

aqme28 · over 2 years ago
Absolutely. When my mom's Instagram got hacked, it was instantly apparent that the automated message it sent me wasn't from her. Train an AI on our conversation history instead, and I'd likely click any link it sent me.
rqtwteye · over 2 years ago
I tend to agree. Maybe I'm getting old, but I feel a lot of new technology is mainly used for manipulating us instead of helping us. This is especially noticeable when it comes to information retrieval. I think things like ChatGPT are fantastic if used the right way. The world could become much more productive, with less repetitive work, if the benefits were shared by all. But in reality I suspect the benefits will go to only a few, and the rest will see only negative effects.
kthejoker2 · over 2 years ago
It's sad how far we've strayed from Bellamy's Looking Backward-style utopias and straight into Orwellian dystopias.

I predict the rise of more seasteaders, Oneida-style communes, digital retreats, maybe even extreme cosplay like The Village ... a ton of people are just going to nope out of this hellscape.

Maybe the metaverse is onto something...
squarefoot · over 2 years ago
Just when we thought that "AI, write me a speech to convince citizens that raising taxes/invading a country/restricting liberties/etc. is a good thing" was the worst possible outcome, we'll have "AI, write a speech to convince citizens that raising taxes/invading a country/restricting liberties/etc. is a good thing, then use your generated public images to flood the media, including social, with the message".
AnIdiotOnTheNet · over 2 years ago
Yeah. You think social media is bad now? Just wait until the bot farms have full personas with personalities that can argue with everyone constantly to convince them of whatever nonsense someone is trying to push.

Marketing has already forced people to become defensive about communication. We don't answer phone calls, we throw out half or more of our mail without ever reading it, we scrutinize our email, we ignore the first 20 results of a search query, etc.

Now marketing has a new tool that will be much more difficult to distinguish from an actual human being. Society will suffer greatly, and trust will erode even further.

What a bright future we've built for ourselves with technology.
iou · over 2 years ago
Did anyone else miss the sleight of hand from OpenAI with all this? I thought their goal was safety in AI usage? https://openai.com/charter/

But then they just sold to Microsoft and the race began. How is that not a violation of their charter?
atemerev · over 2 years ago
AIs don't manipulate people; people manipulate people.

Every AI has a legal owner, an author, and an agenda set by its human creators. Or at least there is always somebody to blame.

This is the basis for future legislation.
andrewfromx · over 2 years ago
The movie The Wolf of Wall Street shows Jordan's "straight line persuasion" technique: keep the human on a straight line to your end goal of a sale, and when they deviate from the straight line, correct them back onto it. AI will be fantastic at this, with no moral dilemma!
throwaway4aday · over 2 years ago
I agree with the article that regulation will work in the near term, while it is still expensive to train and operate such systems, but in the long run we will need the AI equivalent of ad blockers that individuals can deploy to defend themselves against this type of exploitation. Arming everyone with their own personal advocate and security-auditor AI is the only way to defend against such exploits. Unfortunately, this opens another can of worms, similar to how some ad blockers let certain "good" ads through.
thenerdhead · over 2 years ago
Even if you had pure provenance for each response, it still would not stop large tech companies from abusing the public's desire for "not knowing".

Consider search engines as an old example. It took multiple decades to even provide a "why am I seeing this?" affordance. That only became a recent topic after a majority of people started caring about the legitimacy of the news as their values changed. Even then, it isn't helpful, nor does it provide actual context based on your search history or advertisement profile.

Why? Because keeping people in the dark makes them great advertisement targets. When people become aware of why they were targeted, it's like holding up a mirror revealing our major flaws. We don't want to identify with certain things, although an algorithm may slap the label on us. We are ashamed of such labels and want to immediately distance ourselves from them.

I believe the exact same thing will be injected into these models, to the point where you will be pushed toward consumerist practices when you're just trying to accomplish something else entirely. So in a sense, you'd be manipulated and hijacked away from your original intent. Search engines and social media do a great job at this already.

Now that there's a strategic shift across the industry, it is especially unlikely that any provenance will be provided for trained models. How exactly will you know that the model you're using isn't biased with certain leanings?

> Regulators must consider this an urgent danger.

How will regulators even know, or care, if they don't see it already happening today?
genman · over 2 years ago
Yes, on a large scale and by state-sponsored hostile actors whose goal is to destabilize our society as a whole. A very difficult time is ahead for us.
korroziya · over 2 years ago
As bad as the later seasons of Westworld are, the part where Aaron Paul's character gets a rejection phone call from a job "recruiter" was a prime warning about this sort of thing.

He asked the recruiter if there was anything he could do to be a better fit for the job. But the AI recruiter was designed to let him down easy about not being good enough, not to help him improve. "Just tell me, are you a real person?"
haolez · over 2 years ago
Maybe this could be solved by teaching philosophy and logic in our schools. No matter how smart a conversational AI is, you can still frame its argument and decide on an outcome for yourself.

But I suspect that governments won't want that. At least in Brazil, where I live, this was aggressively pushed back a few decades ago. I guess they want voters who can't think straight :)
mlatu · over 2 years ago
I read "Us" as "plural U" ... as in universities?
Zurrrrr · over 2 years ago
I don't see how to solve this. It has the same root cause that leads people to become 9/11 truthers or QAnon believers, or to vote for clearly nonsensical candidates.

It's a lack of critical thinking, and it's an epidemic.

When you have people believing whatever they want on Facebook because someone made a minimal effort to photoshop something to look like a news post, of course that's something AI can take advantage of.

But it isn't a new problem, and the solution is the same.

But how do you educate a population that is aggressive and resistant to acquiring knowledge?

I'm just here to watch the band play as the ship goes down.
ofirg · over 2 years ago
I suggest editing "will" to "is".