
It’s Game over on Vocal Deepfakes

94 points · by burlesona · about 2 years ago

25 comments

geuis · about 2 years ago

Once the generation times get to real-time, I dunno what's going to happen. I follow the voice acting community a lot, and this is a big existential-threat level of worry in that community.

But I also see some positives for the narrative voice field, though at the expense of actual actors.

The latest sequel to a favorite audiobook series has the professional narrator pronouncing character names and town names entirely differently than in the previous 7 books.

In another book the narrator completely changed the character voices in the sequel compared to the first.

The positive for listeners is that eventually we can guarantee voices are completely consistent between narrations. An editor will soon be able to describe a character's emotions and nuance how the AI performs certain scenes.

It's really sad, but I think human audio storytelling is coming to an end quite rapidly.
blueridge · about 2 years ago

I think he's right about this—things are going to get weird and ugly, and people aren't prepared for what's coming. Here's my suggestion for safeguarding your sanity in the years ahead: find the people and the blogs you love, embrace RSS, block or avoid everything else, read old books, stop listening to podcasts, revert to email as the primary channel for occasional "social" correspondence, abandon all side projects that require additional screen time, go outside.

This is the way.
DethNinja · about 2 years ago

Those fake AI voices are perfect tools for phishing campaigns.

I wonder if ChatGPT combined with a voice generator can mount successful phishing campaigns.

It looks like the cyberpunk future we never asked for is already here.
101008 · about 2 years ago

I tried ElevenLabs and it is truly amazing. You don't need much audio to train it, and the final result can be incredible. I shared some of the snippets with friends and relatives, and after the initial scare (same as John) we agreed that the outcome wouldn't be different than a few years ago...

We've had impersonators for decades, so what has stopped political parties from hiring an impersonator to create fake audio? The AI in this case is going to make it more accessible, but for political parties you don't want thousands of fake audio clips (they would lose credibility); you need only one.

Also, photoshopped photos would have a similar effect, and that technique has been available for years as well.
Rastonbury · about 2 years ago

I've used a tool to make myself sound like a girl, and it is freakily realistic and uncanny hearing what it produces. Chinese love-scam rings are going to be all over this within a year, coupled with image/video generation.
smegsicle · about 2 years ago

> I don't think the general population is prepared for this

I don't know how many videos it will take of Trump and Biden shit-talking each other on Xbox Live, but several on YouTube are near 1M views.
lazyeye · about 2 years ago

Would a Roger Stone or Steve Bannon type fake a Russian collusion dossier and then run it through the media for literally years?

What are the "types" that would do something like this?

https://www.bbc.com/news/world-us-canada-59168626

https://www.theguardian.com/us-news/2022/oct/11/russian-analyst-igor-danchenko-steele-dossier-sources

https://www.nytimes.com/2022/10/18/us/politics/igor-danchenko-russia-acquittal-trump.html

https://www.npr.org/2021/11/12/1055030223/the-fbi-arrests-a-key-contributor-to-efforts-trying-to-link-trump-with-russia
MonkeyMalarky · about 2 years ago

So who's going to build the sex phone line where you can talk to celebrities like Marilyn Monroe?
lapitopi · about 2 years ago

It is scary for sure. We were heading towards a brave new world even before ChatGPT, where what's real was no longer to be taken for granted and you couldn't trust your eyes and ears. ChatGPT has just hastened things. I'm sure a new market will emerge for authentication services that verify the speaker or the person in the video, similar to how we had the Twitter verified checkmark.

On the flip side, I can't wait for someone to build a product where I record a few conversations with my parents while they are alive, and then when they're gone, through ChatGPT + vocal fakes, I can have a parent forever. Sure, I'll know it's not the real thing, but when you really miss them, it certainly can make the pain a little less.
smitty1e · about 2 years ago

Meh. We achieve a state of complete systemic disbelief, and alternate trust mechanisms develop.

[Clip of Abe Lincoln in Ray-Bans saying "Don't believe everything you see and hear on the Internet" goes here]

Even the stuff that wasn't deepfaked was mostly bollocks anyway.
jareds · about 2 years ago

Was anyone else not particularly impressed by this? I haven't listened to much Steve Jobs material, but it sounded just stilted enough for me to think something was up. I'm not sure if that's because I came into it with a skeptical mindset due to the article and the context around it. It may also be due to the fact that I have been using screen-reading software for about 30 years. The only people I've heard more than synthetic speech may be my close family, and I'm not sure about that. Is there anywhere that offers a test where you have to determine what is generated and what is not, with random clips lacking background info?
2bitencryption · about 2 years ago

On the flip side, if these deepfakes are so simple to generate, they will surely bombard us constantly throughout the day: radio, TV commercials, YouTube videos, podcasts.

And people quickly become desensitized to that kind of thing. It could be the case that, after some initial "ramping up" period, these deepfakes are so cheap and abundant that no one falls for them.

When was the last time you got a call from a number you didn't recognize and actually picked it up? If you're like me, probably not in a long time, because you're aware it's almost certainly some scam/robocall.
MBCook · about 2 years ago

If you know Casey Liss from ATP or his other podcasts: James Thomson made a deepfake of him, with permission, and it is extremely good.

https://mastodon.social/@jamesthomson/110062947060928918

I had no idea things had gone this far.
stephc_int13 · about 2 years ago

It is difficult to predict what will happen, but nefarious use of something this powerful is a given.

Generated images and audio clips are still pretty easy to detect. We're not there yet, but close.

What I worry about is not the obvious usage and risks, but the second- and third-order effects, the ones we can't predict.

It will be interesting.
slake · about 2 years ago

I'm worried somebody will call my parents and tell them to transfer money to a weird bank account using my deepfaked voice.
gnicholas · about 2 years ago

Does anyone have suggestions on verbal ways to authenticate family members, for example, on the phone?
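The low-tech answer to the question above is a safe word agreed in person ahead of time. A slightly more robust variant (a hypothetical sketch; all function and variable names here are mine, not from any real product) is a challenge-response over a shared passphrase, so the passphrase itself is never spoken on a possibly recorded line:

```python
import hashlib
import hmac
import secrets


def make_challenge() -> str:
    """Callee reads a short random challenge aloud over the phone."""
    return secrets.token_hex(4)  # e.g. "9f3ab2c1"


def spoken_response(passphrase: str, challenge: str) -> str:
    """Caller derives a short code from the shared passphrase and the challenge."""
    digest = hmac.new(passphrase.encode(), challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:6]  # six hex characters are easy to read aloud


def verify(passphrase: str, challenge: str, code: str) -> bool:
    """Callee checks the code; the passphrase is never said on the call."""
    return hmac.compare_digest(spoken_response(passphrase, challenge), code)
```

In practice few families will run HMAC on a phone call, so a memorized safe word, or a callback to a known number, covers most of the same ground; the sketch just shows why a replayed recording of a past call wouldn't pass, since each challenge is fresh.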
georgeoliver · about 2 years ago

What's the feasibility of signed ads/speech verified on the device (TV, phone, etc.)?
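The core of the signed-speech idea above is a manifest that binds a digest of the audio to speaker metadata, which the device recomputes and compares. A minimal sketch (hypothetical names; a real system, such as the C2PA provenance standard, would use an asymmetric signature chained to a publisher certificate rather than a bare digest):

```python
import hashlib
import json


def make_manifest(clip: bytes, speaker: str) -> str:
    """Publisher side: bind a digest of the audio bytes to speaker metadata."""
    return json.dumps({
        "speaker": speaker,
        "sha256": hashlib.sha256(clip).hexdigest(),
    })


def device_verify(clip: bytes, manifest_json: str) -> bool:
    """Device side: recompute the digest and compare with the published one."""
    manifest = json.loads(manifest_json)
    return hashlib.sha256(clip).hexdigest() == manifest["sha256"]
```

The hard part is not the hashing but distribution: the manifest must reach the device over a channel the forger can't tamper with, which is exactly what the certificate chain in a real signing scheme provides.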
somsak2 · about 2 years ago

This feels so over-the-top. "A recording of Joe Biden forgetting his own name or what year it is, or Kamala Harris claiming to be running an abortion clinic?" Give me a break; you could already have done this with a high-quality voice impersonator, and cost is no real concern at that level anyway.

I think this is only really a risk at the low end: people scamming others with fake references from quasi-celebrities. Not great, but overall it feels like a pretty minor concern. We already allow the carriers to scam everyone in the US by letting anyone call your cell phone and try to tell you you're behind on your insurance, or that your computer has a virus. There are plenty of scams out there; if we really thought this was a problem, we'd care more about the existing ones.
puglr · about 2 years ago

When this topic has come up with family and friends, folks often say that they aren't worried (yet) because while a human can be fooled, these fakes can't yet fool readily available forensic tools, and perhaps never will.

I can't speak to the veracity of that claim, but as the post points out, the past several years have shown us that it doesn't matter, not in the least.

The author goes on to say how it feels inevitable that we'll see a Bannon or Stone type use this technology to create fake scandals.

I'm more worried about the grassroots efforts. Crowdsourced conspiracies like QAnon. Now they'll have more capable tools to radicalize people.
TylerE · about 2 years ago

I was with him right up until the last paragraph. The pussy-grabber tape was VIDEO. You can lip-read it.
soneil · about 2 years ago

Fair play to him for calling out his own "claim chowder".
Rodeoclash · about 2 years ago

Everyone's worried about people using this to generate things that people haven't said (the "grab 'em" comment by Trump), but the inverse is that you can claim any REAL content about someone was AI-generated by someone else.
antibasilisk · about 2 years ago

I've already been seeing issues with AI art being used to fake Trump's arrest; people are genuinely confused, and it even took me a second to realise I was being had.
rvz · about 2 years ago

> It's all fun and games in these demos, but this is inevitably going to be put to use by ratfuckers to create fake scandals in political campaigns... And it feels inevitable that a Roger Stone or Steve Bannon type will use this technology to commission, say, a recording of Joe Biden forgetting his own name...

The road to hell is paved with good intentions, and all politicians are going to use this for their political campaigns, not just one specific side. Both.

Let's not veer off the wider point and believe that only one side will use it for bad things. All politicians are liars, and no matter what side they are on, if it benefits their agenda to influence the electorate to gain power, they will use it, even if that means spreading lies or false and misleading claims.
surrealize · about 2 years ago

This is one of those things where Balaji is ahead of the curve: the way to guarantee metadata (e.g. who the speaker is) is to do it cryptographically, on-chain.

https://twitter.com/balajis/status/1583495595737481217