
US House of Representatives Hearing on the Dangers of Deepfakes and AI

98 points by MintChocoisEw almost 6 years ago

12 comments

parksy almost 6 years ago
Beyond geopolitical concerns - as the tech becomes more widespread and easier to use, to the point where teenagers start using it to punk one another, no one will believe anything they see, hear or read ever again without an extremely robust digital provenance system (and who built that system? are they to be trusted? does it rely on intrusive measures? etc.)

It's great that the developers and researchers are taking pause - this is great tech that could be used positively and creatively in so many areas - and with any new tech comes pain from misuse.

If history is anything to go by, legislation is likely to catch up slightly too late to stop the cultural impact, be overly broad, drag in a lot of relatively harmless outliers, undermine the positive use-cases, and won't deter dedicated high-stakes bad actors.
iamnothere almost 6 years ago
It feels like we're having the 1990s debate on "weapons-grade" encryption all over again. Yes, some software has the potential for abuse. But there's no reasonable way to stop it from being written and used. New technology springs up to counter whatever harms arise, society adapts, and we move on.
ipnon almost 6 years ago
Imagine someone calling your grandparents on the phone with your deepfaked voice, crying and saying you've been kidnapped and need $10,000 to come back home.
jl6 almost 6 years ago
What are the top ten “bombshell” audio or video recordings from the 21st century so far which ignited action because we believed them (probably rightly) to be true, which might these days (or soon) be plausibly denied as deep fakes, leading to inaction?
Merrill almost 6 years ago
Media that is not cryptographically signed and authenticated by inclusion of the signature in a trusted register or blockchain cannot be trusted to be authentic.
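A minimal sketch of what that could look like, assuming an Ed25519 keypair and the Python cryptography package; the REGISTER list below is only a stand-in for a real trusted log or blockchain, and the names are illustrative, not any existing service:

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical append-only register; in practice a transparency log or blockchain.
    REGISTER = []

    def sign_and_register(media_bytes: bytes, private_key: Ed25519PrivateKey) -> dict:
        """Sign the media's hash and record the result in the register."""
        digest = hashlib.sha256(media_bytes).digest()
        signature = private_key.sign(digest)
        entry = {"sha256": digest.hex(), "signature": signature.hex()}
        REGISTER.append(entry)  # inclusion in the register is what makes it auditable
        return entry

    def verify(media_bytes: bytes, entry: dict, public_key) -> bool:
        """Check that the media matches a registered, correctly signed hash."""
        digest = hashlib.sha256(media_bytes).digest()
        if digest.hex() != entry["sha256"]:
            return False
        try:
            public_key.verify(bytes.fromhex(entry["signature"]), digest)
            return True
        except InvalidSignature:
            return False

    # Example usage
    key = Ed25519PrivateKey.generate()
    clip = b"...raw video bytes..."
    entry = sign_and_register(clip, key)
    assert verify(clip, entry, key.public_key())
    assert not verify(clip + b"tampered", entry, key.public_key())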
dillonmckay almost 6 years ago
“Citron is currently writing a model statute that would address wrongful impersonation and cover some deepfake content.”

Existing libel laws would cover this, no?
dclowd9901 almost 6 years ago
This might be a good opportunity for camera manufacturers to introduce a feature to sign video with some sort of authentication. If a video is true and real, then it could be authenticated by Canon, Sony, Apple, etc. Any video without this authentication should be suspect.
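A rough sketch of the verification side of that idea, assuming each manufacturer published an Ed25519 verification key (the key bytes and manufacturer mapping below are placeholders, not real published keys):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Hypothetical registry of manufacturer verification keys (raw 32-byte Ed25519 keys).
    MANUFACTURER_KEYS = {
        "Canon": b"\x00" * 32,  # placeholder key bytes
        "Sony":  b"\x00" * 32,
        "Apple": b"\x00" * 32,
    }

    def is_authentic(video_bytes: bytes, maker: str, signature: bytes) -> bool:
        """Return True only if the video's hash verifies under the claimed maker's key."""
        key_bytes = MANUFACTURER_KEYS.get(maker)
        if key_bytes is None:
            return False  # unknown manufacturer: treat as suspect
        public_key = Ed25519PublicKey.from_public_bytes(key_bytes)
        try:
            public_key.verify(signature, hashlib.sha256(video_bytes).digest())
            return True
        except InvalidSignature:
            return False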
koboll almost 6 years ago
Photoshop is more than 30 years old. And in three decades, the impact of manipulated images on people's reputations hasn't been quite as devastating as people once feared.

Why won't it be the same this time? It's much easier to fool a human than to fool tools designed to detect manipulation, and that's limited fake photos' impact severely. As long as we have ways to detect fakes, and media responsible enough to do the cursory investigation required to verify authenticity, I don't see why manipulated video poses any greater risk than manipulated photos.
polyomino almost 6 years ago
This page has an invisible Facebook like button on top of the text
evanb almost 6 years ago
I've been thinking about this some recently. Is some scheme like the following remotely plausible?

- Have a piece of hardware in-frame that can show hashes and that is hooked to the operating cameras.

- In each frame show the hash of the previous frame (low latency would be required, as would the transmission of the full-quality original).

- Publish the original seed and the final hash (and, since the original is broadcast, the whole chain of hashes can be verified).
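A toy illustration of the chaining step, using only Python's standard library; the seed and frame bytes are placeholders, and in the real scheme each hash would be displayed in-frame by the hardware rather than computed after the fact:

    import hashlib

    def chain_frames(seed: bytes, frames: list[bytes]) -> list[str]:
        """Return the hash shown 'in frame' at each step: hash of (previous hash + frame)."""
        shown = []
        prev = hashlib.sha256(seed).hexdigest()
        for frame in frames:
            prev = hashlib.sha256(prev.encode() + frame).hexdigest()
            shown.append(prev)
        return shown

    # Anyone with the published seed, the full-quality frames and the final hash
    # can recompute the chain and check that nothing was inserted or altered.
    seed = b"published-seed"
    frames = [b"frame-0", b"frame-1", b"frame-2"]
    hashes = chain_frames(seed, frames)
    assert chain_frames(seed, frames)[-1] == hashes[-1]
    assert chain_frames(seed, [b"frame-0", b"FAKE", b"frame-2"])[-1] != hashes[-1]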
jaimex2 almost 6 years ago
We've had fake news and media for years. People wise up; even if deep fakes come about, reputable media outlets will call them out.

If you want to do something more without hurting free speech, improve existing slander and defamation laws to specifically target deep fakes.

Hold the platforms accountable the same way copyright has; it's worked wonders.
intended almost 6 years ago
This particular issue is “vexing”, to put it mildly. It's nightmare fuel, to be honest, because when you start trying to think of solutions to deep fakes and fake news, you very very quickly run up against the assumed norms of our Enlightenment-era ideals: free speech, expression, and government control.

Because it's clear that there's no private solution to what is essentially an evolutionary war between carnivore (malicious humans) and herbivore (people who don't want to be manipulated).

Right now, the Chinese total-control model is the model that's working, and while on HN we may find it abhorrent, people in high places are being forced to make pragmatic choices. For them a Chinese-styled information system will win out.

To prevent this, I think it's increasingly time to re-examine our norms on information and expression.

In general, taking a step back - human society (markets, media, reporters, books, news) is effectively a giant solution to finding out what is “true” and what is “something else”.

We are excellent at solving these problems: we build families (parents are decision makers and know what information and implications make sense), clans (what is ideal for this group of people with similar genes I can trust), businesses (contracted people with relatively aligned interests), and more. You get the drift.

So the question resolves to: how do we organize ourselves to verify information, to at least keep parity with the verification rates before the internet era?

The key difference is the mass production of information/content. The rate of content generation outstrips the ability to verify it.

This last part will not change. We will always lose as an information society if we get into a pure verification war with computer-generated information.

This leads to the first conclusion:

1) Clear measures to control the rate of generation of fake data.

To be blunt: this means jail. A punitive and clear response to stop this behavior, across borders.

Here's where a decent chunk of HN will be aghast. This is what I meant by our values coming into conflict with the necessities of the solution.

Which brings us to problem 2, an addendum to solution 1.

2) Who watches the watchers?

If the government has the power to punish people for “fake news”, how do we know that's not misused for “inconvenient news”?

Well, the weak solution that presents itself is a firewalled agency, with a guaranteed ability to be funded, in order to seek out and identify manipulation and the spread of faked information.

Hopefully, this agency won't crumble on day 1 under the inherent contradictions of its role and responsibilities.

Some structure of this sort gives us a pathway through the near future to deal with this issue.

Hopefully, it can be used to buy the time to solve the issue we are facing.

The irony of advocating for a ministry of truth in order to save the truth is not lost on me.

But unless action is taken to stop propaganda and mass-produced information creation, it is a guarantee that our old human ways of assessing information will fail.

The only option left on the table will be dystopia, or some bizarre world where nothing and anything is true.