
Stop developing this technology

263 points | by chetangoti | over 2 years ago

55 comments

jakelazaroff, over 2 years ago

Regardless of what I think about this technology in particular, I want to respond to this line from the second comment: [1]

*> on the contrary, it must be developed.*

No, it mustn't. There's not a gun to your head, forcing you to do this. You *want* to develop this technology. That's why it's happening.

Technology isn't inevitable. It's a choice we make. And we can go in circles about whether it's good or bad, but at least cop to the fact that you have agency and you're making that choice.

[1] https://github.com/iperov/DeepFaceLive/issues/41#issuecomment-1047017440
lucumo, over 2 years ago

There's a book in Dutch literature, The Assault by Harry Mulisch. It's a great read if you're interested in multi-faceted morality. It deals with guilt and responsibility when there's a chain of things that need to happen for a bad outcome.

During WWII the protagonist's parents are shot by Germans after the body of a collaborator was found in front of their house. The parents were arguing with the soldiers about something when arrested, and ended up shot during the arrest.

The collaborator was shot by the resistance in front of their neighbours' house, and the neighbours moved the body in front of the protagonist's house.

Over the years, he encounters many people involved in this event, and starts seeing things from many sides. One of the themes explored is who bears moral responsibility for his parents' death. The Germans for shooting them? His mother for arguing? The neighbours for moving the body? The resistance for shooting the collaborator? The collaborator for collaborating? All of their actions were a necessary link in the chain that led to their death.

One of the characters utters a simple and powerful way of dealing with that morality: "He who did it, did it, and not somebody else. The only useful truth is that everybody is killed by who he is killed, and not by anyone else."

It's a self-serving morality, because the character was part of the resistance group that shot the collaborator at a time when reprisals were very common. But it's also very appealing in its simplicity and clarity.

I find myself referring back to it in cases like this. In the imagined future, where this tech is used for Bad Things, who is responsible for those Bad Things? The person that did the Bad Thing? The person that developed this tech? The people that developed the tech that led up to this development?

I'm much inclined to only lay blame on the person that did the Bad Thing.
LeanderK, over 2 years ago

Again and again I am astonished that people without ethics exist, that they are confident in what they are doing, and that they appear to be completely unable to reflect upon their actions. They just don't care, and appear to be proud of it.

If this is used to scam old people out of their belongings, then you really have to question your actions and, imho, bear some responsibility. Was it worth it? Do the positive uses outweigh the negatives? They cite examples of misused technology as if that would free them of any guilt. As if previous errors would allow them to do anything, because greater mistakes were made.

You are not, of course, completely responsible for the actions others take, but if you create something you have to keep in mind that bad actors exist. You can't just close your eyes if your actions lead to a strictly worse world. Old people scammed out of their savings are real people, and it is real pain. I can't imagine the desperation and helplessness that follow. It really makes me angry how someone can ignore so much pain and not even engage in an argument about whether it's the right thing to do.
mihaic, over 2 years ago

HN seems to actively cultivate a cognitive dissonance: on the one hand producing inspirational stories of entrepreneurs changing the world, and on the other abandoning all hope that technology/market forces can be controlled in any way.

I'm thinking now this is to justify away the collective guilt of bringing harmful products into the mainstream.

It seems to come from the same origins as "crypto can't be regulated", "government can't do anything", and "it's ok because it's legal", and it always worries me to not really see any sort of moral stance being taken anymore.
dusted, over 2 years ago

I think it's a good thing. Not that it's being used for evil things, but because it should help make it obvious that you can't trust anything you see on a screen.

Using fake media to trick people into believing anything used to be a privilege reserved for nation states and the ultra rich. Now that _ANYONE_ and their cat can do it, it should follow that nobody can believe anything that's on a screen anymore (this comment included).
yipbub, over 2 years ago

I'm convinced that this idea that technology is completely neutral is wrong. It is not neutral in the face of human psychology. The human species is a different animal than the human individual; it is powerful, but does not make truly conscious decisions.

Once you let the genie out of the bottle, a wish will be made. A technology might not be inherently bad, but neither are knives, and we don't leave those lying around.

That said, it is the human species that develops technology; rarely is one human individual capable of holding back a technology.
kefka_p, over 2 years ago

Can anybody demonstrate a legitimate use of deepfake software? Has it ever been used to facilitate a socially positive or desirable outcome? While I recognize my experiences are far from definitive, I hazard most would be hard pressed to name anything positive that came out of deepfake technology.

edit: I'll take your knee-jerk DV, and any others, as an admission of an inability to speak to the positive utility of this technology.
kstenerud, over 2 years ago

This kind of technology is far too useful to repressive regimes and those who wish to do nasty things with it.

This means that the incentive to develop this technology is already there, and so it WILL be developed no matter how much people wish it wouldn't.

The only difference at this point is whether some of the implementations are developed in public view or not. If none are public, then all of them will be done in secret, and our opportunities to develop countermeasures will be severely hampered by having fewer eyes on it, and a smaller entry funnel for potential white hats.
college_physics, over 2 years ago

It's hard to take seriously the argument that "tech is neutral" when it concerns software. One could *maybe* make this argument for the hardware underneath (the chips and the cables). They are called, after all, "general purpose computing" devices, and the packets moving around are general purpose streams of bits as well [0].

But *software* is not "tech". It is the explicit expression and projection of cultural objectives and values onto a particular type of tech. You can take the exact hardware we have today and reprogram a million different worlds on it, some better, some worse.

Developers are simply the willing executioners of prevalent power structures. Deal with it. If you have a moral backbone (i.e., you don't agree with the prevalent morality as expressed in what the software industry currently does), do something about it.

[0] Of course, upon deeper examination, overall system design (e.g. how client- or server-heavy the configuration is, what kind of UI is promoted, etc.) is not neutral either. Cultural/political/economic choices creep in *everywhere*.
rvieira, over 2 years ago

Particularly ironic to use the defence of "it's not the technology that's to blame, but the person; not the machine gun, but the person".

Machine guns: an advanced piece of engineering widely known to have been developed purely as an academic exercise. No one could have expected other uses.
proto-n, over 2 years ago

Here's what I would have answered to the OP in the link:

This technology is going to be developed regardless of what we do here. Please realize that you are not advocating for it not to be developed: rather, you are advocating for it not to be developed *in the open*.
wruza, over 2 years ago

*if someone blame this technology, why not to blame guns, warships, tanks, airplanes, shotguns, machineguns before blaming this technology?*

We actually blame them, except for airplanes. Most of these were invented at times when lives had much less value, and they are of no use unless some half-minded pig attacks you or tries to undermine your defenses.

I'd like to see how this line of reasoning changes when someone releases a virus for your DNA in your backyard, made with funnyjokes/easy-create-virus-for-a-drone-app.
tgv, over 2 years ago

Remember: if you helped develop it, you're responsible for it. If it kills people, you share the blame.

So, what's the possible scenario for that outcome? Well, look at the upcoming elections in Nigeria. The BBC writes: "With an estimated 80 million Nigerians online, social media plays a huge role in national debates about politics. Our investigation uncovered different tactics used to reach more people on Twitter. Many play on divisive issues such as religious, ethnic and regional differences." ABC News writes: "At least 800 people died in post-election violence after the 2011 polls."

Adding deepfakes into this mix can trigger violent reactions. Should that happen, the creators of the deepfakes are obviously to blame, but those who enabled them, and that includes the original researchers, are responsible as well. Ignoring that is just putting your head in the sand.
Hard_Space, over 2 years ago

I think it's pretty obvious that autoencoder deepfake tech and similar technologies are going to be useful, maybe even essential, in visual effects. The perceived problem seems to be that the 'irresponsible rabble' also have access to it.

But as the barrier to entry for really convincing output goes up (768px/1024px training pipelines, and beyond), and it suddenly becomes something that one person alone can't really do *well* any more, the 'amateur' stuff is going to look far worse to people than it does now. You just have to wait for that barrier to rise, and I can tell you as a VFX insider that it is happening right now.

Deepfakes are the reverse of CGI, which *began* as an inaccessible technology and gradually became accessible, before the scale of its use in VFX reversed that again.

Now, assuming you can either afford or will pirate the right software, you could probably match any CGI VFX shot in a major blockbuster if you gave up your job and worked on it non-stop for a year of 18-hour days (assuming you'd already been through the steep learning curve of the pipeline). So it's out of reach, really, and so will the best deepfakes be.

This stuff everyone is so scared of will end up gate-kept, if only for logistical reasons (never mind any new laws that would address it), at least at the quality that's so feared in these comments.
jablala, over 2 years ago

What utterly horrible minds in that comment thread. The justification that, because other bad things happen, we can reasonably create more evil is disgusting.

They possess a complete lack of a moral compass.
AlexAltea, over 2 years ago

Instead of blocking technology, what about addressing the root problem: people need to understand concepts such as "chain of trust".

Do you trust videocall participants because you recognize their faces and voices? ...Or because a server certified by a root CA has authenticated the other participants?

The age of deepfakes has started; nobody can stop it. Improving our mental security models will become as essential as literacy.
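The chain-of-trust idea can be sketched in a few lines. The example below is a toy: it stands in for real public-key certificates (e.g. Ed25519 or RSA in TLS) with HMAC "signatures" over shared secrets, and all names and keys are invented for illustration. What carries over is the shape: a trusted root vouches for a server, which in turn vouches for the call participant, so trust never rests on recognizing a face.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    # Toy stand-in for a real public-key signature.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), sig)

# A root of trust vouches for a server key; the server vouches for
# the participant who joined the call. (All values are made up.)
root_key = b"root-secret"
server_key = b"server-secret"

server_cert = sign(root_key, b"videocall.example:" + server_key)
participant_cert = sign(server_key, b"participant:alice")

def chain_ok() -> bool:
    # Trust flows from the root: first check the server's certificate,
    # then check that the server authenticated the participant.
    if not verify(root_key, b"videocall.example:" + server_key, server_cert):
        return False
    return verify(server_key, b"participant:alice", participant_cert)

print(chain_ok())  # trust in "alice" derives from the root, not from her face
```

A deepfaked face changes nothing here: an impostor without the server-issued certificate fails verification regardless of how convincing the video looks.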
Lacerda69, over 2 years ago

An age-old discussion. Ultimately I prefer for this technology to be developed in the open, on GitHub, so we are aware of it (and able to combat nefarious uses of it).

The alternative is that it is developed in hiding and used by the most vile and evil without many being aware of it (which it most definitely will be).

As with nuclear, the cat's out of the bag, or the baby's bathwater(?) already spilled; there is no way to turn back the clock on technological innovation.
epups, over 2 years ago

The big confusion here is the idea that this particular development effort is the one creating this technology, rather than merely diffusing it.

If somehow we could get the United Nations to agree to ban deepfake development worldwide, then surely we should enter an ethical discussion about whether we should do it or not. In a world where sophisticated actors already have access to these (and much better) tools, having an open-source GitHub repo is a good thing in my view.
JasonFruit, over 2 years ago

I think it's inevitable that sub-societies will form that accept technology only in ways that preserve the humanity of our interactions. We use the term "disruptive" all the time around here, but what's being disrupted is increasingly close to the heart of human life: the ability to engage with the ideas, emotions, and opinions of another human being, and to know them as a person. If a computer can convincingly impersonate anybody, putting any words you choose in their mouth, and at the same time can generate convincing words on any given subject, it can flood electronic communication with noise that's impossible to remove. It can destroy the trust we have in any interactions not had in person, and we've allowed our society to develop in such a way that interacting only in person is no longer practical.

I know that it's expected that a man of a certain age will begin to say these things, but I think it's true now in a way that it was not true about "those young whippersnappers with their motorcars": we've taken a wrong turn, and I don't like where we're headed.
blondin, over 2 years ago

I haven't paid much attention to the deepfake community, but this one is debatable. One of their linked forums has a section for flagging uncredited videos or work.

So deepfake authors want credit for their work. That's perplexing.

What's more, this is happening while they seem to be ignoring the ethical concerns raised in the issue, citing that people can do whatever they want with the tech.
ttsalami, over 2 years ago

Quite often in science fiction media, despite the advancement of technology, I see only text and graphics (but not pictures) used as user interfaces. I wonder if this is the path we will be going down: zero trust towards images and video.
zzo38computer, over 2 years ago

Technology can have good and bad uses.

If they do not make these programs FOSS in public, then the Conspiracy will invent their own and use them for bad uses only.

Furthermore, even if a program is written, you can decide not to use it; that it is written (as FOSS) means that you can read how it works, now that someone else wrote it. You can also execute it on a computer, if that is what is desired. Also, if it is well known enough, then hopefully if someone does use it deceptively against you, you might be able to guess, or to figure it out (although it might be difficult, at least it might be possible if it is known enough).

I have no intention of using such a thing, but someone else might figure out uses for it.

(For example, maybe there are some uses in movies: if the original actor has been injured for an extended period of time (including if they are dead), or if they want to make up a picture of someone who does not exist. (Although they should avoid being deceptive; for example, include in the credits a mention of using such a thing.) Even if it is considered acceptable, though, some people will prefer to make movies without it, and such a thing should be acceptable too.)

(I think even in Star Trek, in story, in some episodes they made deepfake movies of someone. And even in Star Trek, both good and bad uses are possible. Or am I mistaken?)

Nevertheless, there may be some dangers involved, but there are potential dangers with anything; if you are careful, then you can try to avoid them, hopefully.
qwerty456127, over 2 years ago

The deepfake technology is awesome and should be available to everybody, because this is the only way everybody can finally be taught to think critically about everything they hear/see.

Can you believe a politician saying something on TV? Hell no! You should exercise logic about the whole political play he is a part of. Should you think badly of a person you find on a porn site? Absolutely not; what good could result from that in any case?

It has always been like this, but now there is a thing which can push this into common sense.
nathias, over 2 years ago

There are a lot of completely benign use cases:

1. I want to deepfake myself to have an avatar for online interaction.

2. I want to generate videos instead of filming, by pasting people into existing videos.

3. Preventing a Face/Off scenario.
amarant, over 2 years ago

Lots of comments here and in the GitHub thread claim there are no legitimate uses for this, so I thought I'd drop a legitimate use just to have an example: I saw an unrelated article today where someone had used deepfake technology to change the spoken language of an actor.

Imagine what that would mean for dubbed movies/TV if it gets good enough.

There are legit use cases, and that justifies the technology's existence. The bad actors don't make it immoral to develop a technology, IMO.
DennisP, over 2 years ago

Everything people fear about deepfakes has been true of text for the entire history of writing. We've had a brief period in human history during which you could mostly believe data you received from afar, without having to trust the source, because telling a lie with video was much harder than telling a lie with text. Now it's almost as easy to tell a lie with video, so we'll have to check sources again. Somehow I think we'll survive.
givemeethekeys, over 2 years ago

I can see a few good uses for this tech:

Face swapping + voice swapping + auto-translate = your customer support can be anyone on the planet but look and sound familiar to you. Maybe you're getting over a facial injury.

Face swapping = you no longer have to put on make-up. Just swap in your made-up face for meetings.

Face swapping + voice recordings + AI that learns = that scene in Contact where Jodie Foster talks to the alien, but he takes the form of her father to make her feel more comfortable.
kderbyma, over 2 years ago

There is no good use for this technology, nothing good that truly outweighs the bad. I agree with the sentiment. These deepfakes are not good. They cheapen everything and lower the standard for all. It's literally scammers who want this stuff, and people who want to take shortcuts. Essentially, you can morally judge a person by this technology and their approach to it.
CyborgCabbage, over 2 years ago

A lot of people in the comments here are saying that this is beneficial because it will teach people that they can't trust video or audio. But I don't see how that makes sense, because this isn't some neutered or weakened form of the technology. That's like saying shooting people makes them more aware of gun violence.
lonelyasacloud, over 2 years ago

The problem here is how our societies handle the arrival of new technology, not the technology itself.

Here it's obvious what's going to happen without robust legislation protecting the likeness of all individuals (and not just special-case celebs) from the non-consensual generation of new material.

The fact that such legislation is unlikely to happen before an awful lot of suffering has occurred is a testament both to the naive belief that everything new is good, and to legislative processes, with bandwidth from the age of sail and riddled with vested interests, being left to handle the downsides when scaling breaks the happy-path assumptions.

Focusing efforts on the legislative process seems likely to be more productive than point solutions that rely on techies and scientists not developing tech that can be used for nefarious purposes.
dmingod666, over 2 years ago

How much are you going to regulate and stop? Game engines currently give photorealistic realtime content. Stop Unreal and MetaHumans too? What about v6 of Unreal?

What if this same repo were owned by Nvidia, and it had some commercial interest in the product and were ready to litigate: would everyone still pile up on it?

Is it not, on some level, disdain that it's just run by a bunch of guys who can be pushed around without much consequence?

Would we have a thread saying "screw it, shut down ChatGPT, it doesn't fit my moral world-view"? Why is that absurd, but this is fair discussion?
bondarchuk, over 2 years ago

At the end of the day, it's just pixels on a screen. It's not fair to compare it to machine guns or atomic bombs, which cause real physical harm.

You might argue that the technology to make pixels on a screen resembling real humans is bad, but then you have to actually make that argument ("some people got scammed" is indeed such an argument, albeit a pretty weak one), not just shift it to "this is technology, machine guns are technology, machine guns are bad".
55555, over 2 years ago

I don't think any specific technology is good or bad, but I do think you could attempt to quantify its impact better by measuring how often it's used for "good" versus "bad".

It's not going to end the argument, though, not least because you'll then have to assign a relative value to abstract things like "personal freedom" or "artistic expression".

Even if you could quantify the losses to crypto scams, you can't put an objective value on some of its more ideological benefits.
gloosx, over 2 years ago

Why is it such a big concern? As far as I know, deepfakes can be recognised with solid confidence using another neural-net model. If the threat is real and rising, it's only a matter of time before detection is built into every kind of real-time communication app. Sure, it must be developed, and it's a good thing it is done in public, so the defenders can prepare their defences on open-source material, no?
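As a minimal sketch of the detector idea: real deepfake detectors are deep networks trained on artifacts such as blending seams and frequency statistics, but the train-then-classify loop can be illustrated with a toy linear classifier on two made-up feature values. Everything below, the features, the class separation, the hyperparameters, is invented purely for illustration.

```python
import random

random.seed(0)

# Pretend features: imagine feature values near 0.2 for "real" frames
# (label 0) and near 0.8 for "fake" frames (label 1).
def sample(label):
    base = 0.2 if label == 0 else 0.8
    return [base + random.uniform(-0.15, 0.15) for _ in range(2)], label

data = [sample(i % 2) for i in range(200)]

# Minimal perceptron-style detector: learn weights that separate the
# two clusters, then classify.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in data:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# Accuracy on the (separable) toy data; typically near 1.0 here.
acc = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
          for x, y in data) / len(data)
print(round(acc, 2))
```

The caveat the thread itself raises applies: detection and generation are an arms race, and a detector trained on today's artifacts says nothing about tomorrow's generators.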
winrid, over 2 years ago

On a side note, is this really all in Python? I imagine it's offloading some stuff to the GPU, right? Maybe the GPU instructions are also stored in Python??
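For what it's worth, the usual answer for projects like this: the Python code is orchestration only, while the heavy kernels are precompiled C++/CUDA shipped inside the framework. A minimal sketch, using PyTorch purely as an example (the actual project may use a different framework), with a graceful fallback when none is installed:

```python
def matmul_demo():
    # Python here only picks a device and asks the framework to run a
    # matrix multiply. The multiply itself executes in a precompiled
    # kernel (cuBLAS on GPU, a C++ kernel on CPU); no GPU instructions
    # are "stored in Python".
    try:
        import torch
    except ImportError:
        return None  # framework not installed; nothing to offload
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(4, 4, device=device)
    y = x @ x  # dispatched to a compiled kernel, not interpreted Python
    return tuple(y.shape)

print(matmul_demo())
```

The same pattern holds for NumPy, ONNX Runtime, and TensorFlow: Python describes the computation, and compiled code executes it.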
halicarnassus, over 2 years ago

Regarding the downvotes and the second comment on the linked page:

I guess the nonsensical argument "it's not the <technology> that <does the bad thing>, but the person using the <technology>" will never die out.

If you don't have <the technology>, it's much harder to <do the bad thing>: it has to be done hands-on, from a very close distance, with much higher risk for the perpetrator.
deniscepko2, over 2 years ago

I wonder if some form of regulation is coming to tech. We do not allow people to freely distribute heroin, or practice slavery, or other sorts of horrible things.
caporaltito, over 2 years ago

This reminds me of the whole crowd of artists calling for a ban on AI-generated art because "it's stealing". The change will happen whether you want it or not. So those guys had better raise the prices of their unique, hand-manufactured, hard-worked pieces and leave the low-quality, generic, industrially generated ones to the AI. Embrace the change, as they say.
WFHRenaissance, over 2 years ago
You will never stop the march of technology, especially when it requires so few developers to create it. It will emerge.
baal80spam, over 2 years ago

Technology isn't inherently good or evil; it's neutral.

If Company A / Country A / Person A won't do it, then Company B / Country B / Person B will do it, and use it to bankrupt you / attack and possibly kill you / take advantage of you.

It's that simple.
jmnicolas, over 2 years ago

You can't stop these kinds of double-edged swords from being developed, but what I'd like is for people to get together and dev a counter to it, something like DetectDeepFace.

For me, everything (image, text, sound, etc.) that comes from a computer is suspect nowadays.
spaceman_2020, over 2 years ago

It's clear that we're moving to a post-truth society. Visuals can be deepfaked, voices AI-generated.

Could crypto unironically be the way out of this mess? If a document isn't signed by a wallet associated with you, it should not be considered authentic.
EVa5I7bHFq9mnYK, over 2 years ago

Has any deep technology benefited ordinary people so far? It's mostly used by totalitarian governments, by big tech to fine-tune ads, by SEO spammers, etc. Can't wait for the web to fill up with deep nonsense and "art".
Sevii, over 2 years ago
The unfortunate part is that the alternative is for only government entities to have this technology. If open source can make a credible attempt at creating live deep fake technology, the government already has a team working on it.
azubinski, over 2 years ago

"Official discord channel: English / Russian. Chinese discussion forum: free software tutorials, models, face data."

LOL

BTW, it's not a "technology" in any classical sense of the word.

This is a funny and technologically useless rattle that can be used as a Chinese-made Kalashnikov assault rifle.
dtx1, over 2 years ago

This is akin to developing bioweapons. Can it be done? Yes. Should it be done? Absolutely not.

> "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."
ncr100, over 2 years ago

The sci-fi horror fantasy of engineers being assaulted for working on future-dangerous technology seems a predictable outcome of this kind of rhetoric, and soon.
SergeAx, over 2 years ago

Let me see how crypto gets banned first. Same case, even worse: it is not just used to scam people out of their belongings, but to buy and sell illegal things too.
return_to_monke, over 2 years ago

If this were a discussion on generative AI, I'd agree that the cat is out of the bag and there's no stopping it now.

BUT.

What are people using deepfakes for, in good faith? Can someone provide one example that isn't malicious?

The best I could imagine is maybe amateur filmmakers deepfaking their faces onto existing footage to cut costs, but it doesn't seem that this outweighs the drawbacks.
sva_, over 2 years ago

It's crazy that we experienced this, what will in hindsight be a very small time window, during which you could more or less trust digital media.
renewiltord, over 2 years ago

All the communities that are against this are also against everything else.

They've cried wolf enough times. Everything is dangerous and everything is a crisis.

Consequently, I will ignore their warnings about this as well. It'll be okay. Tomorrow the community will forget about it. Tomorrow the crisis will be that some one-person blog is not GDPR compliant.
arein3, over 2 years ago

This doesn't produce any physical harm, so I see no problem in developing it. It will not spiral out of control.

There will be a break-in period, but the conclusion will be: check the source of the information.

Making this easily available will make the break-in period easier.
ge96, over 2 years ago

Hmm, can't close this issue.
qthrowayq0909, over 2 years ago
Tangent, but I think things like that will spell the end of remote working and remote interviewing.
badrabbit, over 2 years ago
All the negative responses are slippery slope fallacies.