To those saying "Oh I can tell it's fake, obviously!", consider: will your parents/grandparents?<p>And will you still be able to tell in five years, when this tech has had 20 new iterations, each addressing the very tells that right now let you notice it's fake?<p>I know, I know, photoshop and fake pictures have always been around. But now, everyone can do it in 30 seconds. That changes things.
We're only about six months into AI tools being openly available, and this is what they're producing - and there is still a basic skill bar for using them.<p>What's interesting about the soldier image is that it has the same emotional impact at first glance as if it were real, before your critical faculties engage to sort it out, which means its effect has already occurred. If your feeds are full of indignation and outrage, it doesn't matter whether it's real or not: you're going to have a physical and uncritical association with the sensations it creates. It's a straight Pavlovian response. Your perceptions literally come through a feed.<p>Maybe we recognize how thoroughly propagandized we already are, and these examples are a merciful uncanny valley that lets us step back and really question the shit we're letting pile up in our psyches. Even as a self-check, do a word-association exercise and then ask how closely your associations reflect objective, or even an ideal, reality. I do these occasionally to test the quality of my beliefs; the results are reliably poor. Apprehending anything close to reality requires constant vigilance and asking how you know the things you know - and we're just at t=6mos. What does t=36 look like?
Very interesting. But the most concerning thing about propaganda is more basic than that: people don't know, or are in denial, about its use against them and, more generally, in all wars.<p>Wars are strategic actions by nations. But humans generally will not engage in mass killing for strategic reasons; they need moral reasons. So propaganda is necessary for warfare in order to frame war in moral terms. The enemy or enemy leaders are depicted as evil or inhuman, or their most despicable acts are emphasized to create a sense of 'morally justified' hatred, or the idea that they must be stopped or punished at all costs - even if that means killing millions of people or destroying a country.<p>Actually, if it serves their interests, and especially if their neighbors are not protesting, humans will go along with pretty much anything. But you do need to at least give them a cover story.<p>Technology should theoretically be able to help reduce the influence of propaganda through things like new types of decentralized news distribution.
Off topic: I scrolled a little past the thread and almost got caught up in Twitter's hate-bait algorithm. Sexual imagery I didn't ask for. This is why I need to remember never to follow links to Twitter, no matter what, ever.<p>One of the hazards of browsing HN: old darlings like Twitter will be tolerated even after they become NSFW by default.
The source in the last tweet says it's AI-generated. I'm also not completely sure the writer was trying to pass the image off as legit; they don't directly say so.<p><a href="https://uproar-crowned-964778.appspot.com/2023/04/24/my-dlya-komandirov-kak-spichki-odna-sgorela-vzyal-druguyu/" rel="nofollow">https://uproar-crowned-964778.appspot.com/2023/04/24/my-dlya...</a>
The source[0] marks the image as such: "Иллюстрация на обложке: изображение сгенерировано искусственным интеллектом" ("Cover illustration: image generated by artificial intelligence"), which I think is probably the best "tell"<p>0: <a href="https://uproar-crowned-964778.appspot.com/2023/04/24/my-dlya-komandirov-kak-spichki-odna-sgorela-vzyal-druguyu/" rel="nofollow">https://uproar-crowned-964778.appspot.com/2023/04/24/my-dlya...</a> via <a href="https://twitter.com/ChrisO_wiki/status/1653118082766852097?s=20" rel="nofollow">https://twitter.com/ChrisO_wiki/status/1653118082766852097?s...</a>
Amnesty International are using AI-generated images also, though they are at least honest enough to admit it: <a href="https://gizmodo.com/ai-midjourney-image-art-amnesty-international-colombia-1850393124" rel="nofollow">https://gizmodo.com/ai-midjourney-image-art-amnesty-internat...</a><p>Archive of the posts which have since been deleted: <a href="https://archive.md/20230501183435/https://twitter.com/Amnesty_Norge/status/1651879572944691201" rel="nofollow">https://archive.md/20230501183435/https://twitter.com/Amnest...</a><p>I don’t support it.
I have little fear about AI generating propaganda. It's cheap to write a crappy article and fake a photo or two - or choose a real photo but twist the story around it.<p>What I worry about is "artificial / generated consent". You read some upsetting story, and your skeptical brain holds it at arm's length. Then you read commentary in a forum you trust and see message after message of thoughtfully worded support for some position. I think reading gobs of "informed real people" commentary is far more persuasive - and subtly so - than reading an article from someone you KNOW is pushing a specific perspective.<p>I like to believe I'm an independent thinker, but a big part of my process is to seek out many different points of view and judge which feel well supported and well reasoned. Consensus DOES play a role in my judgment forming. If consensus is easily faked, yikes.
Political ads serve no meaningful purpose at this point; it would be better to ban all of them. Most do more harm than good by presenting misleading information or by using fear, hyperbole, etc. to provoke an emotional reaction. When I receive any political advertisements in the mail, I don't look at them; they go straight into the recycling bin - even for issues and politicians I would vote for.
If it's fake (and the strange head bandage immediately raised some alerts), I find it both exciting and worrying that AI can now create credible hands.
The GOP has already started using AI to generate propaganda: <a href="https://www.vice.com/en/article/bvjz9a/republican-ai-ad-gop-beat-biden" rel="nofollow">https://www.vice.com/en/article/bvjz9a/republican-ai-ad-gop-...</a>
There are all kinds of propaganda: white, grey, and black. The use of deepfakes probably belongs to the black category. But what I've always found more intriguing is white propaganda; it provides interesting historical insights into major events in world politics.<p>So, could someone with access to DALL-E 2 or similar feed it the public archives of propaganda posters [1], generate a few samples, and deliver them for discussion here at Hacker News? It would be interesting to see what kind of white propaganda an AI and its users would generate!<p>[1] <a href="https://www.reddit.com/r/PropagandaPosters/" rel="nofollow">https://www.reddit.com/r/PropagandaPosters/</a>
Maybe I'm chronically online, but I can instantly tell whether something was AI-generated.
I've even had AI-generated LinkedIn profiles message me, I guess for scams or hacking or something.
I have a theory that propagandists' jobs are safer than they would seem.<p>Thing is, when you're arguing online with some political consultant who four years ago was convinced Joe Biden was senile (because they supported Amy Klobuchar), who is now just as convinced he's sharp as a razor and ready for another term, are they really trying to convince you with their <i>arguments</i>?<p>No, I don't think so. I think the whole point is what they're doing to themselves. Look at me, how loyal I am. I won't stop at anything to win. I don't care if my past words indict me, that was then, this is now and everything is at stake! Winners never quit! Never mind what you <i>think</i>, don't you <i>feel</i> my determination?<p>So it's not about the quality of the words that come out of their mouth. GPT4 can surely produce much better words, but that's not what's supposed to convince. It's the <i>example</i>, the example of fanatical organizational-personal loyalty, that's supposed to convince. GPT4 would need to lie and convince people it's a real person - or rather, many real people - in order for that sort of thing to work. But political consultants are already doing that at scale. It's probably not any better at lying, and even if it is, what they have is good enough.<p>Substitute Biden and Klobuchar for any other politicians, obviously. It's not a left/right thing.
We need AI regulation quickly. One interesting requirement would be for AI-generated articles to carry a validation mark identifying them as AI, and AI platforms should be required to prove that a given piece of content came from their AI.<p>Maybe a watermark?
Is the accompanying story untrue, or is this just a case of someone using an AI tool to create a generic stock image for an otherwise-truthful article, and then the provenance of the image being lost?
Nobody should be trusting anything they see online anyway. Trump and his whole cohort were real people spewing real lies all the time, so is it really that different if it's a Midjourney photo with some GPT text? We've been in an age of needing to check your facts against multiple reliable sources for some time, and I think this might actually accelerate some important developments that will help us not just combat AI but also lying leaders. Information needs to be cryptographically signed by the device/creator/journalist/organization, and reputation needs to be tracked.<p>Tucker Carlson's texts have exposed that he was lying throughout the Trump presidency. The number-one news anchor on the most-watched channel in the US was lying to the public and then privately texting with friends and colleagues about it. He should be shunned from public media for good now.<p>The internet has been full of crap for years, so my prediction is that AI-generated content has a burst of utility early on for bad actors, but it will quickly be normalized and cast aside like most of the garbage we wade through today.
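The signing idea above can be sketched in a few lines. This is a toy illustration, not a real provenance scheme: the key, function names, and content are all hypothetical, and it uses a shared-secret HMAC for simplicity where a real system (e.g. C2PA-style provenance) would use asymmetric signatures tied to a publisher's certificate.

```python
import hashlib
import hmac

# Hypothetical publisher key. In a real provenance system this would be
# an asymmetric keypair: the publisher signs with a private key and
# anyone can verify with the public one.
PUBLISHER_KEY = b"example-newsroom-signing-key"

def sign_content(content: bytes) -> str:
    """Return a hex signature binding the publisher to this exact content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check the content is unmodified since the publisher signed it."""
    expected = sign_content(content)
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(expected, signature)

photo = b"...image bytes from the camera or editor..."
sig = sign_content(photo)
assert verify_content(photo, sig)            # untouched content verifies
assert not verify_content(photo + b"x", sig) # any tampering breaks it
```

The signature only proves who published the bytes and that they weren't altered; the reputation-tracking half of the proposal (deciding whether to trust that publisher) is a separate, harder problem.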
No amount of propaganda can defeat what's been implanted by teachers in school. It takes almost zero technology to require a child to believe a lie as a condition of their freedom. Don't bother with AI; just put it on a test. Belief in anything beyond that point is controlled by cognitive dissonance.
AI will bring out the "devil" in our collective consciousness. Midjourney is filled with dark images. People that gravitate to the dark side found an easy way to evoke it and put it out into the world.
Political news will follow.