The list of people who signed this is amusing. I am supposed to listen to these experts about using AI ethically?<p>Small selection of people who condemn this "unethical use of AI".<p>- Amazon Alexa (internet connected microphone in your house to sell you things)<p>- Google (collect your data and sell targeted ads to the highest bidder)<p>- Facebook (run experiments to make people more engaged/outraged on the website)<p>- HrFlow (automate biased hiring with the power of AI)<p>- Microsoft (collect data from users, launder a ton of GPL and MIT code without adhering to their license with the power of AI)
It’s pretty silly to suggest running a bot that posts to 4chan “undermines the responsible practice of AI science.”<p>I don’t think the “AI community”—people with access to lots of GPUs—should also get to be the thought police. Too fragile.
As much as 4chan is a cesspool, I fail to see exactly what these guys are expecting out of the whole "AI" thing.<p>Given the recent trends, I suspect they are the gatekeepers, masters of their own Babylon gardens and protectors of the plebes. Just look at the wording and the high-pedigree names attached to this.<p>Are we supposed to trust these people and eat whatever cake they throw us from their balconies?<p>Somehow this makes me more sympathetic to 4chan's knuckleheads.
> However, Kilcher’s decision to deploy this bot does not meet any test of reasonableness. His actions deserve censure. He undermines the responsible practice of AI science.<p>What "test of reasonableness" are we talking about?<p>Without any concrete test to look at, doesn't this boil down to "we don't like him and he should be cancelled"?
Huh, I watched GPT-4chan creator’s video and the whole time I was thinking “this is really in the spirit of 4chan, I’m sure they loved it”. I imagine the actual 4chan users who would be upset about it are few and far between.<p>That seems like the only real “test of reasonableness”: would the person be upset to find out they were talking to an AI?<p>Beyond everything else, the entire setup made it clear it wasn’t a real person. This led the site on a hunt to figure it out.<p>I would imagine few of the signatories have even visited the site, let alone spent time there to get a sense of the community.
Language model researchers, like social media developers before them, have built this tech under the presumption that it will be applied only with positive intent.<p>GPT-4chan is about the least worst thing somebody could have done with it.
What exactly do they hope to get out of this? Anyone could have told you this would eventually happen, with worse to come. The AI box is open; there is no closing it again. Welcome to the future of the internet.
> a community full of racist, sexist, xenophobic, and hateful speech<p>Rookie mistake: Taking that sea of piss seriously.<p>I'm not even sure if /pol/ is not already AI-generated nonsense.
Most of the people signing that are building systems that they <i>know</i> will be used to manipulate people into, for example, buying stuff that they don't need. Systems that they know will be used to make half-assed decisions about people's lives (even if they're planning to sit around sanctimoniously whining about it).<p>... but trolling 4chan, that's a bridge too far. Everybody knows that 4chan users are there to escape from trolling.
It is worth pointing out that GPT-4chan was initially publicly released on Hugging Face, but was shut down shortly afterwards:<p><a href="https://huggingface.co/ykilcher/gpt-4chan" rel="nofollow">https://huggingface.co/ykilcher/gpt-4chan</a><p>For those who want to experiment with it, it is still possible to get the model by torrent:<p><a href="https://archive.org/details/gpt4chan_model" rel="nofollow">https://archive.org/details/gpt4chan_model</a><p><a href="https://archive.org/details/gpt4chan_model_float16" rel="nofollow">https://archive.org/details/gpt4chan_model_float16</a><p>There is also a direct download mirror here:<p><a href="https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/" rel="nofollow">https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/</a><p><a href="https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/" rel="nofollow">https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/</a><p>This is a basic outline of how to run it. You need lots of RAM (24GB+) or a large GPU (12GB+):<p><a href="https://pastebin.com/g7rSaCfw" rel="nofollow">https://pastebin.com/g7rSaCfw</a><p>The claims that this model is a "hate speech generator" are unsubstantiated. It encodes knowledge about a wide range of topics, and most of its output is neutral. GPT-3, with an appropriate pre-prompt, becomes orders of magnitude more dangerous than this.
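For context on the "basic outline" above, here is a rough sketch of what loading the weights with Hugging Face transformers could look like. This is hedged: GPT-4chan is a fine-tune of GPT-J-6B, so the standard GPT-J classes should accept it, but the local directory name is just the torrent layout, and the `-----` / `--- <post id>` delimiter format is my assumption about how the training data was serialized.

```python
def format_prompt(post: str, post_id: int = 12345) -> str:
    """Wrap a post in the thread-delimiter format the model was presumably
    trained on: '-----' separates threads, '--- <id>' prefixes each post.
    (Assumption -- check the dataset card for the exact format.)"""
    return f"-----\n--- {post_id}\n{post}\n"

def generate_reply(post: str, model_dir: str = "./gpt4chan_model_float16") -> str:
    """Heavy: only call this with the downloaded weights and a 12GB+ GPU
    (or 24GB+ of RAM on CPU)."""
    import torch
    from transformers import AutoTokenizer, GPTJForCausalLM

    # GPT-4chan ships no tokenizer of its own; the base GPT-J one is assumed.
    tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = GPTJForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16)

    inputs = tok(format_prompt(post), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
    return tok.decode(out[0], skip_special_tokens=True)
```

The prompt formatting is the part that actually matters for output quality: a fine-tune like this degrades badly when prompted outside the serialization format it was trained on.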
Political correctness is evil. Taking political incorrectness seriously is a sickness which should be treated with all means, including overwhelming ad absurdum. IMHO.
Yeah, I don't think GPT-4chan is any more problematic than contributing to AI technology for mass-surveillance and adtech.<p>The "AI community" can fuck off.
I wonder if GPT-4chan could be used for moderation.<p>/pol/ is a containment board and taking those topics to other boards on 4chan is generally not well received. Could you use GPT-4chan to rate how "/pol/-like" a reply is and automatically flag it for moderation if it passes a threshold?
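One hypothetical way to score "/pol/-likeness" would be a log-likelihood ratio: compare the average per-token log-probability of a reply under the fine-tuned model versus the neutral base model (plain GPT-J). Text the fine-tune finds much more probable than the base model does is, by construction, closer to its training distribution. This is a sketch with made-up function names and an arbitrary threshold, not a tested moderation pipeline:

```python
# Hypothetical moderation heuristic: a reply the /pol/ fine-tune assigns
# much higher average log-likelihood than the base model gets flagged.

def pol_likeness(avg_logprob_finetuned: float, avg_logprob_base: float) -> float:
    """Per-token log-likelihood ratio; higher means more /pol/-like."""
    return avg_logprob_finetuned - avg_logprob_base

def should_flag(avg_logprob_finetuned: float,
                avg_logprob_base: float,
                threshold: float = 1.0) -> bool:
    """Flag for human moderation when the ratio exceeds a tuned threshold."""
    return pol_likeness(avg_logprob_finetuned, avg_logprob_base) > threshold
```

For example, with dummy averages, `should_flag(-2.1, -4.5)` flags (the fine-tune finds the text far more probable), while `should_flag(-3.0, -3.2)` does not. The threshold would need calibration against real moderated/unmoderated replies, and false positives on heated-but-legitimate posts are the obvious failure mode.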
Is it 2004 again? What's next, a news report with scary music showing how hackers from 4chan are exploding vans all around the nation using GPT-4?[1].<p>[1] <a href="https://youtu.be/128IR21ZQa0?t=168" rel="nofollow">https://youtu.be/128IR21ZQa0?t=168</a>
Interesting that this is what drew their condemnation, and not, for example, certain algorithms ranking loan applicants, or surveillance programs.<p>It almost feels like they are either nearsighted, or that they saw someone they perceive to be an easy target to attack for some cheap publicity.
Wow, do they just assume that it's horrible because it's from 4chan?<p>That's incredibly ignorant and it's actually propagating stereotypes.<p>From what I know, anyone can comment there, so why automatically catalog them all as horrible?
I bet there are multiple GPT-type AI accounts posting to Hacker News currently, with creators who hope the eventual "reveal" will be "impressive." Curious how they are reacting to this.
It is important to note that AI research is mostly ethically neutral. It only becomes an ethical issue once someone actually decides to deploy it in real life. At that point, the ethical responsibility is on the person who decided to do it. Engineers who worked on a self-driving car that killed someone did their best; the responsibility is on the person who made the decision "It's good enough, let's roll it out on the streets." Well, it wasn't good enough, but you decided to do it anyway, and now the blame gets put on the engineers.
That AI is much less harmful than the posters who created the original posts.<p>There is no anonymity if you connect to 4chan using a Silicon Valley designed processor.<p>The "facts" that wannabe shooters are fed there are highly tailored to what they are predisposed to believe already, because the ones posting have complete surveillance of everyone (including of you who reads this - you can thank Eric Schmidt) and know exactly what to post to create a shooter.<p>Silicon Valley has blood on their hands. 4chan is just one of the places used for these operations. Taking it down doesn't matter, because as long as Silicon Valley continues to spy on everybody and give the data to terrorists, innocent people will continue to be murdered.
What a truly useless, detached-from-reality statement. Furthermore, this may be a good indication of intent to keep the most sophisticated models away from the public, as they really already are, behind "ethics boards" and such that only our friendly psychopathic corporations and companies such as "Open" "AI" can afford to have — while still objectively being unethical in everything they do, from misusing user data in the most malicious ways possible to lobbying regulators, and the list goes on forever.<p>The knowledge to develop those models, as rudimentary as they may be, is already out there, and so is the ability to scrape for training content online. They most certainly are already being used for malicious purposes beyond spamming some website for fun. What are they going to do, condemn criminals with a strongly worded blog post too?
For those OOTL: <a href="https://thegradient.pub/gpt-4chan-lessons/" rel="nofollow">https://thegradient.pub/gpt-4chan-lessons/</a>
This doesn't add any links to the model or the video in question. They are not making it easy for people to judge for themselves.<p>This is just virtue signaling and scoring points for the powers that be.<p>It is trivially easy to download and fine-tune a large language model on virtually any dataset and generate novel content.<p>The fact that he does it openly and publicly makes him an easy target. It's always the small guys who are easy to cancel.<p>I do not understand the actual harm of this. Someone will use this bot to spam other forums or SNSs? They have their own checks and balances. People who post bad comments or troll around will not even know about this. They will not even bother. They have enough time sitting in their mums' basements. Why use a language model? They simply won't know how.<p>It is just a demonstration. If people with the same set of values were in the cybersec industry, they would condemn white-hat penetration testing as unethical.<p>This just shows that it can be done.<p>Enough governments and political parties around the world take help from generative models and bot farms to sway public opinion. If the party is in power, companies turn a blind eye.<p>Who knows how many of them use open technologies for bad things. I saw something called DeepNude AI that, for a fee, lets you generate naked bodies of women, consistent enough with their face and body to look very real.<p>Those people should be arrested and condemned. Lives can be ruined with that tech. But these people don't know them and don't care.<p>The Tur k3y government has made AI-powered drones that kill without any human in the loop. These half-pant-wearing (literally, as you will see at any AI conference) academics can't do anything to them.<p>Cancel culture is high in AI academia.
Timnit Gebru got fired for legitimate reasons, and these "inclusion" advocates made it a race issue.<p>They just want to cancel this guy, because he has a track record of not conforming to the cancel list they made. Like when he hosted Pedro Domingos once, and people literally commented that they would never watch his videos again and condemned him.<p>This is one of the follies of being a "hot field": it becomes everyone's playground for implementing their agendas. Now that wokism is the dominant Hegelian ideology, everyone must conform to it.<p>If you want to debate in good faith, I am here. Reply to this comment.