<i>>AI is basically text-predict combined with data mining. That’s it. It’s a super-Google that goes into the body of texts and rearranges the words into a very pleasing facsimile of a cogent argument. There’s no “intelligence” behind it, in the sense of a computer actually thinking. </i><p><i>>AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence when what we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation </i><p>People keep bringing up the "fake intelligence" vs "real intelligence" argument, but it actually doesn't matter.<p>What's important is <i>whether it is useful</i>.<p>E.g., create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the TV. People would embrace AI like that instead of complaining about it.<p>If a pundit tried to advise the consumer who wants to avoid ads, <i>"you know, that ad-skipping technology is _just_ fancy linear algebra and there's no _real_ intelligence behind it! You're dumbing down your brain by letting the AI mute the ads automatically instead of you doing it yourself."</i> ... that's not a compelling argument. The usefulness of blocking ads <i>outweighs</i> any theoretical thresholds for real intelligence.<p>A lot of generative AI is <i>not useful</i>, so people will complain about it by falling back on the "it's not real intelligence" argument.
<i>> I still have not downloaded an AI app of any kind to my phone.</i><p>If he has an iPhone, there’s no need to download it. It’s already there.<p>That said, he has some good points.<p>Unfortunately (or maybe fortunately), we won’t be able to “opt out” forever. At some point, ML is bound to become endemic.<p>It’s like those stupid scan guns that supermarkets in my area are starting to ask customers to use. You pick one up as you go in and scan each purchase. When you check out, you just scan a barcode on the cashier stand, and Bjorn Stronginthearm’s your uncle.<p>I refuse to use them, as the only reason they exist is to fire cashiers.<p>Sooner or later, however, I am unlikely to be able to avoid them.
> AI is super evil and will destroy us all<p>OK, why is that?<p>> AI is basically text-predict combined with data mining. That’s it.<p>That sounds fine.<p>It's like when you hear a conservative pundit claim that all antifa are weak people who need extra genders, and then in the next breath complain that they are an effective, brutal, brick-throwing, pundit-punching street militia.<p>Pick a lane.<p>>As for a machine writing a novel for me in a matter of milliseconds — I have no idea how that could possibly generate authentic pride or produce anything other than a cavernous inner emptiness?<p>So now the issue is that it doesn't give you good feelings?
> <i>But I’m still paying the price for that: every time I log in to my bank account now, it’s like peeling barnacles off the hull of a ship to get rid of all the new charges that Apple and Google have concocted.</i><p>That part isn't very convincing. I have no charges on my bank account from Apple or Google.<p>Anyway, I think a potential reason to reject AI is what Kurt Vonnegut laid out in his 1952 novel "Player Piano": Do we want automation to take away jobs we actually <i>like</i>? I highly recommend reading this book; it is once again very relevant today.
I have different reasons for avoiding AI.<p>I enjoy understanding what my programs do to the deepest level, so making or using AI are both boring; they remove the fun part of programming and leave only the boring parts (mainly debugging). I haven't liked the current ML field since the beginning (early 2010s in my case) for this reason.<p>I want a tool, not a slave. I don't want it to be "smart", but an extension of my body. A thinking body part is always more annoying to deal with, because you have to reverse-engineer what it's doing to get it to do what you want.<p>I don't think this reasoning applies to everyone. I think it's fine for other people to use ML algorithms. I just don't want them myself.
> It really would have been social suicide to try to make my way in the 2010s professional class with anything other than a MacBook Pro and an iPhone<p>Can someone explain this part? It's not something I can relate to at all. Maybe because I'm not from the US?
I have a serious question: is there any good collection or list of people who have rejected technologies when they were introduced?<p>I'm not talking about entire societies of people like the Amish.<p>I'm talking about people who otherwise use technology, but then do something like this publicly, etc.<p>I want to know if time even remembers these people beyond that one tidbit. I am curious whether they went on to do anything else, or what technologies "caused" them to defect.
I refused to use ratcheting spanners and air tools and other useful things when working on cars or bikes. I was so very wrong.<p>There is a time and place for an air powered ratchet or big rattle gun. To think “my muscles will atrophy and I’ll stop thinking of clever ways to loosen that thing” is just wrong. It’s confusing the desired outcome with the method.<p>I did resist copilot up until recently and now laugh at myself. Use it to power through the boring template crap and leave yourself the juicy morsels. It’s faster and more satisfying.
> Because AI is no good for us — no good for our minds, creativity, or competence — and as it gets jammed down our throats, we are the only ones with the power to refuse.<p>I think there are a lot of risks with AI, but I'm not convinced that it's intrinsically bad for "our minds, creativity, or competence". In many ways it's let me be MORE creative in the ways I want to be, by helping me overcome obstacles that were always blockers in the past.
My only use case of ChatGPT:<p>random CLI app I don't know how to use > explain in plain English what I want to achieve > ChatGPT outputs the command I need<p>OK, not random; I use it for FFmpeg all the time. Example from yesterday, when I needed to convert a TrueHD audio file into stereo FLAC: <a href="https://i.imgur.com/5ib99qh.png" rel="nofollow">https://i.imgur.com/5ib99qh.png</a><p>100% works and it's perfect
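For reference, the kind of command it produces for that task looks roughly like the sketch below (the input filename and the choice to downmix to two channels with -ac 2 are assumptions here, not the exact output from the screenshot):<p><pre><code>ffmpeg -i input.thd -ac 2 -c:a flac output.flac
</code></pre>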
You may be able to boycott it now, but I am pretty sure there will come a time where that's not feasible if you want to participate in society, the same way it's no longer feasible to "boycott the internet".
> create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the TV<p>Is someone working on this? I'd love to contribute. I have this idea every time I see someone watching commercial TV. The volume (and volume!) of ads is insane.<p>I was easily able to make a podcast ad remover with LLMs, but real-time ad muting of a video stream is something I haven't tried yet (possibly easier because of closed captioning?).
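To illustrate the closed-captioning idea, here is a minimal sketch: buffer the most recent caption lines, run them through a classifier, and toggle the system mute when the text starts or stops looking like an ad. The <i>classify_is_ad</i> keyword matcher is only a stand-in for an LLM call or a small local model, and the <i>amixer</i> mute command is a Linux-specific assumption; none of this is a tested setup.<p><pre><code>import subprocess
from collections import deque

def classify_is_ad(text: str) -> bool:
    """Placeholder classifier: in practice this could call an LLM or a small local model."""
    ad_markers = ("order now", "call today", "limited time offer", "visit our website")
    return any(marker in text.lower() for marker in ad_markers)

def set_muted(muted: bool) -> None:
    # Linux/ALSA example; on macOS this could shell out to osascript instead.
    subprocess.run(["amixer", "set", "Master", "mute" if muted else "unmute"], check=False)

def run(caption_stream, window_size: int = 5) -> None:
    """caption_stream: any iterable yielding caption lines as they arrive."""
    recent = deque(maxlen=window_size)   # sliding window of recent captions
    muted = False
    for line in caption_stream:
        recent.append(line)
        looks_like_ad = classify_is_ad(" ".join(recent))
        if looks_like_ad != muted:       # only toggle audio on state changes
            set_muted(looks_like_ad)
            muted = looks_like_ad
</code></pre>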
> But we can, for example, click down below Google’s AI offerings to look at actual links. And we can generally go about living our lives in our sad old way without the benefit of AI “personal assistants”<p>I don't think this will be possible for much longer. Google is getting so unusable, I am actually starting to believe that they are intentionally screwing their traditional search product to force users to "onboard" on to their AI results instead.
I'm not afraid of AI (LLMs). In fact, I think it has many useful applications that will come to light in the next few years.<p>I'm terrified of AI companies. They have no qualms about destroying economies and societies, burning the planet down with carbon emissions, whatever they think will make it more likely for them to become the Google or Amazon of AI. They will fuck the rest of us over without a second thought if it means they might win.
100% agree. AI is trash; increasingly invasive, life-destroying trash. There is no worse category of trash. There is nothing it can do that people cannot already do. The difference might be measured with various KPIs, but the material difference is that the output of AI is bereft of meaning. By definition, because it is not sentient, it lacks intent. So it is all for nothing, means nothing and is nothing. Reject it.
This really felt on-point:<p>> Unwary, I fell for the techno-optimism of the past two decades and ended up with a diminished attention span and a bunch of mysterious subscription charges to show for it. Well. Fool me once, shame on me. Fool me twice, shame on Sam Altman. I know, much better now, the folly of turning over my own mental powers to a bunch of techies promising a brilliant future.<p>> I’m not making that mistake again. Nor should you.<p>Context:<p>> The people pushing AI now are the same sorts who spent the 2010s promoting web 2.0 as a new vision of freedom and global connectivity, all while destroying traditional media and ripping off as much private data as they possibly could and cheerfully selling it to advertisers.<p>> This recent history raises an important question: why in the world should we trust these people ever again?<p>> These are remarkable achievements. But they do infantilise us. I’m pretty sure that, if my phone were taken away from me, I couldn’t find my own way from my home to my place of work.
So many issues in this article, but I'll only address a couple. The author seems to be saying that AI will somehow deprive people of their creativity, but I can't fathom that at all. It's a crystallization of the AI-making-art conundrum that so many have. But I see no reason why people whose creativity genuinely springs from within won't continue to be creative on their own. Nobody is being forced to use AI to create works of art, or even assist in it, just as nobody is being forced to use the spell check or auto-complete features in word processors. They're just there as an option for those who choose to utilize them. I do see how these things can lower the financial value of artistic works, though, which I think is the actual issue that many have. It's the fear that AI doing art will lead to less earning potential for artists, which is a different issue.<p>And the author alludes to the creation of further dependency on Big Tech such as OpenAI. This totally ignores the fact that there are quite a few (actually) open models out there. One can do a fully local setup, use a model service such as OpenRouter, or even self-host a model on a GPU service such as Runpod. There are all these options available, depending on user preference and skill, so one doesn't have to become dependent on Big Tech's gated offerings.<p>Overall, the only thing I can really say is I hope the author is very close to retirement or already there, because if they keep this stance on AI they'll eventually fall very far behind, unless they're into plumbing or a similar extremely hands-on field.
> AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence when what we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation<p>This claim would stand better if it wasn't able to solve math and physics problems I struggle with.
I guess for every grift riding the AI hype there's the other side seeking to capture an audience that rejects it, making super hyperbolic statements to get there lol. This doom and gloom is getting just as tiring.<p>Like imagine rejecting the internet in its entirety because the "dotcom" era was way overhyped. That's what the author sounds like right now. Whatever rises out of the ashes of this hype will be far more useful than what we had before, and we'll all move forward.
I will always struggle with calling an LLM “AI” as if it somehow has any intelligent thought whatsoever.<p>We don’t have AI. We aren’t particularly close to AGI. This effort has about reached the top of the S-curve.
I used to enjoy learning to understand the small technical details of the design work I did. I spent a lot of time learning HTML, CSS and JS and how to use them properly and effectively. It used to be easy, when my brain was more plastic. Now, not so much (I got old).<p>So, along comes AI, which promises to solve my problems - I won't need to learn the messy details because I just sort of describe what I want and AI handles the details.<p>God, what a disappointment. I don't know what I was thinking that AI was supposed to do for me, but this doesn't seem like what I wanted. I'm missing the details, which is where the interesting bits lie.<p>Probably why I avoid management - sure, managing people and things has details, but those details aren't interesting to me. I probably do need AI to help with that bit … except, hahaha, upper management doesn't need me there, or anywhere, because AI can do that stuff (but probably not the messy details - those were "handled by someone else's code" which was just copied into the system).<p>The future is kind of disappointing right now, in that regard.
Leaving aside the question of whether AI is/will ever be actually capable of what tech bros claim, I don't think there is anything inherent to AI that will cause these issues. The problem will be with society's mode of production. We have a society now where people could be doing less work but are instead forced to do—as Graeber calls them—Bullshit Jobs.<p>Bertrand Russell and John Maynard Keynes said almost a century ago that as technology advanced people would have to do less work. I think if AI lives up to the hype it could be a tremendous boon for everyone <i>if</i> we can restructure our societies to take advantage.
I found one of the comments more interesting. It shows how people can be so terribly wrong in one case and correct in another.<p>>Text predict combined with data mining’…is all 99.9% of [what we have quaintly come to define as] human intelligence is, too.<p>>Learn a trade, Sam. Start a manufacturing company. Go and dig ditches. Perform brain surgery. Fly a chopper. Provide hands-on care for someone who is ill. Be a stay-at-home dad, even. AI can’t do anything,
I agree with boycotting AI, and personally go as far as refusing to ever speak to a voice recognition system. As one commenter here puts it, I picked my lane.<p>Sure, it's sometimes difficult, and I feel a bit bad for forcing and blundering my way through to the human who then has to deal with my obstinacy... some of whom are truly outraged at my flat-out refusal to just "go along to get along."<p>Though to be perfectly honest, I am a pragmatist and so will exploit whatever technology becomes mainstream for my benefit... but at one remove, through business-only services and identities that I have.<p>And as a hard-core refusenik, I can see how sophisticated fingerprinting across different platforms has become, and how various AIs are getting better at faking a human presence attempting to interact. With, what, 5 billion people online trying (and now failing) to connect because AI out-clicks them, there is a very real chance of things going horribly weird and becoming non-functional.
Boycott "AI"? Why?<p>See if it's useful, ignoring the religion-like hype, and use it for what it's worth. And stop calling it "AI", because it doesn't have anything to do with "intelligence".<p>"LLMs will write the code for you" is bullshit. "LLMs will get you some starter or reference info if it's a widely discussed subject" is true.<p>Apple products are crap, software-wise. They're just less crap than the competition, so I use them.
Junk writing from someone who doesn't understand AI.<p>I use ChatGPT in many useful ways. For instance, I ask ChatGPT to explain my texts back to me to help prevent misunderstandings due to misleading sentences. How's that a bad thing? This is incredibly useful when there's nobody around to proofread my texts.