I’ve been developing software professionally for over 20 years now, and ChatGPT/GH Copilot are the biggest productivity enhancers I’ve seen since code completion.

Earlier today, I used ChatGPT to help me bang out a Ruby script to clone a repository, extract its documentation, and add those docs to another site that will serve as a centralized documentation source for a collection of projects.

I know Ruby and have been using it since 2007, but I still have to look things up all the time. By giving ChatGPT a bunch of poorly worded, lazily proofread prompts, I was able to cut the development time roughly in half.

It wouldn’t be nearly as good with a language I didn’t know, but calling it a waste of time and money really misses the sea change that’s happening.
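For the curious, here’s a minimal sketch of that kind of script (not the actual one; the repo URL, the docs/ layout, and the target path are placeholders I made up):

    require "fileutils"
    require "tmpdir"

    # Hypothetical values -- the real repo URL and site path would differ.
    REPO_URL  = "https://example.com/acme/widget.git"
    DOCS_SITE = File.expand_path("~/code/docs-site")

    Dir.mktmpdir do |tmp|
      # Shallow-clone the project into a temp dir (history isn't needed).
      system("git", "clone", "--depth", "1", REPO_URL, tmp) or abort "clone failed"

      # Assume the documentation lives under docs/ in the repo.
      docs = File.join(tmp, "docs")
      abort "no docs/ directory found" unless Dir.exist?(docs)

      # Copy the docs into the centralized site, one folder per project.
      project = File.basename(REPO_URL, ".git")
      target  = File.join(DOCS_SITE, "projects", project)
      FileUtils.rm_rf(target)
      FileUtils.mkdir_p(File.dirname(target))
      FileUtils.cp_r(docs, target)
    end

The shallow clone keeps the whole thing fast, since only the current docs matter, not the history.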
> generative AI is a mimic of human action, parroting back our words and images. It doesn’t think, it guesses – and often quite badly in what is termed AI hallucination.

Human thinking is also guessing to a large extent: guessing with many feedback loops until a good-enough resolution is found. Generative AI mimics that behavior and is quite good at it. What counts is how well it guesses. Since I frequently use AI for things where my own output is part of the feedback loop, I can say its guesses are very often spot on.

About parroting: I'm quite sure AI does not simply parrot. I'm still often amazed by how well my questions are understood; it gets what I'm actually asking for. A parrot would never even process my question.
As a consumer, I'm devastated by how AI is destroying the quality of services I used to rely on. As a tech professional, I'm stoked about everyone investing their time and energy into gambling on text generators instead of developing their technical skills. The technical debt it creates will just make those skills even more valuable in the future.
The author conflates utility with social cost, as if they were equivalent terms. Everything that follows is a mess.

Opinion columns in newspapers are largely platforms for publishing nonsense with no review or editorial accountability. This seems to fit the pattern.
That the author does not think generative AI can solve social problems shows a serious lack of imagination. Therapy, medical diagnosis, improved machine translation, and better information retrieval are all social benefits. There are social costs, too, but that doesn't mean the technology is a waste of time -- it means the technology needs to be regulated.
LLMs are amazing for basic coding tasks. Codeium was able to do about 30% of my programming before I realized it was why my laptop was overheating and turned it off.

Maybe that says more about how low-entropy code really is than it does about AI's intelligence, but in any case it works.

I'm not sure what else I'd ever use it for, though. I have no interest in Replika or anything similar, and I want it to stay out of creative writing and personal communication.
Basically the same talking points that have been repeated ad infinitum. Quotes from the article below:

- "generative AI is a mimic of human action, parroting back our words and images"

- "[AI] take[s] computing capacity away from other, potentially more useful activities"

- "[AI] requires an enormous amount of energy", "environmental costs are well-known"

- "AI is underpinned by significant capital investment in computing infrastructure", "This investment could go somewhere else, more useful"

- "AI is also sucking up innovation funding, especially venture capital."

- "it is threatening to overwhelm us with AI spam"

- "AI will necessarily lead to significant social change and associated costs"

TL;DR: AI is a stochastic parrot, is not environmentally friendly, is expensive, and generates spam.
> We urgently need the expertise of social scientists to be able to make much-needed collective decisions about the future of generative AI that we want; we can’t leave it to business, markets or technologists.

> Kean Birch is director of the Institute for Technoscience & Society at York University.

Academic sociologist argues that AI should be controlled by academic sociologists. Color me surprised.
What a mess of an article. Complete ignorance. If you haven't read it yet, I'd suggest not wasting your time. It's just a lot of unfounded assumptions and hand-wringing by someone who appears to have a (very large) axe to grind.