I don't have an issue with AI as it is very useful in specific cases. However, my issue is that it has now become the new gold rush, the new Crypto, and has attracted far too many opportunists and get-rich-quick types who start a new "AI startup" every other week by slapping ChatGPT on top of something. It is getting nauseating and sad to see VCs eating it up as well.<p>So until this settles, I don't trust it much, not because of the technology itself but because of the people who are milking it at the moment.
- It costs jobs from fields that didn't need jobs taken away.<p>- It costs more jobs than it creates.<p>- It's the new meme tech, à la SaaS, Cloud, etc., that I have to tolerate now. I love seeing a "Chat with Bing!" button, that's great.<p>- It's flush with cash due to the US having tonnes of play money to throw around, enabling irresponsible behavior such as pricing below cost. Vast sums of money are burned on this boondoggle around the world annually to achieve middling results. The new Bing AI assistant, for example, does not impress.<p>- It's an unsolved ethical problem. Sampling in music, by comparison, requires attribution.<p>- It drastically exacerbates the accountability problem in an increasingly automated world that already has issues around accountability. Look at all the threads here about people getting screwed by Google with no recourse.<p>- It lowers the barrier to entry for bad actors to gain legitimacy. Good actors never needed great art or anything else AI can do, as long as they made things with care and the skills they had.<p>- The world didn't need more content. It is awash with content already. A lot of the content we have now is not good, and AI isn't going to make it magically better.<p>- AI as implemented specifically targets jobs that didn't represent real blocking problems for humanity. It doesn't purify water, eliminate filthy, backbreaking, or intense and repetitive near-slave labor, or anything else at this point; it came out of the desire of Elon and friends to take Shutterstock's annual profits. This is an ignoble goal. The task of automating the worst labor safely and reliably remains extremely challenging even when AI is involved.<p>A better question is: what problems does AI really solve? Are those benefits worth the massive cost?<p>When I see something and I know that it was created with child labor, it induces the same disgust that AI products do. 
Perhaps I can do great and good things with some tool or product made with child labor, but that doesn't change the ethical abomination at the core of that product.<p>If AI isn't paired with UBI, then we are simply on a collision course for the elimination of tens of thousands of, admittedly awful, jobs. What are all those people going to do? Truck drivers, petty artists, call center workers, etc. We don't have Star Trek style replicators yet, and we have not uniformly evolved as a people to believe in a robust set of rights for our fellow man.<p>I understand why capitalism has forced this situation to happen, but it is incumbent on governments to aggressively protect their citizens and workers from the AI menace.
My main reservation with AI tools for generating code isn't so much an issue with the technology, but rather an issue with us. People keep forgetting that these tools aren't actually intelligent and assume that everything they produce is correct, when in fact it's quite often wrong in very subtle ways.
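A hypothetical illustration of that failure mode (the function names and scenario are my own, not taken from any particular tool): generated code that reads fine at a glance, passes a casual spot check, and is still wrong.

```python
from datetime import date

def days_between_plausible(start, end):
    """Plausible-looking assistant-style output: assumes 365-day years
    and 30-day months. Reads fine at a glance, but is subtly wrong
    around leap years and actual month lengths."""
    return ((end[0] - start[0]) * 365
            + (end[1] - start[1]) * 30
            + (end[2] - start[2]))

def days_between(start, end):
    """Correct version: delegate the calendar math to the stdlib."""
    return (date(*end) - date(*start)).days

# The two agree often enough to pass a casual spot check...
print(days_between_plausible((2023, 1, 1), (2023, 1, 31)))  # 30
print(days_between((2023, 1, 1), (2023, 1, 31)))            # 30
# ...and quietly disagree on others (January has 31 days, not 30):
print(days_between_plausible((2023, 1, 1), (2023, 3, 1)))   # 60
print(days_between((2023, 1, 1), (2023, 3, 1)))             # 59
```

The point isn't this particular bug; it's that nothing about the wrong version looks wrong unless you already know to distrust it.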
Consumer GPUs made matrix multiplications orders of magnitude cheaper, and the resulting new wave of AIs is unlocking new cognitive capabilities by just throwing money at the problem, in a way that scales to billions of dollars. This is different from how things were before. Now the capabilities are pretty much limited only by how many AI acceleration chips can be made at TSMC and Samsung or wherever, and by how many billions of dollars of capital can be spent to buy them. Also, using these tools isn't really a choice; it would be like early humans wondering what reservations or criticisms we have regarding the use of speech or writing.
Machine learning is a useful technique. But that's all it is: a technique.<p>"AI" OTOH does not exist unless one adopts a strange definition of "intelligence".<p>An intelligent person can tell us how they arrived at a conclusion. "AI" cannot.<p>That makes it a massive time sink to (temporarily) "fix" when the conclusion is wrong. The process is a black box.<p>Even with old search techniques one can understand the process used to arrive at the results. When results are not what we want, we can understand why.
Well, judging by the abysmal quality of the "AI art" coming out, the limitation on creativity is clearly not technological. The limitation is the lack of creativity coming from these artists.<p>As such, the predictions that AI will "replace all artists" are obviously way overblown. At best, it will be a helpful tool, along the lines of Photoshop or After Effects.
It creates a level of feature dependency on another company that makes me uncomfortable. It shouldn't, because we already depend on products like HubSpot, Plausible, Pendo, and AWS. This feels different, though; AI is far more integrated, like a serious drug addiction.
Competition.
Creating an innovative product using AI right now is too risky because of the competition. It is entirely possible that you develop one thing and within 3 months 10 other people have developed the same thing.
Not so much a criticism of the tech currently called AI, but more of the impression that the term "AI" itself instills. A thought experiment I've come up with is to look for the term "artificial intelligence" when it shows up in articles and comments and replace it with something like "(advanced) technological progress". After all, AI is a term that can refer to a wide variety of specific technologies, like LLMs and segmentation and diffusion models, all of which seemed to explode in usefulness and popularity in a relatively short time frame. And I think "AI" is a reactionary term for technology that's so far beyond what came before it in recent memory that it's in parts fascinating and scary.<p>I always thought the term "artificial intelligence" had a sort of disabling effect, as if there were an intelligence outside of ourselves that serves to drive us in X direction, good or bad. "Technological progress" implies <i>we</i> are the ones driving the changes and the problems they will invariably bring. We sort of grasp that this tech will cause profound impacts on society of some vague quality, enough to leave an "ethics" section in every white paper that comes with freely distributed code and instructions for use, but we continue plowing on regardless of what those impacts could possibly be. How sustainable is this? Will there ever come a time when uploading code or even papers to GitHub for anyone to consume becomes taboo from the stigma and change that's been inflicted on ourselves?<p>I think the inflection point for those problems creeping into society at a visible everyday level is on a much quicker time scale than AGI. Sometimes I think it's like equipping people with pistols that shoot precision-guided homing bullets: not so much a civilization-ending scenario, but it changes the game in its own significant ways. Look at comments accusing others of using ChatGPT to write their responses for them. 
I think most tech can cause these effects, and it's worth questioning what it's meant to accomplish as it's created or used.<p>At times I wonder if the end stage of any given intelligent civilization is to delegate all parts of its thought process to technology that can be engineered to be superior, with all the consequences that entails, because there's no point in being stuck with the tech that is already there forever. The thought that scares me the most is that the revolution might not be directed by governments or angry anarchists, but indirectly, by bored machine learning engineers sitting in their rooms contributing just one more paper or PyTorch implementation towards an inflection point in humankind <i>because it's fun and rewarding to them.</i><p>And even if we're supposed to stop advancing this tech to prevent irreversible societal change, would it even be possible if we tried? There are 8 billion of us on Earth and metric tons of GPUs in existence. The question of whether progress can ever be halted, in a state such as ours, in the name of self-preservation is one I'll probably be keeping in mind for the rest of my lifetime.