The "Owners" will use it to get rich[er], at any cost, and we'll only realize too late the damage it did. Government regulation could stop it, but politicians will only use their position to get rich[er] instead.
Good point, bad rant.<p>Right now, the two real AI issues are 1) improved surveillance in a broad sense, and 2) job elimination. Both relate to what the owners of the tech want.<p>The political problem is that few governments are willing to effectively regulate 1) or 2).
This is the same semantic argument as "Guns don't hurt people". Of course guns hurt people; a physical object can be harmful without a human being to give it intent. It's a straw man argument that dwells more on the meaning of words than on whether something is actually harmful.<p>I think we can all agree that AI is itself a tool that can be used for good or bad. However, we can certainly make a judgement call on it, as a whole, by examining the current state of how it's being used, and by looking at what kinds of human behaviors it amplifies.
I watch a fair few YouTube video essays, and one thing I notice is that there is quite a lot of discontent with AI. Unfortunately there is a lot of preaching to the choir and not enough making a case. Nevertheless, that discontent is itself a signal, and I think this article touches on why.<p>People are quite disillusioned. There is growing wealth inequality, political division, and dark patterns (which should simply be called abusive software). We now have a generation that does not expect to ever own a home.<p>You often hear the term "late-stage capitalism" now, which, while admittedly a clever turn of phrase when it originated, has become a lazy expression of nihilism. It evokes the idea of capitalism as a cancer at a stage where the symptoms are obvious and debilitating, but also one that is past the point of no return: it will kill itself and its host, and there is nothing that can be done.<p>I rarely hear a detailed complaint about the problems of AI without the phrase "late-stage capitalism" cropping up. The irony, of course, is that the devices, platforms, and information used to express these ideas exist in large part thanks to innovations enabled by a capitalist economy. I think that without the anger and frustration of their current situation, many of those railing against capitalism would admit that what they would like is a well-regulated socialist/capitalist system: something that works for society but affords people the means to innovate.<p>The problem for AI is that people fear its empowering capabilities will go to the already powerful. As with wealth inequality, the expectation is that the distribution of AI's power will also be unequal.<p>There are numerous other complaints about various aspects of AI, but I feel these should be argued individually, though they are often too heavily influenced by the issues above.
This feels only vaguely less affronting than "guns don't kill people, bullets do".<p>AI is, for the foreseeable future, the product of extremely vast troves of resources: countless numbers of the most expensive chips on the planet, the best interconnects, the highest-paid expert teams. Whatever the output is, these inputs are the firing mechanism that creates the AI and keeps it firing.
I can't believe OpenAI employees threatened to mutiny so their CEO could exploit them for more personal wealth. Meanwhile Anthropic now has the highest-rated model.
> We shouldn’t fear AI as a technology. We should instead worry about who owns AI and how its owners wield AI to invade privacy and erode democracy.<p>The entire problem with this article is that it argues against technological determinism without even knowing it, and so it has no argument against it. Given the seductive power of technology, its ability to provide short-term marginal advantages to people, and the fact that humans are horribly poor at acting to stop long-term detriments and tragedy, it is far more likely that AI and advanced technology will spread even if we can see the death at the end of the tunnel.<p>The only way we will gain more wisdom about advanced technology is to gain that wisdom without using advanced technology, because the immense power it offers is precisely what makes that wisdom even harder to acquire: the power is too seductive.<p>Sorry to say, but we cannot handle AI, and that is abundantly clear: the short-term advantages are too great, even though it promises a horrendous future. Sort of like burning fossil fuels.
"Instead of enabling a world where we work less and live more, billionaires have designed a system to reward the few at the expense of the many."<p>They say this as if it were a novel outcome.<p>To me, the idea that an AI can specially manipulate humanity into its own destruction comes only from the most cynical of people, who are already in some form of business that relies on manipulating self-destructive habits in order to generate profits.<p>I think it's an incredibly myopic view that willfully ignores everyone outside the small bubble of capital-clutching psychopaths they mistake for a representative sample of "humanity."<p>I genuinely fail to appreciate the idea that AI, particularly as embodied by very expensive and poorly trained LLMs, is somehow "dangerous."