The entire AI trend, long term, is based on the idea that AI will profoundly change the world. This has sparked a global race to develop better AI systems, and the more dangerous winner-takes-all outcome.

It is therefore not surprising that billions of dollars are being spent to develop more powerful AI systems and to restructure operations around them.

All the systems we have must fundamentally change for the better if we want a good future.

The positive aspects and utopian promises have far more visibility with the public than the negative effects and dystopian possibilities.

Are we to pretend that human greed, selfishness, the desire to dominate and control, animalistic behaviour, and the use of technology for war and other destructive purposes don't exist?

We are living in times of war, chaos, and uncertainty. Increasingly advanced technology is being used on the battlefield in more covert and strategic ways.

History is repeating itself in many ways. Have we failed to learn? The consequences may be harsher with more advanced technology.

I have read and thought deeply about several anti-doomer takes from prominent AI researchers and scientists, but I haven't seen any that aren't built on assumptions, or that are foolproof. For something that profoundly changes the world, it's bad to base your hopes on assumptions.

I see people dunking on LLMs, which may not be AI's final form, then extrapolating from that to say there is nothing to worry about. It is a matter of when, not if.

The thought of being useless, or worse, being treated as nothing more than pests, is worrying. Job losses are minor in comparison.

The only hope I have is that we are all in this together. I hope peace and goodwill prevail. I hope the necessary actions are taken before it's too late.

A more pragmatic perspective suggests there are more pressing problems that need to be addressed if we want to avoid a doomer scenario.
The AI hype reminds me of the dot-com boom of the late '90s/early 2000s. It's clear that something fairly big is happening, and lots of people are trying to ride the wave, either by getting a foot in the door or by predicting the end of days. However, no one can predict what day-to-day life will look like once the dust has settled in a decade or two. Some people are afraid of a future they can't see clearly and simply default to their own version of a worst-case scenario.

There is certainly a race to develop the most powerful AI hardware and software, but most of that has to do with market share and bragging rights. AI is not built on top of some kind of "magic" that only one person or company can control.

Yes, this technology will be used for both good and evil. Just like all the technologies that came before it, and those that will come after.

Yes, we are living in times of war, chaos, and uncertainty. Humans always have been. Although we are, on average, a much more peaceful species now than ever before.

If you want my advice, don't pay attention to doomers in any market or field of study. They don't have crystal balls; they just like worrying about things and derive their satisfaction from sparking fear in others. It's a power/control thing. There is a difference between following news and following drama. Most people can't tell the difference because they're addicted to social media, and journalism has all but gone to shit, but that's a rant for another day.
Consider war alone. Current drones are revolutionary, but they depend on a radio communication link that can be jammed.

There's a strong incentive to build a drone that can complete its mission autonomously if the comm link goes down. If the target is solidly in enemy territory, that's one thing, but weapons like tanks, artillery, and drones work best when they are closely integrated with each other and with infantry, so such a system needs to reliably distinguish enemy troops from friendly troops and from civilians, with all the moral questions that raises.
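To make the design problem concrete, here is a minimal sketch of the kind of fail-safe fallback policy such a drone would need. Everything here is hypothetical (the names, the labels, the 0.95 threshold); the point is only the shape of the logic: defer to a human while the link is up, and once it's jammed, engage only on a high-confidence hostile classification, aborting on any ambiguity.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Action(Enum):
    CONTINUE_MISSION = auto()
    ENGAGE = auto()
    ABORT_AND_RETURN = auto()


@dataclass
class Contact:
    # Output of a hypothetical onboard classifier.
    label: str          # "hostile", "friendly", or "civilian"
    confidence: float   # classifier confidence in [0, 1]


def decide(link_up: bool, contact: Optional[Contact],
           engage_threshold: float = 0.95) -> Action:
    """Fallback policy for deciding whether to engage a contact.

    While the comm link is up, the decision is deferred to a human
    operator (modeled here as simply continuing the mission). Once
    the link is jammed, the drone may engage only if the onboard
    classifier is both confident and reports a hostile contact; any
    ambiguity (friendlies, civilians, low confidence) forces an abort.
    """
    if link_up:
        return Action.CONTINUE_MISSION  # human stays in the loop
    if contact is None:
        return Action.CONTINUE_MISSION  # nothing sighted; proceed autonomously
    if contact.label == "hostile" and contact.confidence >= engage_threshold:
        return Action.ENGAGE
    return Action.ABORT_AND_RETURN      # fail safe on any uncertainty


if __name__ == "__main__":
    # Link jammed, classifier only 70% sure the contact is hostile:
    print(decide(link_up=False, contact=Contact("hostile", 0.70)))
    # -> Action.ABORT_AND_RETURN
    # Link jammed, high-confidence hostile:
    print(decide(link_up=False, contact=Contact("hostile", 0.99)))
    # -> Action.ENGAGE
```

Note that the sketch assumes away the hard part: all the moral weight sits inside that classifier and the choice of threshold, which is exactly the enemy/friendly/civilian discrimination problem the comment is pointing at.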