They aren't embodied, and they don't experience the passage of time, the physical world, or emotion in the way a human does.<p>I'm also just not sure how ultimately useful more powerful analytical tools will be. Most of the issues in the world are either physical issues involving resource constraints, which the ephemeral nature of AI is ill suited to handle, or social issues, in which case the lack of wisdom and authentic human experience also makes AI ill suited to the task.<p>The primary conceit of the current emphasis on digital technology as a means to solve our problems is likely the idea that if we were just smart enough and thought hard enough, or had an ultra-intelligent assistant, intelligence would improve things and make the world a better place. That may help up to a certain point, but it doesn't seem obvious to me that it will continue to have positive returns when taken to its logical extreme. There's also the issue of meaning and goals and teleology, which complicates a lot of the stories told by the more philosophically minded proponents of AI. In other words, suppose AI provided us a means to realize some kind of post-scarcity society... what's the point? What next? We are all familiar with the Rat Park experiments. Even in the most fantastically successful version of the future, where AI massively surpasses the expectations of even the most dedicated and convinced proponents, that doesn't help us deal with the more fundamental issues. The industrial revolution probably already took us about as far down the path towards "post-scarcity" as human psychology will allow. And the internet probably pushed us over the edge, to the extent that it may engender more serious reactionary impulses and diminishing returns in the economic sense as it weakens us socially, emotionally, and spiritually.<p>It doesn't seem obvious to me that intelligence (in the IQ-test sense, or the LLM-regurgitating-decent-responses-to-a-multiple-choice-test sense) is necessarily always going to be socially, culturally, or evolutionarily adaptive. There is probably some point at which higher analytical intelligence begins to have diminishing returns, and some point after that where it even becomes maladaptive for most people in most contexts.<p>There's also the issue of trust. AI isn't trusted. The tech community broadly, and especially the "AI community" portion of it, is not trusted. And the tools aren't really that exciting to people outside the tech social sphere. That isn't even to mention the potential negative effects on social trust between real people (did they really write that and mean it, or did an AI do it for them? Can I trust that these people on HackerNews, for whom I've just spent 20 minutes writing a heartfelt take on a serious issue, are even real and not AI bots?).<p>So besides all my high-minded speculation above about the nature and value of the specific kind of intelligence AI vendors primarily claim their products possess... you've also just got a plain old-fashioned product problem. People aren't really all that excited about the product and don't see how it could realistically solve the issues they face in their day-to-day lives. That seems like a very reasonable take to me, and probably the most obvious reason why AI hasn't radically changed (and maybe won't radically change) the way we live our lives.
I'm mostly involved in systems engineering and infrastructure work and consulting on those topics, and I'm kinda in the same position the rest of the population outside of Big Silicon Valley Web Dev Tech land finds itself in: I've yet to find a killer application for the technology. I'll subscribe to Anthropic or OpenAI for a month when there's a big new release to give them a shot, but I've yet to renew over the past year or so because I just don't find them very useful for my work.<p>TLDR: I think AI developers are way out over their skis, living in a bubble, and optimizing for the wrong things. I don't really believe in "AI doom", but I think a lot of the pro-safety people were really just trying to address this more boring and less dramatic social and product problem in the only way they know how (by developing strange new religious eschatology and other related hobbies of people who have been hanging out in these deeply antisocial groups for so long). I'm sympathetic to their efforts (when they are legitimate and not marketing). But I'm not sure the prognostications of doom are all that useful at this point; most people just get bad vibes from AI, don't like it, don't see the point, and don't really need much convincing in that respect. The products and the marketing that surround them are pretty on the nose about being anti-social and anti-human, and ordinary people have done a good job picking up on those vibes.