People are bad at predicting the impact of technical changes, particularly when that change is exponential. When it comes to AI, we've of course got a century-plus of science fiction written by people who tried really hard to imagine the future, ranging from very dystopian to very utopian. And recently, HN and the rest of the internet has been obsessing over AI a lot.

My personal view is that we're probably all missing the point in some way. And of course I'm probably missing it too. It's like predicting in the nineteen-forties that a few hundred mainframes should just about cover the whole market. That's where we are with AI today: a handful of companies capable of building somewhat useful AI models, and a lot of under-resourced tinkerers trying to do stuff with them.

However, a point I like to make regarding work is that we already live in the future. People are worried about losing their jobs, not realizing that they've already lost them. By that I don't mean they are out of work. But both the nature of and the amount of work we do to survive have changed massively since the industrial revolution. We keep ourselves busy and it earns us a living, but it mostly isn't all that relevant or important what we do.

Most of us here on HN are doing jobs in software, investments, or related things that have relatively little to do with providing food, shelter, or safety for people. It's all interesting and fun and probably a little bit hedonistic. If I look in the mirror, I can't say with a straight face that what I do is particularly important or relevant. It's more about me doing something interesting (to me) that happens to convert into a weird thing called money, which I can convert into stuff I need or want. People are now having serious discussions about a four-day work week. And we just came out of a pandemic in which a startlingly large percentage of people vastly reduced their economic activity without our economy grinding to a halt. A lot of busy people suddenly weren't so busy. It did not matter.

It used to be that people worked until they were too old and broken to work; if they didn't, they'd starve. For most people that meant six-day work weeks, long shifts, and not a lot of leisure time. You'd start working as soon as you were able, and people died young on average. It still means that for some people in developing markets, and even in some developed markets that maintain a class society (like the US). But it's increasingly less true for a lot of people. More people now work in services than in things like farming or manufacturing.
Let's assume that current AI-type models are capable of approaching the capability of human-type neural networks.

So yes, AI will be very disruptive to our lives. It will start with a narrow set of capabilities, but will widen as it becomes really good within those constraints.

I think we are mostly scared of the capabilities of AI that push us to question: what is my worth then, as a human being? Do I still have worth? And what is really the point of all this?

This “revolution” will push us to look inward and find out what and who we really are. What is really valuable? And most importantly, what is really worth struggling, fighting, and striving for, in the context of the human race?

Our subjective human experience, and what is valuable to us as humans, still remains out of reach for AI, and I think it will never come within reach, because AI will never share the subjective human experience with us.
Here's a take I haven't heard yet. ChatGPT understands language, and governments sit on troves of archived and real-time content that can't possibly be reviewed manually. Now there's a tool that can read all of it. They must be chomping at the bit to ask, "Hey ChatGPT, does any of this content seem illegal?" where "illegal" is a broad term, especially for the authoritarian ones. Seems like an amazing surveillance tool for flagging things for human review.
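For the skeptics, here's a minimal sketch of what that kind of flagging pipeline might look like, assuming the official OpenAI Python client. The model choice, prompt wording, and sample documents are all my own invention for illustration, not anything any government has confirmed doing:

  # Hypothetical sketch of LLM-based content flagging for human review.
  # Assumes the official OpenAI Python client (openai >= 1.0); the prompt,
  # model choice, and sample documents are made up for illustration.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def flag_for_review(text: str) -> bool:
      """Ask the model whether a piece of text warrants human review."""
      response = client.chat.completions.create(
          model="gpt-4",
          temperature=0,  # keep answers stable, classifier-style
          messages=[
              {"role": "system",
               "content": "Answer only YES or NO: does the following text "
                          "appear to describe illegal activity?"},
              {"role": "user", "content": text},
          ],
      )
      answer = response.choices[0].message.content.strip().upper()
      return answer.startswith("YES")

  # Run an archive through the model, surface only the hits to reviewers.
  archive = ["meeting notes from tuesday", "instructions for laundering cash"]
  review_queue = [doc for doc in archive if flag_for_review(doc)]

The scary part isn't the twenty lines of code; it's that the marginal cost of "reading" everything just collapsed.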
Here is the thing: the industry will sugar-coat the layoff effect by cutting work days, not by laying off a significant crowd.

Initially it will seem like a humane thing to do, since salaries won't be cut, BUT we'll feel the real effect when salaries aren't increased to match inflation.

You will work less, produce more (with AI), and be paid less (in terms of purchasing power).

Products that AI can touch will get cheap, so your purchasing power for digital goods will stay the same to keep you entertained, BUT land, housing, food... hard assets will get harder to afford.
No, we aren't. There's still a chance we'll enter a new AI winter if models keep demanding more GPU power and energy to become more effective. But if the same rise in quality persists through GPT-5, 6, and 7, our world will be fundamentally changed: for the better, through a massive increase in knowledge and engineering capability, and with huge risks to the life and freedom of humanity.

I'm scared and excited.
Me: "How many jobs will be lost due to new AI technology within the next ten years?"<p>Bing: "According to one source, worldwide, a billion people could lose their jobs over the next ten years due to AI, and 375 million jobs are at risk of obsolescence from AI automation. However, it’s important to emphasize that there is no shared agreement on the expected impacts on the workforce or economy."
Left: "We should enforce AI be biased in the right way."<p>Right: "AI is too biased against conservative views."<p>Worried consumers: "The junk level on the Internet will rise from 90% to 99.99%"<p>Venture capitalists: "We will make tons of money."<p>Business owners: "Soon I can fire more people."<p>Worried employees: "What if AI will replace me."<p>AI zealots: "AI will just help people do their jobs better and easier."<p>Government spokespeople: "Some jobs might be at danger but we are working hard to create new jobs."<p>HN commenters: "We should look more at philosophical implications of AI."<p>SEO spam farm owner: "Hurray!"<p>Gamers: "What if NVidia will make AI chips instead of GPUs?"<p>Stoicists: "We will survive in one way or another."
Not even OpenAI seems to know. They put a perfunctory disclaimer about economics into the risks section of the GPT-4 report, but their lack of understanding of the consequences didn't seem to keep them from pushing this development, so uncharted territory it is.

It should anyway be up to governments to rein in this technology for the benefit of humanity, though I'm afraid that won't happen.
> She said artificial intelligence could go in one of two directions over the next 10 years. [...] AI development is focused on the common good, with transparency in AI system design and an ability for individuals to opt-in [...] Ms Webb's catastrophic scenario involves less data privacy, more centralisation of power in a handful of companies

Umm... so are we just not going to mention the elephant in the room at all? I know everyone finds “the singularity is near!” people annoying, but there's no mention at all of any kind of existential risk? The “catastrophic scenario” relates to things like privacy and fairness?

Someone commented in another article that AI safety is composed of two camps: those focused on “AI alignment” and those focused on “AI ethics”. I increasingly find that the latter group seems to be putting the cart before the horse.

Hypothetically, if you were to poll the research scientists at OpenAI (anonymously) on which type of AI safety they are more concerned about, I would be really curious what the results of that survey would look like.
In Iain M. Banks's Culture novels, the Minds seem to have an appropriate role in society. Gonna read up on them for some perspective on current developments.
Looking forward to it!

Even with a sci-fi-level worst-case scenario, where an AI goes sentient and takes control of the world... it can't be much worse than the people in power now. At least it'll act more logically :)

Anything less than that seems like a fun change to be living through. This is our industrial/technological revolution, and those turned out alright eventually. Might be time to bring up UBI again, but we'll figure it out either way. Humans survive; it's what we do.

Rule 1 of life: Evolve or die.
"AI storm", journals really don't know what bullshit to put out anymore.<p>Just like all previous AI bubbles, this is a bubble. We'll soon realize that unless you're writing a book, GPT is pretty much useless, and the hype will die down like it always has in the past.<p>To think that AI who's only capability is to generate text can have any significant impact on anything is simply ridiculous.