Ask HN: What do you expect will be the real impact of AI on society in 10 years?

16 points by doubtfuluser, 4 months ago
I started to get more curious about this, especially beyond the obligatory "genAI will do everything." What are your thoughts on the societal impact? As I see it currently: knowledge workers are (surprisingly?) the first group seeing massive job losses and replacement. In the past, "knowledge" was a scarce resource; now the AI delivering knowledge will become the scarce resource. So the people owning / hosting it can sell it and make lots of money -> in the current setup this means the few rich in AI will get richer. In addition, letting money work for you will also stay (investment), so again rich people will become richer.

Interesting is the question about physical labor. The economics of pushing atoms in the physical world is nowhere near the economics of pushing electrons (bytes), so if you are not part of group 1 (entrepreneurs) or group 2 (investors), doing physical work is something that will still earn you some money (I also expect care work to stay, since people will probably prefer for a long time to have humans care for them). But this means that groups 1 and 2 will still be the big winners, paying some money to group 3.

Where do you disagree? Where do you see a different outcome? I'm curious to learn about your thoughts.

11 comments

scarface_74, 4 months ago
Students in school, even post-ChatGPT, let alone with better "AI," will find their growth limited. They will never learn how to solve complex math or logic problems, or how to write.

You will also see long-term effects in the industry as the pre-AI generation leaves the market.

It was already hard for entry-level developers to break the can't-get-a-job <-> don't-have-experience cycle. It is even harder now.

Before, there was always some simple busy work that senior developers didn't have time to do, so you would hire a junior-level developer who needed to be told exactly what to do. LLMs are already as competent as a junior developer. Why hire them?

I see the next level of hollowing out hitting mid-level experienced "ticket takers" who just take well-defined business use cases off the board and do the work. For non-software companies, a lot of that work has already been outsourced to SaaS offerings, where businesses hire a consulting company to do the implementation (various ERPs, EHR/EMR systems, Salesforce, ServiceNow, etc.)
Lionga, 4 months ago
Sam will promise in 2035 that AGI is very close and will probably happen at the end of the year, same as every year (Elon also still promises FSD is close and will probably come out EOY; just that FSD might actually be realistic by then).

People will use AI a little here and there, but nothing much will have changed because of it. Mostly more work for the people who need to correct AI's mistakes.
mikewarot, 4 months ago
I strongly suspect that after a few false steps, which always happen when people fall for the hype, AI will just be a labor-saving device, like power tools, etc. Certain categories of work won't be necessary any more, just like we don't have file clerks in the age of databases, but other new jobs will fill their places.

The general level of productivity will rise, and most of the benefits of that increase won't be seen by the workers. 8(
tempeler, 4 months ago
We consider the human brain and thought structure to be close to perfect. In fact, we know it can easily be deceived or misled, and it can make wrong decisions because it falls into cognitive errors very easily. We are trying to copy this structure in developing artificial intelligence, which is why it seems so common for it to hallucinate or give wrong answers. When the causes that lead people to make wrong decisions are eliminated, maybe what artificial intelligence produces can be much more realistic and accurate. Maybe then we can talk about Artificial General Intelligence. People already use it very readily. That is why it does not seem possible to predict whether, in the future, humans will fix artificial intelligence or artificial intelligence will change our thought structure.
burrish, 4 months ago
I think kids will have a hard time learning and becoming smart with AI chewing everything for them.

I've read a few stories about parents questioning their child's over-use of AI, and on top of that I've seen my fair share of adults who cannot do anything without asking ChatGPT first.
anon2549, 4 months ago
It will accelerate and deepen the alienation that is already epidemic, and will lead to a great deal of societal, economic, and personal trouble.
JCJC777, 4 months ago
Will it lead to an increase in human fertility? Less/no work pressure, more free time, more financial security.
hnthrow87123, 4 months ago
Different view, but for that I need to PM/DM you, any way that is suitable/comfortable for you. I'm not a scammer and I don't have malicious intentions.

It's not the same old boring responses which bring more uncertainty.

It's obvious, but not so obvious that you would likely get it from current AI reasoning models.
throwaway888abc, 4 months ago
Consolidation of power.
hnthrow8712, 4 months ago
everything
_cjse, 4 months ago
Well, the most pressing question is whether it will kill us all. There are good reasons to suspect that; Nick Bostrom's *Superintelligence: Paths, Dangers, Strategies* (2014) remains my favorite introduction to this thorny problem, especially the chapter called "Is the Default Outcome Doom?" Whether LLMs are sufficient for artificial superintelligence (ASI) is of course also an open question; I'm actually inclined to say no, but there probably isn't much left to get to yes.

A lot of smart people, including myself, find the argument convincing, and have tried all manner of approaches to avoid this outcome. My own small contribution to this literature is an essay I wrote in 2022, which uses privately paid bounties to induce a chilling effect around this technology. I sometimes describe this kind of market-first policy as "capitalism's judo throw". Unfortunately it hasn't gotten much attention, even though we've seen this class of mechanisms work in fields as different as dog littering and catching international terrorists. I keep it up mostly as a curiosity these days. [1]

That future is boring; our current models basically stagnate at their current ability, we learn to use them as best we can, and life goes on. If we assume the answer to "Non-aligned ASI kills us all" to be "No", *and* the answer to "We keep developing AI, S or non-S" to be "Yes", then I guess you could assume it would all work out in the end for the better one way or another and stop worrying about it. But we'd do well to remember Keynes: in the long run, we're all dead. What about the short term?

Knowledge workers will likely specialize much harder, until they cross a threshold beyond which they are the only person in the world who can even properly vet whether a given LLM is spewing bullshit or not. But I'm not convinced that means knowledge work will actually go away, or even recede. There's an awful lot of profitable knowledge in the world, especially if we take the local knowledge problem seriously. You might well make a career out of being the best-informed person on some niche topic that only affects your own neighborhood.

How about physical labor? Probably a long, slow decline as robotics supplants most trades, but even then you'll probably see a human in the loop for a long time. Old knob-and-tube wiring is very hard to find expertise around to distill into a model, for example, and the kinds of people who currently excel at that work probably won't be handing over the keys too quickly. Heck, half of them don't run their businesses on computers at all (much easier to get paid under the table that way).

Businesses which are already big have enormous economic advantages to scaling up AI, and we should probably expect them to continue to grow market share. So my current answer, which is a little boring, is simply: work hard now, pile money into index funds, and wait for the day when we start to see the S&P 500 double every week or so. Even if it never gets to that point, this has been pretty solid advice for the last 50 years or so. You could call this the a16z approach: assume there is no crisis, things will just keep getting more profitable faster, and ride the wave. And the good news is that if you have any disposable capital at all, it's easy to get a first personal toehold on this by buying e.g. Vanguard ETFs. Your retirement accounts likely already hold a lot of this anyway. Congrats! You're already a very small part of the investor class.

[1]: [url-redacted]