Well, the most pressing question is whether it will kill us all. There are good reasons to suspect it might; Nick Bostrom's <i>Superintelligence: Paths, Dangers, Strategies</i> (2014) remains my favorite introduction to this thorny problem, especially the chapter called "Is the Default Outcome Doom?" Whether LLMs are sufficient for artificial superintelligence (ASI) is of course also an open question; I'm actually inclined to say no, but there probably isn't much missing between here and yes.<p>A lot of smart people, myself included, find the argument convincing and have tried all manner of approaches to avoid this outcome. My own small contribution to this literature is an essay I wrote in 2022, which proposes privately funded bounties to induce a chilling effect around this technology. I sometimes describe this kind of market-first policy as "capitalism's judo throw". Unfortunately it hasn't gotten much attention, even though we've seen this class of mechanism work in fields as different as deterring dog fouling and catching international terrorists. I keep it up mostly as a curiosity these days. [1]<p>The next question is what happens if it doesn't kill us all. That future is boring: our current models more or less stagnate at their present ability, we learn to use them as best we can, and life goes on. If the answer to "Does non-aligned ASI kill us all?" is "No", <i>and</i> the answer to "Do we keep developing AI, S or non-S?" is "Yes", then I guess you could assume it will all work out for the better one way or another and stop worrying about it. But we'd do well to remember Keynes: in the long run, we're all dead. What about the short term?<p>Knowledge workers will likely specialize much harder, until each crosses a threshold beyond which they are the only person in the world who can properly vet whether a given LLM is spewing bullshit or not. But I'm not convinced that means knowledge work will actually go away, or even recede. There's an awful lot of profitable knowledge in the world, especially if we take the local knowledge problem seriously. You might well make a career out of being the best-informed person on some niche topic that only affects your own neighborhood.<p>How about physical labor? Probably a long, slow decline as robotics supplants most trades, but even then you'll probably see a human in the loop for a long time. Expertise in old knob-and-tube wiring, for example, is hard to find and harder to distill into a model, and the kinds of people who currently excel at that work probably won't hand over the keys too quickly. Heck, half of them don't run their businesses on computers at all (much easier to get paid under the table that way).<p>Businesses that are already big have enormous economic advantages in scaling up AI, and we should probably expect them to keep growing their market share. So my current answer, which is a little boring, is simply: work hard now, pile money into index funds, and wait for the day the S&P 500 starts to double every week or so. Even if it never gets to that point, this has been pretty solid advice for the last 50 years or so. You could call this the a16z approach: assume there is no crisis, that things will just keep getting more profitable faster, and ride the wave. And the good news is that if you have any disposable capital at all, it's easy to get a first toehold by buying e.g. Vanguard ETFs. Your retirement accounts likely already hold a lot of this anyway. Congrats! You're already a very small part of the investor class.<p>[1]: [url-redacted]