>> In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”

Great, I totally believe that. Well, the document-reading and advice bit. But will it be valuable or even good advice? Because every rando on the internet can give advice *right now*, and COVID showed us the results. If anything, we will get more, and more convincing, spam and scams, proudly powered by AI.

But then, we are talking about someone who wanted to use magic orbs to scan poor people's biometrics on a global scale by promising get-rich-quick crypto.

EDIT: I think a better title would be "Sam Altman's content marketing blog post on how he thinks Moore's law applies to his current start-up".
> In the next five years, computer programs that can think will read legal documents and give medical advice.

It's been 10 years since IBM Watson commercially promised exactly the same thing, and arguably failed. I don't see what has changed in those 10 years apart from more systems like that being available to more people. The capabilities seem to be in the same ballpark, meaning computer programs don't actually think, nor are they even remotely accurate enough for legal or medical advice.
Sadly, I don't see many rich-world governments discussing this seriously or planning for it. If COVID is any guide, things will have to get bad before they are addressed.
Discussed at the time (534 comments): https://news.ycombinator.com/item?id=26480981
Governments that implement these types of policies might see a rapid rise in their GDP. If it works out, I assume others would follow.

But which country would be the first to take the leap?