The problem with strawberries or diamonds or SV startup stuff is that there is a line where, if you make something infinitely available, it also loses its value.

You cannot equate a paper clip maximizer with SV companies, because the fitness functions are different.

I'm not saying the diamond industry didn't make the world worse off, but it still somehow didn't take over the world to make people eat diamonds for breakfast. In the same way, there will be no entity that can make all people everywhere eat strawberries for breakfast all year long.

The scary part is the finance industry, which is basically already self-conscious, with all the rules baked in and no single person able to grasp it.

Finance with AGI could already become a paper clip optimizer - but it only needs energy. It doesn't need humans anymore. So it would most likely fill the whole world with power plants and erase all other life just to have the electricity.
>Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

Excellent quote. We say we're in a rational world where we make rational decisions in the societal game we've all been told we have to play: capitalism, where money is the determinant of success vs failure for corporations, families, and individuals.

But step back and look at the question of whether it's rational for us as humans to be playing this game, and it is not rational at all. Why are we not deciding that food and places to live for everyone is the determinant of success for a country or society? Or happiness?

60 Minutes ran a segment about Bhutan a couple weeks ago [0] about this. They live by something they named "Gross National Happiness". Which feels weird to type, but again, stepping back, it's because our whole lives we're told that "Gross Domestic Product", overall money, is the determinant of "best", and that's so ingrained in us.

On a different note, Ted Chiang's short story collections [1][2] are incredibly, incredibly good. I'm reading them again and read "Story of Your Life" earlier today. Being able to write fiction like that makes me much more inclined to trust what someone has to say on other topics. And saying that raises another topic: how we're told to downplay fiction compared to non-fiction, when our brains evolved for stories. But that's for another comment.

[0] - https://www.youtube.com/watch?v=7g_t1lzn-1A

[1] - https://en.wikipedia.org/wiki/Stories_of_Your_Life_and_Others

[2] - https://en.wikipedia.org/wiki/Exhalation:_Stories
> we are still a long way from a robot that can walk into your kitchen and cook you some scrambled eggs

I wonder if he's seen the latest videos of staged demos where humanoid robots can fold clothes.

edit: didn't say 2017 when I commented.
The problem is not capitalism, it's corporatism.

Or more precisely, it is that we allow a group of people, under the control of a single person or a very small number of people, to amass incredible power, and to do so in an amoral framework in blind service to a goal of stock price growth.

The problem is a scale problem in my mind. If you limit the scale, then all the other problems become manageable.

We wouldn't need antitrust laws anymore if our tax laws made it unprofitable to own shares in a company with, for example, 50% of search market share or of online retailing share.
The idea that AGI doom scenarios are really late-stage capitalism is interesting and strikes me as fundamentally correct. The difference between the AGI takeover and capitalism is just the choice of metric to optimize on.

But, I think, it's the act of trying to optimize on a metric itself that is the source of the destruction. Unmeasurable human values can't survive an optimization process focused on measurable ones.
> This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.”

And here I was going to suggest that billionaires and unbridled mega-corporations were the fundamental risk to the existence of human civilization.

> Musk gave an example of an artificial intelligence that’s given the task of picking strawberries.

Also odd, since it's more likely that a corporation, in the name of maximizing profits, would make decisions that threaten humanity. We can start with Bhopal, India. If you find fault with that example, I am sure there are plenty of others, some probably a good deal more subtle, that others can suggest.

Me, not worried at all about AI.
This article compares a strawberry-picking machine killing all of humanity to increase strawberry fields with current corporations. This is stupid, because corporations do have guardrails, both internally and externally from society. None of the mega Silicon Valley corporations are expanding by murdering people. Even the ones that are expanding in ways people question (social media) are filled with humans who actually think they are doing the right thing.

Humans with morals are still very much in the decision chain. There is obviously a lot of debate about their morals, but their being there makes such a vast difference that the comparison to the strawberry AI is completely invalid. The strawberry AI isn't even considering humans.

The article then builds on that false comparison for the rest of the piece, so there isn't much to gain from the rest of it.

You could make the same lazy comparison to completely socialist, centralized decision-making by a government optimizing for a single metric (voter approval, poverty levels, whatever). It has nothing to do with capitalism or the economic system.

TL;DR: the article says mega corps are the same as dangerous AI because they make optimizations in favor of profit that some people disagree with.