Yep - had this discussion today. The problem is that building high-quality modern 'AI' tools requires a metric assload of data. Right now everyone is running around scraping the Internet indiscriminately for that data and building the best things they can. The next step is that the creators of said training data want their cut, and the lawyers step in and shake out the industry for the next few years. Precedent is set, laws are made, and content creators get to license their data for training these AI models.<p>Once those laws are in place, the moat is dug - and large corporations with access to capital vacuum up good training data in bulk, the whole time loudly proclaiming that they're 'empowering creators' and 'rewarding original thinkers.' Now the large corporations have the largest and best datasets tied up behind a neat, deep moat. Any startup wanting to challenge the incumbents could have significantly better models, but without access to the same quality and volume of training data, they'll be at a massive disadvantage.
When AI provides value that formerly would have been provided by workers, that change allows capital to capture a larger share of revenue. If this isn't counter-balanced, I think we have to call on the tools that Piketty and others have been advocating: institute wealth taxes, create giant sovereign wealth funds, and give every young person a universal inheritance. If we really believe that AI will create a bigger pie but exacerbate inequality, then let's build better tools for dealing with inequality.<p>But the other side of this is that large corporations will have the upper hand in AI only if the success of that AI depends on their other advantages (like having tons of data about all of us). The fact that model architectures are converging suggests that eventually there may be some highly reusable building blocks that provide good outcomes when trained on enough data with enough diversity. And that data comes from everyone; if we insist on different data-governance principles, we could support an ecosystem where anyone can create, train, and run their own AI services.
The term the author is looking for to describe their objection is "radical monopoly," identified in the 1970s. The essential concept is a technology that shapes its users to its needs. <a href="https://en.wikipedia.org/wiki/Radical_monopoly" rel="nofollow">https://en.wikipedia.org/wiki/Radical_monopoly</a>
You could probably say this about anything that improves productivity. FWIW, it would be nice if there were a safety net in place to help people recover from having their jobs automated out of existence.
This is an extension of the idea that automation is a form of "resource curse" that creates social stratification in a similar way to diamonds or oil.<p>Being able to automate valuable work means you need a smaller coalition of power brokers to maintain the bulk of your GDP, meaning a government can remain in power with the support of an ever-shrinking class of influential people.<p>This is something people have been talking about for decades, with semi-serious proposals like a robot tax being thrown around. The people in power have yet to take a hard look at any of it, though -- and why should they? It's a safer bet to curry favor with the ever-shrinking circle of influential people. The core problem, in my opinion, is how to give today's politicians an incentive to put measures in place that combat this consolidation of power. The longer we wait, the stronger the incentives against liberal democracy become, and the harder it will be to make a change.
>There are only a handful of companies with enough data to be able to train artificial intelligence algorithms.<p><a href="https://commoncrawl.org/" rel="nofollow">https://commoncrawl.org/</a>
In 2017, I wrote this as a "retrospective interview" supposedly taking place in 2019.<p><a href="https://issuu.com/stanfordchaparral/docs/parody_119_3-4/17" rel="nofollow">https://issuu.com/stanfordchaparral/docs/parody_119_3-4/17</a><p>To summarize: Larry Page is bragging about how, in 2017, Alphabet created a "data REIT" which contained all of Google's data and licensed it on FRAND terms to all comers. As a "REIT" it's required to pay out 95% of its profits as dividends, and everyone whose data it uses is a shareholder in the "REIT."<p><pre><code> Yes, I know that's not what REITs are for. This would take legislation, as Larry hints in the interview.
</code></pre>
Basically, the AI training data is nationalized, <i>with</i> compensation to the owners, i.e. Google, FB, etc. You could argue, and people would, about who deserves compensation. Congress would have to do its job, for once.<p>In this "interview" Larry is bragging about how well this worked out for Google, since the clients of Google Data can make much better use of the data than Google itself can.
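For illustration, a minimal Python sketch of the payout mechanics described above (the 95% payout ratio comes from the piece; the holders and numbers are entirely hypothetical):<p><pre><code>  # Hypothetical "data REIT" payout: 95% of profit distributed
  # pro rata to shareholders, i.e. everyone whose data is licensed.
  def dividends(profit, shares, payout_ratio=0.95):
      pool = profit * payout_ratio
      total = sum(shares.values())
      return {holder: pool * s / total for holder, s in shares.items()}

  # Made-up share counts, just to show the pro-rata split:
  print(dividends(1_000_000, {"google": 600, "fb": 300, "public": 100}))
</code></pre>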
Technology as a whole is more useful for people with more resources. When animal husbandry was invented, the shepherd with one sheep still had one sheep at the end of the season, while the shepherd with two sheep might have three at the end of the season.
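A toy Python sketch of how that compounds over time (the one-lamb-per-breeding-pair rule is my own illustrative assumption, not a claim about actual husbandry):<p><pre><code>  # Toy model: each season, every *pair* of sheep produces one lamb,
  # so a lone sheep never grows its flock while two sheep compound.
  def flock_after(sheep, seasons):
      for _ in range(seasons):
          sheep += sheep // 2   # one lamb per breeding pair
      return sheep

  print(flock_after(1, 10))   # -> 1: the one-sheep shepherd is stuck
  print(flock_after(2, 10))   # -> 94: the two-sheep shepherd compounds
</code></pre>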
> unless AI is open-source and truly owned by the end users<p>This misses the deeper reality, in my view. AI is predicated on and bootstrapped by the free labor of others. Even if it were “open source” and “owned” by end-users, AI fundamentally requires people to do free work. That’s the problem with it.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”<p>― Frank Herbert, Dune
AI is definitely providing us with cheaper knowledge labor, but that's not a bad thing for human knowledge workers. AI lets societies run with lower expenses, which means more money to spend on everyone's quality of life. We should push for that.
Counter-arguing the central point: AI will be terrible for capitalists and great for most others. The reason is that the surplus value created by AI is hard to defend (e.g. models and trade secrets seem to leak out on a 1-2 year timescale). Anything that requires Google scale today will need only iPhone scale in 2-3 years. This will lead to hardcore deflation too fast for capitalists to contain, and the gains will be passed on to consumers in the form of surplus discretionary income.
The author doesn't get into whether even capitalists will be able to reliably get their AIs to do what they want. Good news: AI may end up being terrible for everyone!
IMO the title would be better rephrased as "Centralized/Proprietary AI Is Useful for ..."<p>I think Stable Diffusion, being FOSS and usable on a decent range of consumer hardware, is a pretty clear counterexample to the article's claim - whether or not you're a capitalist, you can get great value from the technology. Funnily enough, the article avoids mentioning SD entirely.
That seems to be true of almost any significant technological advancement. I would argue that the proper response as a society is to socialize the benefits of these developments in order to stem the otherwise inevitable widening of the wealth gap. I'm sure someone will eventually call me a commie for such wild notions.
Anything that increases the capital intensity of output relative to labor intensity (good old Cobb-Douglas production function) will be good for the capitalists.<p>Y = K^(1-x) * L^x<p>In this parameterization, x is labor's share of income and (1-x) is capital's. So here's ONE WEIRD TRICK that CAPITALISTS HATE: increase x.
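A minimal numeric sketch of that split (Python; the K, L, and x values are made up purely for illustration):<p><pre><code>  # Cobb-Douglas: Y = K^(1-x) * L^x. Under competitive factor pricing,
  # each factor is paid its marginal product, so capital's income is
  # (1-x)*Y and labor's is x*Y. Raising x shifts the split toward labor.
  def income_split(K, L, x):
      Y = K ** (1 - x) * L ** x
      return Y, (1 - x) * Y, x * Y   # (output, capital income, labor income)

  for x in (0.3, 0.5, 0.7):          # illustrative values only
      print(x, income_split(K=100.0, L=100.0, x=x))
</code></pre>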
It would be useful to a Communist society, one that is able to redistribute wealth amongst the population. Of course, Americans find the very word revolting, but that's what countries like China have the possibility of achieving in the future, while the US is stuck in its ideological dogma and obsession with capitalism, "democracy," and to some extent evangelical Christianity.
This is an ancient fallacy in economics going back to the mechanization of farm labor. The beauty of capitalism is that what's good for capitalists is also good for everyone else. Jobs will be lost, but the standard of living will increase for everyone and new jobs will be created. Until we reach AGI, at which point we can all collectively retire and just do whatever we want.
No way - AI will also be useful in medical fields, or to anyone who has to pore over more data than can be parsed in human lifetimes in order to make discoveries, such as a search for extraterrestrial life, or even in the field of law. Think outside the box.