Hi guys,<p>I don't understand the reasoning behind the OpenAI movement. If the benefits and fallout of AI development are anything like those of nuclear technology, shouldn't its distribution be governed in the image of the Nuclear Non-Proliferation Treaty (i.e. controlled distribution of enrichment technology with checks and balances)?<p>Or is the open-source sentiment born of necessity, because research output is difficult to control?
For Facebook, I know it's because Yann LeCun (the director of their AI research lab) is adamant that all their research be open source.<p>For Google, I think their long-term play is realizing that Facebook will release equivalent research, so in the long run there's no benefit in keeping everything closed source. Rather, if they publish the best papers, they can claim the title of "best AI company", which is great branding for services like DeepMind Health and Google Cloud ML that would otherwise become commodities. They can also recruit the best talent with this perception. Also, DeepMind doesn't open source jack; they only publish papers, no code.<p>Microsoft probably has a similar philosophy.
The fact that recent progress has come out of academia has been a big factor in setting the culture.<p>Apple has struggled to hire people since they don't publish much.<p>I think it is actually a minority of researchers who publish code; most implementations you will find online are reimplementations based on reading the papers.<p>I also think that very few researchers share the anxiety about AI that famous people seem to have. I spent some time at an academic ML conference, and no one I talked to thought superintelligence was going to happen in the next few decades.
Don't forget the talent aspect. Even now there are not enough capable people, and by open-sourcing their software the tech giants are lowering the entry barrier. Besides, it is the vast amounts of data that really differentiate them.