Potentially, open source models will be much better, but it won't happen fast and it may require new forms of crowdsourcing. Wikipedia did not happen by miracle; people built the infrastructure that allowed knowledge to be compiled that way.<p>Similarly, we need to think about how to develop large models as so-called public goods.<p>Open source models will actually face less resistance (they are more transparent) and less friction over copyright etc.<p>The main disadvantage of open source is that building user interfaces to models requires expensive specialists who usually don't work for free. So, just as the UI of Linux and other open source apps is forever a bit dated, the UI of models (not just the GUI, but the whole setup) will be less slick.
Yes. My reasoning is that it's a bit like the 4-minute mile. Once someone broke it (whoever it was), everyone else suddenly realized it was possible, and other people beat it too. The confidence and excitement created by ChatGPT means some clever soul will find a way to make something just as good run on a CPU.
Has Open Source replaced Closed Source? Has Free Software replaced Paid Software? No, yes, maybe? There is no real answer to this; it depends on the area, the task, and the user. Companies have big money, or at least some money, and they can use it to speed up development. Volunteer-based projects, on the other side, usually depend on free labor or donations, which then go toward hardware and other non-human costs. So the focus and actions of volunteer-based open source projects differ from the focus of money-flowing companies.<p>And yes, of course there are also many companies that invest in open source. Which is why it depends. Will enough money flow into open source AI? Will there be some breakthrough that allows open source AI to compete with well-trained commercial projects? Will Musk put some Small Elon Energy into open sourcing his Truth AI(?) to piss on OpenAI? It all depends.<p>Though, personally, I think in the long run Open Source will be a relevant force in that space, simply for reasons of transparency and the Technological Singularity. Whether it can replace the commercial players this year or next year doesn't really matter; at some point all AIs will be powerful enough for any task, or even break through to AGI, at which point political reasons will become relevant.
Unless efficiency is significantly improved, I don't see how community incentives (with community resources) could fund training a model with that many parameters. Would love it if it happened, though. That would be truly Open AI.