I would love to be able to run GPT / ChatGPT on my desktop and remove some of the length limits on text.<p>How can I do that, and where can I download it from?
OpenAI is 'open' in name only, so no. I don't think they have any plans to open full access to the public either, considering that the model ChatGPT builds on (GPT-3) was exclusively licensed to Microsoft:<p><a href="https://en.wikipedia.org/wiki/GPT-3" rel="nofollow">https://en.wikipedia.org/wiki/GPT-3</a>
I'm pretty sure the GPT-3 model is huge and does not fit on any conventional GPU. Even if they open-sourced the weights, I don't think most people would be running it at home.<p>Also, regarding the text limits: AFAIK there's an inherent limit in the architecture. Transformers are trained on finite-length sequences (I think their latest uses 4096 tokens), so I've been trying to understand how ChatGPT seems able to manage context/understanding beyond this window length.
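The usual guess is that nothing magical is happening: the service simply truncates (or summarizes) the conversation so the most recent turns fit inside the fixed window. A minimal sketch of the truncation idea, assuming a 4096-token window and using the GPT-2 tokenizer as a stand-in (the real tokenizer, and whether any summarization happens, are unknown):<p><pre><code>from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
MAX_TOKENS = 4096  # assumed window size, per the discussion above

def truncate_history(messages, budget=MAX_TOKENS):
    """Keep the most recent messages whose combined token count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest-first
        n = len(tokenizer.encode(msg))
        if used + n > budget:
            break
        kept.append(msg)
        used += n
    return list(reversed(kept))      # restore chronological order
</code></pre>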
I don't much follow AI news beyond what I randomly happen to see on HN, but this might still be the largest open source model: <a href="https://github.com/yandex/YaLM-100B">https://github.com/yandex/YaLM-100B</a> . There's discussion of it here: <a href="https://old.reddit.com/r/MachineLearning/comments/vpn0r1/d_has_anyone_got_yalm100b_to_run/" rel="nofollow">https://old.reddit.com/r/MachineLearning/comments/vpn0r1/d_h...</a> - at the bottom of that page is a comment from someone who actually ran it in the cloud.
Even if they were freely available, there's no way to run GPT-3 or ChatGPT on any existing desktop hardware. The exact hardware requirements aren't public either (yes, very "open"), but a full 175-billion-parameter GPT-3 instance requires hundreds of gigabytes of GPU memory. Even though ChatGPT is "smaller and better" when it comes to conversational dialogue, there's still no way to fit it on current consumer hardware.
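A quick back-of-the-envelope calculation shows why (assuming fp16 weights, and ignoring activations and attention caches, which only add to the total):<p><pre><code># Memory needed just to hold 175B parameters at 2 bytes (fp16) each.
params = 175e9
bytes_per_param = 2
weights_gib = params * bytes_per_param / 1024**3
print(f"{weights_gib:.0f} GiB for the weights alone")  # ~326 GiB
</code></pre><p>Even a high-end consumer GPU with 24 GB of memory is an order of magnitude short of that.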
You can run GPT-J:<p><a href="https://gist.github.com/navjack/32197772df1c0a8dbb8628676bc4e25a" rel="nofollow">https://gist.github.com/navjack/32197772df1c0a8dbb8628676bc4...</a><p>After you set it up like this you still have to do some prompt engineering to get it to behave like a chat, but it's better than GPT-2.
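For reference, a minimal sketch of running GPT-J-6B through the Hugging Face transformers library instead, assuming a CUDA GPU with roughly 16 GB of memory (the fp16 weights alone are about 12 GB):<p><pre><code>import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).to("cuda")

# The base model isn't dialogue-tuned, so fake a chat with the prompt itself.
prompt = ("The following is a conversation with a helpful assistant.\n"
          "Human: Hello!\nAssistant:")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=50, do_sample=True,
                        temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
</code></pre>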
You can download its ancestor here:<p><a href="https://winworldpc.com/product/dr-sbaitso/2x" rel="nofollow">https://winworldpc.com/product/dr-sbaitso/2x</a>
It's not possible post-GPT-2, for the reasons given by others.<p>Open communities worth getting involved in include Hugging Face and EleutherAI, the former perhaps more accessible, the latter an active Discord.<p>It's been a while since I spent time looking at them, so I'm not sure whether there's something you can easily get up and running with.<p><a href="https://huggingface.co/" rel="nofollow">https://huggingface.co/</a><p><a href="https://www.eleuther.ai/" rel="nofollow">https://www.eleuther.ai/</a>
There are non-OpenAI models based on the same architecture as the OpenAI GPT series, e.g., GPT-NeoX [0] and GPT-J, that are actually open source, unlike OpenAI, which is “open” only in the sense of “we might let you use it, either as a free preview or a paid service”.<p>You probably won't be able to run (or especially train) them on typical desktops, though.<p>[0] <a href="https://www.eleuther.ai/projects/gpt-neox/" rel="nofollow">https://www.eleuther.ai/projects/gpt-neox/</a>
<a href="https://github.com/bigscience-workshop/petals">https://github.com/bigscience-workshop/petals</a><p>Since my other account is shadow banned for some unexplained reason, I just wanted to mention the petal project. It's an attempt to bittorrent style distribute the load of running these large models. Good luck!
Luckily, no. Otherwise you (or others) could hack the safeguards and ask it how to cheaply kill a lot of people, or the like. Better not to make bad people too intelligent! (Obviously not talking about you.)