Morally, and for their own benefit, is it best for people to stop using ChatGPT? It seems like the value lies not so much in ChatGPT itself as in the users feeding it data. Should we be using an AI that is controlled by OpenAI?
In my view, people should certainly stop using these LLMs as a source of truth.

What worries me is that they're often so close to the truth that people start taking what they're told at face value, without stopping to double-check.
GPT-4 has fully altered my dev workflow. I don't think I could move away from it unless there was a clear, equivalent alternative.

I use it not only through the OpenAI UI, but also via the Cursor IDE, which I actively use with an OpenAI API key.

Even if you don't use the OpenAI UI, you'll likely end up using the product via other products. GitHub Copilot, Arc Browser, Duolingo, and likely more in the future, all use GPT under the hood.
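For anyone wondering what "using it with an API key" looks like outside the ChatGPT UI, here is a minimal sketch using the official openai Python package (v1-style client). The model name and prompt are placeholders, and OPENAI_API_KEY is assumed to be set in the environment.

```python
# Minimal sketch: calling GPT-4 directly with an API key instead of the ChatGPT UI.
# Assumes `pip install openai` (v1 client) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model your account has access to
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Explain what a Python context manager is."},
    ],
)

print(response.choices[0].message.content)
```

Tools like Cursor and Copilot are essentially wrapping calls like this one, which is why dropping the UI alone doesn't really mean dropping OpenAI.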
I'm not going to stop using it, but there are a number of open-source models that are pretty close to GPT-3.5 and 4.

I run Ollama.ai on my Mac with a variety of models, usually Mistral-7B. It's fast and gives pretty good answers.

If you have issues with OpenAI, try those.
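If you want to script against a local model the same way you would script against OpenAI, here is a minimal sketch against Ollama's local HTTP API. It assumes a default install listening on localhost:11434 and that the model has already been pulled (e.g. `ollama pull mistral`); the prompt is just a placeholder.

```python
# Minimal sketch: querying a local Mistral-7B through Ollama instead of the OpenAI API.
# Assumes a default Ollama install on localhost:11434 and that the model has been
# pulled already (e.g. `ollama pull mistral`).
import json
import urllib.request

payload = {
    "model": "mistral",
    "prompt": "Summarize what an LLM is in two sentences.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

print(result["response"])
```

Nothing leaves your machine, which addresses the original question's concern about feeding OpenAI your data, at the cost of somewhat weaker answers than GPT-4.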