I wish I could use it, but I don't understand how anyone accepts this: "What you cannot do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not: Use Output to develop models that compete with OpenAI."

Yadda yadda, they probably won't enforce it, so enjoy that; I'm in malicious compliance mode. It's not OK for a business to learn from me and then turn around and say I can't learn from it. The same goes for Anthropic, Gemini, Mistral, and Perplexity: if I can't use the output for work, I don't use the service.

I've resigned myself to not participating in this aspect of our boring dystopia, and at this point I feel numb about the bajillion times someone breaks these rules and gets rewarded for it. I'd insult or mock them, but that just gets downvoted; they keep benefiting, and I'm probably the one missing out by not ignoring the rules the way they and these companies do. Nobody seems to care about these rules.

Anyway, I did get burned using Mistral to help draft an RFC: it totally misinterpreted my intent, I didn't read the draft carefully, and I wound up looking (and feeling) like a fool because the RFC didn't communicate what I actually meant.

Now I try to think for myself and occasionally use groq. I've muted all these company names and their chatbot names on X. Glad you're having fun. So did I, for a while, but now I just don't feel like paying for brain rape. I'm tired of writing about it, but folks keep writing about how great LLMs are, so I keep feeling compelled to point out that the set of use cases is empty because of the fine-print legalese.