Like just about everyone, I'm really impressed by GPT-4 and am already a happy user of it via ChatGPT Plus. But something I haven't seen mentioned much about GPT-4 is that while OpenAI has put a lot of emphasis on its use as a chatbot, it doesn't really seem cost effective to use it that way via the API for now, at least not for long conversations.

I'm not at all sure my calculations are right, but here they are for a conversation I just had with ChatGPT (GPT-4 model):
- It was about 8000 tokens long in total (counting both my messages and ChatGPT's)
- I sent around 45 messages
- ChatGPT's answers were about 100 tokens each

Assuming the conversation's token length grew at a constant rate with each new message, each message I sent cost, on average, 4000 tokens (half of 8000, the final total conversation length).

For the 8k model, context (prompt) tokens cost $0.03 per 1000 and generated (completion) tokens cost $0.06 per 1000.

If each sent message costs 4000 tokens on average, and 100 (0.1k) of those are generated tokens, that leaves 3900 (3.9k) context tokens on average, which for 45 sent messages comes to (0.1 x 0.06 + 3.9 x 0.03) x 45 = $5.535. (The short Python sketch at the end reproduces this arithmetic.)

I feel like even for personal usage, the costs would quickly become very high: even if you had just one conversation like this per day, that would be about $166 by the end of the month. That's too much for me.

What's your opinion on this? Did I get the computation right? Is it likely that OpenAI has big margins? Because I don't see how Bing Chat would even be viable if this were anywhere close to the actual cost of running it.
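For anyone who wants to double-check, here's a minimal Python sketch of the arithmetic above. It assumes the same linear-growth model, and all the input figures (message count, token counts) are my rough estimates, not measured API usage:

    # GPT-4 8k pricing at time of writing
    CONTEXT_PRICE = 0.03 / 1000    # $ per context (prompt) token
    GENERATED_PRICE = 0.06 / 1000  # $ per generated (completion) token

    total_tokens = 8000        # final length of the whole conversation
    messages_sent = 45
    generated_per_reply = 100  # tokens generated per ChatGPT answer

    # With linear growth, the average request carries half the final
    # total as context; subtract the generated part to get context tokens.
    avg_tokens_per_message = total_tokens / 2                    # 4000
    avg_context = avg_tokens_per_message - generated_per_reply   # 3900

    cost_per_message = (avg_context * CONTEXT_PRICE
                        + generated_per_reply * GENERATED_PRICE)
    conversation_cost = cost_per_message * messages_sent

    print(f"per message:  ${cost_per_message:.4f}")    # $0.1230
    print(f"conversation: ${conversation_cost:.3f}")   # $5.535
    print(f"30 days:      ${conversation_cost * 30:.2f}")  # $166.05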