I'm curious to know which frontend alternatives you've been using to replace ChatGPT when working with the OpenAI API. Since ChatGPT Plus has a fairly low message limit, I prefer to use the API, but I still want a similar (and hopefully better) UI.<p>My specific needs include:<p>- Token counting and cost estimation tools.<p>- Visibility into the sliding window of context in chat-style environments, since it's unclear which parts of the conversation are actually considered in OpenAI's UI.<p>- Code compression and optimization techniques, such as stripping comments, removing irrelevant utility functions, or other methods often discussed here.<p>- Assistance with frontend coding: for instance, I struggle with managing React projects that have numerous files and require changes in multiple places to pass data around. While the 32k model might help, it can be expensive.
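For context, here's the kind of rough estimate I'm currently doing by hand, and would love a frontend to do for me. The ~4 characters per token heuristic and the per-1k-token price below are assumptions for illustration (real counts need a tokenizer like tiktoken, and prices change):

```python
# Rough token count and cost estimate with no external dependencies.
# Heuristic: ~4 characters per token for English text (a common rule of thumb).
# The per-1k-token price is an assumed placeholder, not a current rate.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

def estimate_cost(text: str, price_per_1k: float = 0.002) -> float:
    """Estimate cost in USD for a given prompt at an assumed price."""
    return estimate_tokens(text) / 1000 * price_per_1k

prompt = "Explain the sliding context window in chat models."
print(estimate_tokens(prompt), estimate_cost(prompt))  # rough count and cost
```

A real tool would swap `estimate_tokens` for an exact tokenizer, but even this crude version catches "this prompt is way over budget" mistakes before they cost money.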
<a href="https://www.chatbotui.com" rel="nofollow">https://www.chatbotui.com</a> works fine for me on desktop (I had some issues on iOS -- could be related to my adblockers).<p>I use that with <a href="https://gptokens.com" rel="nofollow">https://gptokens.com</a> in another tab.<p>I'm still looking for a more iOS-friendly one, ideally with good text-to-speech support and the ability to use my own API key.
So far I haven't found any such tools. But <a href="https://easyfrontend.com/" rel="nofollow">https://easyfrontend.com/</a> helps me a lot to achieve what I want in a very short time.
I think the best frontend is a client you write yourself. You can switch effortlessly between models, summarize to compress, edit the context to be more relevant, strip out irrelevant parts, and so on. I like using it better than the web client most of the time. I'd share it, but it has lots of personal prompts and workflows that I'd rather keep private.
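To illustrate the context-editing part: the core of my client is just a function that trims old messages to fit a token budget before each API call. This is a minimal sketch with names of my own choosing; the chars/4 heuristic stands in for a real tokenizer:

```python
# Keep only the most recent messages that fit within a token budget,
# always preserving the system message if one is present.
# Token counting is a chars/4 heuristic; a real client would use a tokenizer.

def trim_context(messages: list[dict], max_tokens: int = 3000) -> list[dict]:
    def est(m: dict) -> int:
        return max(1, len(m["content"]) // 4)

    system = [m for m in messages if m["role"] == "system"][:1]
    budget = max_tokens - sum(est(m) for m in system)

    kept = []
    # Walk backwards from the newest message, keeping what fits.
    for m in reversed([m for m in messages if m["role"] != "system"]):
        cost = est(m)
        if cost > budget:
            break
        kept.append(m)
        budget -= cost

    return system + list(reversed(kept))
```

The trimmed list goes straight into the chat completions request; because you control this function, you can also drop whole topics or swap in a summary instead of raw history.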
I am currently using <a href="https://github.com/Bin-Huang/chatbox">https://github.com/Bin-Huang/chatbox</a>. It ticks some of the boxes you've mentioned.
While we are only starting to integrate LLMs into our tool, it was originally started to help deal with the React issue you describe, i.e. having to make related updates across large numbers of files. We wanted a single source of truth from which we generate the majority of the implementation. We are working to simplify and reduce the problem for ChatGPT by combining it with our code gen tech, helping to deal with the issues around limited context, hallucinations, and losing focus. I think the current generation of LLMs has fundamental limitations that will make large code bases intractable for them. Hell, even we mere mortals have trouble with this, and while the machines can be better at some things, they fail spectacularly at other tasks we find trivial.<p><a href="https://github.com/hofstadter-io/hof">https://github.com/hofstadter-io/hof</a>