I'm sure there will be improvements, but when I made a custom GPT a few weeks ago I was very unimpressed:

1. The GPT builder itself didn't feel like a well-tuned prompt (i.e., the prompt they use to guide prompt creation). It created long-winded prompts that left out information and didn't pay attention to what I said. Anything I enter into the GPT builder interface is probably very important!

2. The quotas are fairly low, and they apply to testing. I got maybe 10 minutes of playtesting before I ran out of quota.

3. There are no tools to help with testing; it's all just vibes. No prompt comparisons.

4. The implied RAG is entirely opaque. You can upload documents, and I guess they get used...? But how? The best I could do was put text in the prompt telling GPT to be very open about how it used the documents, then ask it questions to see whether it understood their content and purpose.

5. There's no extended interface beyond the intro questions. No way to emit buttons or choices, just the ever-present text field.

6. There's no hidden state. I don't particularly want impossible-to-see state, but a powerful technique is to have GPT make plans or internal notes as it responds. These are very confusing when presented in the chat itself. In applications I often use tags like <plan>...</plan> to mark them, which is compatible with the simple data model of a chat.

7. There's no context management. As with hidden state, I'd like to be able to mark things as "sticky": things that should be prioritized when the conversation outgrows the context window.

These are all fixable, though I worry that OpenAI's confidence in AI maximalism will keep them from building hard features; instead they'll just rely on GPT "getting smarter" and magically not needing real features.
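To make the `<plan>` technique from point 6 concrete, here's a minimal sketch of how an application layer can hide those blocks. This assumes the model has been prompted to wrap internal notes in `<plan>...</plan>`; the function names and the example reply are hypothetical, not part of any OpenAI API.

```python
import re

# Matches <plan>...</plan> blocks, including ones spanning multiple lines.
PLAN_RE = re.compile(r"<plan>.*?</plan>", re.DOTALL)

def visible_text(assistant_message: str) -> str:
    """Strip <plan> blocks so the user only sees the actual reply.
    The full message (plans included) is what you'd keep in the
    transcript sent back to the model on later turns."""
    return PLAN_RE.sub("", assistant_message).strip()

def hidden_plans(assistant_message: str) -> list[str]:
    """Collect the plan blocks for debugging or inspection."""
    return [m.group(0) for m in PLAN_RE.finditer(assistant_message)]

reply = "<plan>1. Greet. 2. Ask for the error text.</plan>Hi! What error are you seeing?"
print(visible_text(reply))  # Hi! What error are you seeing?
```

The point of keeping the plans in the stored transcript, while hiding them in the UI, is that the model can refer back to its own notes on later turns without the chat becoming confusing to read.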
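The "sticky" idea in point 7 can also be sketched in a few lines. This is a hypothetical illustration, not anything the GPT builder supports: sticky messages get first claim on the context budget, then the most recent non-sticky messages fill what's left (using character counts as a stand-in for tokens).

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str
    sticky: bool = False  # sticky messages survive truncation

def fit_context(messages: list[Message], budget: int) -> list[Message]:
    """Trim the transcript to fit `budget`, keeping sticky messages
    regardless of age and then as many recent messages as fit.
    (If sticky messages alone exceed the budget, they're kept anyway;
    a real system would need a policy for that case.)"""
    used = sum(len(m.text) for m in messages if m.sticky)
    keep = {id(m) for m in messages if m.sticky}
    # Walk newest-first so recent turns win the remaining budget.
    for m in reversed(messages):
        if not m.sticky and used + len(m.text) <= budget:
            keep.add(id(m))
            used += len(m.text)
    return [m for m in messages if id(m) in keep]  # original order

history = [
    Message("system", "rules", sticky=True),
    Message("user", "aaaa"),
    Message("user", "bb"),
]
print([m.text for m in fit_context(history, budget=9)])  # ['rules', 'bb']
```

In the example, the old "aaaa" turn is dropped while the sticky system message survives, which is exactly the prioritization the comment is asking for.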