I’m not a proompter, but why can’t LLMs have an initial step that generates an effective prompt from the user’s input, asks a clarifying question if the intent isn’t clear enough, and then feeds that prompt back to the LLM to better fulfill the request? People have clearly been busy generating training data on what an effective prompt looks like, and it seems silly to require users to learn verbal gymnastics when the LLM itself specializes in writing in specific styles.
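
Roughly what I have in mind, as a minimal sketch in Python. The `call_llm` and `ask_user` callables here are hypothetical stand-ins for whatever chat API and UI you’d actually wire in, not real library functions:

```python
# Two-stage "prompt rewriting" front end: rewrite, clarify if needed, then answer.
# call_llm(system, user) -> str and ask_user(question) -> str are assumed helpers.

REWRITER_SYSTEM = (
    "You rewrite raw user requests into clear, specific prompts. "
    "If the request is ambiguous, respond with exactly one clarifying "
    "question prefixed by 'CLARIFY:'. Otherwise respond with the "
    "rewritten prompt and nothing else."
)

def answer(user_input: str, call_llm, ask_user) -> str:
    """Rewrite the user's input into an effective prompt, clarify if
    intent is unclear, then feed the rewritten prompt back to the model."""
    rewritten = call_llm(REWRITER_SYSTEM, user_input)
    while rewritten.startswith("CLARIFY:"):
        # Intent unclear: surface the model's question and fold the
        # user's answer back into the request before rewriting again.
        reply = ask_user(rewritten.removeprefix("CLARIFY:").strip())
        user_input = f"{user_input}\n\nClarification: {reply}"
        rewritten = call_llm(REWRITER_SYSTEM, user_input)
    # Second pass: the model answers its own, better-phrased prompt.
    return call_llm("You are a helpful assistant.", rewritten)
```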