> It may also be promising to test Chain-of-Thought (CoT) prompting strategies. This entails adding a "rationale" key to the model's output so that the extra reasoning improves performance. However, the output must then be a composite object containing both the "rationale" key and the string-valued response, and our results suggest that this additional output structure may lower success rates. In a spirit similar to structured decoding methods, it may also help to end the prompt with `{`, or with the opening of the expected key, such as `{"paraphrased_questions": [`.

For most of my prompts I want CoT.
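A minimal sketch of the idea, combining the composite `"rationale"` output with the `{` prefill trick. The prompt wording, the key names, and the example completion are illustrative assumptions, not the original experiment's exact setup:

```python
import json

def build_cot_prompt(question: str) -> str:
    """Ask for a composite JSON object: a 'rationale' key carrying the
    CoT reasoning, plus the actual response. The trailing '{' nudges
    the model to continue the JSON object, as in structured decoding."""
    return (
        "Paraphrase the question below. Reason step by step, then answer.\n"
        'Respond as JSON: {"rationale": "<your reasoning>", '
        '"paraphrased_questions": ["<paraphrase>", ...]}\n\n'
        f"Question: {question}\n\n"
        "{"  # prefilled opening brace
    )

def parse_cot_response(completion: str) -> dict:
    """Re-attach the prefilled '{' before parsing the model's completion."""
    return json.loads("{" + completion)

# Hypothetical model completion (everything generated after the prefilled '{'):
completion = (
    '"rationale": "The question asks about capital cities.", '
    '"paraphrased_questions": ["Which city is the capital of France?"]}'
)
obj = parse_cot_response(completion)
```

The parsing helper deliberately mirrors the prefill: since the prompt ends with `{`, the model's completion lacks the opening brace, so it is re-added before `json.loads`.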