I've noticed over time that ChatGPT 3.5 and 4 are now very likely to refuse to finish tasks if they're too long.<p>For example, it will output things like:<p>User input example: Give me an object with 50 keys corresponding to X<p>GPT output:<p>{<p><pre><code> output1: "example 1",
output2: "example 2",
output3: "example 3",
//etc....do the same as I did above for the 47 others keys you asked for
</code></pre>
}<p>I've tried a lot of different prompts/custom instructions to try to force it to finish long tasks, but it will either do the same BS after a while, or crash.<p>I think OpenAI is pushing hard to make ChatGPT's outputs as small and cheap as possible (either by finetuning or prompting). That would be acceptable if it were still a free product, but I'm paying for Plus, so this is infuriating.<p>Do you have any methods to force it to complete tasks without skipping anything?<p>Thanks
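One workaround that often sidesteps this (not mentioned in the thread, just a common pattern) is to request the keys in smaller batches and merge them client-side, since the model is far less likely to elide a 10-key object than a 50-key one. A minimal sketch, with a stubbed `ask_model` standing in for the real API call — the stub and its prompt format are placeholders, not an actual ChatGPT interface:

```python
import json

def ask_model(prompt: str) -> str:
    # Stub standing in for a real ChatGPT API call. It parses the key
    # range off the end of the prompt and returns a small JSON object,
    # just so the chunking logic below can be demonstrated end to end.
    start, end = (int(n) for n in prompt.split()[-2:])
    return json.dumps({f"output{i}": f"example {i}" for i in range(start, end + 1)})

def collect_keys(total: int, batch: int = 10) -> dict:
    # Ask for the keys in batches small enough that the model actually
    # finishes each one, then merge the partial objects client-side.
    merged = {}
    for start in range(1, total + 1, batch):
        end = min(start + batch - 1, total)
        prompt = f"Give me a JSON object with keys output{start} through output{end}: {start} {end}"
        merged.update(json.loads(ask_model(prompt)))
    return merged

result = collect_keys(50)
print(len(result))  # 50 — no "// etc" elisions to patch up by hand
```

The batch size is a knob: smaller batches cost more round trips but make truncation less likely.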
Just a couple weeks ago it was discovered that you could get ChatGPT to regurgitate training data by having it repeat words forever (source: <a href="https://www.wired.com/story/chatgpt-poem-forever-security-roundup/" rel="nofollow noreferrer">https://www.wired.com/story/chatgpt-poem-forever-security-ro...</a>). Combined with the overall demand, I suspect they're trying to keep it from attempting repetitive tasks that could be exploited.<p>Some redditors have reported success in getting it to finish a job with replies like these, but ymmv:
No, I'd like you to do what I asked.
I’m not sure how to continue from there, can you finish the rest?
this is very important for my career<p>I love the irony here. The sales pitch was that AI is so incredible that it would do our work with minimal effort on our part. No more coding! Just have a conversation with it about what you want done! But it turns out we have to try to trick or coerce it into doing work. Not because the underlying AI is self-aware and lazy, but because the guardrails built to keep it from doing things it shouldn't are written in natural language and can't cleanly distinguish between wanted and unwanted behavior.
I have seen that too: I want to just cut and paste the response, but the returned content has a one-liner like "the rest of your code here" where the actual code should be, so I can't. I ask for the full content, but don't always get it.
Have you tried different prompts with more instructions, like "don't truncate or use 'etc...' in your response"? Providing examples in the prompt typically helps as well.<p>fwiw, the following prompts work with Bard:<p>"Give me a JSON object with 50 keys and values"<p>"Give me a JSON object with 50 keys and string values"
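Whichever prompt you settle on, it's worth verifying the reply programmatically instead of eyeballing it. A minimal sketch, assuming the model returned raw JSON with no surrounding prose; elision comments like "// etc" aren't valid JSON, so they fail the parse step outright:

```python
import json

def validate_json_reply(reply: str, expected_keys: int) -> tuple[bool, str]:
    # Check that the reply parses as JSON, is an object, and has the
    # full key count the prompt asked for.
    try:
        obj = json.loads(reply)
    except json.JSONDecodeError as e:
        return False, f"not valid JSON: {e}"
    if not isinstance(obj, dict):
        return False, "not a JSON object"
    if len(obj) != expected_keys:
        return False, f"expected {expected_keys} keys, got {len(obj)}"
    return True, "ok"

good = json.dumps({f"key{i}": str(i) for i in range(50)})
print(validate_json_reply(good, 50))                    # (True, 'ok')
print(validate_json_reply('{"a": 1, // etc}', 50)[0])   # False
```

A failed check is your signal to re-prompt rather than paste a broken object into your code.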