I’m a Korean high school student currently preparing for the CSAT (college entrance exam), and I happened to notice some persistent cache-loop behavior while using GPT for document-heavy tasks.

*Repeated PDF failures seemed to create token overload and session slowdowns, so I tried manually analyzing the session: tracking token counts and testing some user-side “optimizations”, like auto-removing failed outputs and cleaning up redundant versions (rough sketch at the end of this post).*

*I used GPT itself to help write the report and interpret the data. It was a mix of curiosity, frustration, and… maybe procrastination. But it turned into a fun experiment.*

I’ve only been exploring GitHub and ChatGPT for less than a month, so there are still many things I’m unfamiliar with.

If there’s anything I’ve overlooked or could improve, I’d really appreciate your feedback.
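For reference, here’s a rough sketch of the kind of token-counting/cleanup pass I mean. It assumes a tiktoken-style tokenizer and a simplified JSON export of the session; the file name and the "status"/"content" field names are made up for illustration, not the real ChatGPT export schema:

    # Minimal sketch: count tokens before/after dropping failed outputs.
    # Assumes session.json is a list of {"role", "content", "status"} dicts
    # (hypothetical schema, not the actual ChatGPT export format).
    import json
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-family tokenizer

    with open("session.json") as f:
        messages = json.load(f)

    # Drop messages marked as failed, keep everything else.
    kept = [m for m in messages if m.get("status") != "failed"]

    def total_tokens(msgs):
        return sum(len(enc.encode(m["content"])) for m in msgs)

    print(f"tokens: {total_tokens(messages)} -> {total_tokens(kept)}")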