IIRC, there was a paper (I believe "Lost in the Middle", Liu et al. 2023) showing that GPT-style models recall information best from the beginning and end of the context window and much less reliably from the middle. In that regard they behave like human memory, which shows the same primacy/recency effect. What I'm wondering is whether the recent efforts to extend context windows actually get models to attend roughly uniformly across the whole prompt, or whether the middle still gets lost.
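For what it's worth, the usual way this effect is probed is a "needle in a haystack" test: bury a key fact at varying depths inside filler text and check whether the model can recall it. A minimal sketch of the prompt construction (the filler/needle strings and the depth scheme here are my own placeholders, not from any particular paper):

```python
# Sketch: place a "needle" fact at varying relative depths in filler text.
# Recall accuracy vs. depth is then measured by querying a model with each
# prompt (model call omitted -- this only builds the test prompts).

FILLER = "The sky was clear that day."   # neutral padding sentence
NEEDLE = "The secret code is 4172."      # fact the model must recall

def build_prompt(depth: float, n_sentences: int = 100) -> str:
    """Insert NEEDLE at a relative depth (0.0 = start, 1.0 = end)."""
    body = [FILLER] * n_sentences
    body.insert(int(depth * n_sentences), NEEDLE)
    return " ".join(body) + "\nQuestion: What is the secret code?"

# Prompts with the fact at the start, middle, and end of the context.
prompts = {d: build_prompt(d) for d in (0.0, 0.5, 1.0)}
```

If middle-of-context attention really were uniform after these context-extension efforts, recall accuracy would be flat across depths instead of U-shaped.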