This fits well with what we know about AI and Bloom's Taxonomy of Learning, which runs from 'remembering' on the lowest step to 'creating' at the top (remember, understand, apply, analyse, evaluate, create).<p>Undergrad exams usually sit somewhere around 'remembering': simple fact or definition regurgitation. Most of those facts should be in ChatGPT's training data. As the degree proceeds, things get harder and we move up the taxonomy, and that's where we know LLMs fail: there's nothing in there that can really 'understand', let alone 'create'.