Worrying that AI might make exams obsolete is kind of odd. It's a symptom, I guess, but only at the very end of a long, stupid, cascading failure.<p>Students cheat because they want the degree but don't care to learn the material. Or maybe they do want to learn the material, but see employment at the end as requiring better grades than they can get naturally. Either way, the result of bullshit credentialism. (Bullshit credentialism probably comes in part from bullshit jobs, where work-product can't be evaluated because it's all useless.)<p>Hopefully students manage to cheat on so many tests that grades become completely useless for employers. Then they can become something useful for the students instead: a way to evaluate their progress and get feedback.
This fits well with what we know about AI and Bloom's Taxonomy of Learning, which runs from 'remembering' on the lowest step to 'creating' at the top (remember, understand, apply, analyse, evaluate, create).<p>Undergrad exams usually sit somewhere around 'remembering': simple fact or definition regurgitation. Most of those facts should be in ChatGPT's training data. As the degree proceeds, things get harder and we move up the taxonomy, and that's where we know LLMs fail: there's nothing in there that can really 'understand', let alone 'create'.
This is just a far more reactionary way to communicate model benchmarks.<p>"A computer algorithm performed better than humans on a task it was designed for" sounds like the last forty years in a nutshell.
Yes, the current crop of world-knowledge AIs are smarter than any human who ever lived.<p>And big names are calling them useless.<p>This is proof that the human race is not generally capable of solving novel problems, so I hope people will stop expecting AIs to solve every novel problem.
Can "educated machines" reproduce on their own yet? How do they fit into the food web, the carbon cycle, the nitrogen cycle, etc.? Can they make meaning, toward a purpose in life? What role do they serve other than stroking the ego of human ingenuity and letting a few further extract money from the many?<p>Can AI love, yet?<p>To what degree are we just avoiding dealing with existential threats by churning through resources to play god and make robots in our image? (Albeit in the image of a subset of humanity, and not without bias.)<p>I'm not yet convinced this AI work isn't a waste of time and other resources. I'd far rather we put our efforts into land/water stewardship and a "new" vision for human existence based on many of the old ways that got us this far, so that we might go another several hundred thousand years.<p>In an unbroken oral tradition, what stories might those future people tell about this time?