Do you have a go-to question (or several) to check if an LLM knows its stuff? For me, I ask a simple question:

"What is Operation Konrad III?"

which most LLMs fail due to the (relative) obscurity of the event.
Not really scientific or anything, but I tend to give it the task: "Write a simple HTTP server in Go that saves all requests into a SQLite database."

What I am looking for is:

- did it forget to import the SQLite driver?

- is it doing weird SQL shenanigans like selecting MAX(id) to obtain the next potential id?

- is the code rather simple or over-engineered?

update: Most LLMs produce a decent answer; however, if you increase the difficulty a little by asking "Write a simple and CGo-free HTTP server in Go ...", most LLMs get the SQL driver wrong (except for gpt-4-1106-preview). A sketch of the kind of answer I'm looking for is below.
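For reference, here's a minimal sketch of an answer that passes all three checks, assuming modernc.org/sqlite as the CGo-free driver (it's pure Go and registers itself under the name "sqlite") and letting SQLite's INTEGER PRIMARY KEY hand out ids instead of any MAX(id) tricks; the schema and the :8080 port are just illustrative choices:

    package main

    import (
        "database/sql"
        "io"
        "log"
        "net/http"

        _ "modernc.org/sqlite" // pure-Go driver, no CGo needed
    )

    func main() {
        db, err := sql.Open("sqlite", "requests.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // INTEGER PRIMARY KEY lets SQLite pick the next id itself,
        // so there is no need for any MAX(id) shenanigans.
        _, err = db.Exec(`CREATE TABLE IF NOT EXISTS requests (
            id     INTEGER PRIMARY KEY,
            method TEXT,
            path   TEXT,
            body   TEXT
        )`)
        if err != nil {
            log.Fatal(err)
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            body, _ := io.ReadAll(r.Body)
            _, err := db.Exec(
                `INSERT INTO requests (method, path, body) VALUES (?, ?, ?)`,
                r.Method, r.URL.Path, string(body),
            )
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.WriteHeader(http.StatusNoContent)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

This builds with CGO_ENABLED=0, which is the part that trips models up: they tend to reach for mattn/go-sqlite3, which requires CGo.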
I give it a large block of code and see if it can find the bug. Amusingly, GPT sometimes passes with flying colors (finding bugs I didn't see and spotting unused imports), but at other times it just flat out fails to see anything.
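A toy version of that test, in Go to match the task above (hypothetical code, far shorter than what I actually paste in). The unused-import case doesn't translate here, since the Go compiler rejects unused imports outright, so the planted bug is a logic error the model should flag:

    package main

    import "fmt"

    // sum is meant to add up every element of xs.
    func sum(xs []int) int {
        total := 0
        // planted bug: the loop bound skips the last element
        for i := 0; i < len(xs)-1; i++ {
            total += xs[i]
        }
        return total
    }

    func main() {
        fmt.Println(sum([]int{1, 2, 3})) // prints 3, expected 6
    }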