Generative models have gotten good. They rarely make simple mistakes or obvious hallucinations anymore. But that means the gotchas are more subtle, more hidden, and maybe more dangerous.

Especially in areas where I'm not an expert, the answers I get from models seem fine to me, but to a domain expert they can be total BS.

One way I mitigate this is to instruct the models to reply less generatively. For example, instead of asking for a summary, I ask the model to highlight or extract the verbatim sentences that contain the key info from the article (rough sketch at the end of this post).

So HN, am I the only one doing this? What are your non-generative tricks?
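Here's the rough sketch mentioned above: a minimal illustration of the extractive prompt, assuming the OpenAI Python SDK. The model name, prompt wording, and file name are placeholders, not a recommendation. The nice side effect of asking for verbatim quotes is that hallucination turns from a judgment call into a string match: any returned line that isn't a substring of the source is an immediate red flag.

    # A minimal sketch of the "extract, don't summarize" trick.
    # Assumes the OpenAI Python SDK; model name, prompt wording, and
    # file name are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    article = open("article.txt").read()

    prompt = (
        "Do not summarize or paraphrase. Quote verbatim the five sentences "
        "from the article below that carry the most important information, "
        "one per line, copied exactly as written.\n\n" + article
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(answer)

    # The payoff of extractive output: every returned line should be a
    # substring of the source, so fabricated sentences are detectable.
    for line in answer.splitlines():
        if line.strip() and line.strip() not in article:
            print("NOT IN SOURCE:", line)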