Naive use of an LLM is unlikely to produce good results, even with the best models. You need a process: a sequence of steps, with appropriately safeguarded prompts at each step. AI will eventually reach a point where you can get all the subtle nuance and quality in task performance you might desire, but right now you have to dumb things down and be very explicit. Assumptions will bite you in the ass.

Naive, superficial one-shot prompting is insufficient for quality, predictable results, even with CoT or other clever techniques, or with a large context window.

Dropping the resume into a prompt with few-shot examples can buy you a little consistency, but what really needs to happen is a series of discrete operations that link the relevant information to the relevant decisions. You'd track years of experience, age, work history, certifications, and so on, completely discarding any information not specifically relevant to the decision of whether to proceed in the hiring process. Once that information is separated out, you consider each item in isolation, scoring it from 1 to 10 with a short justification anchored by many-shot examples. Then you build the process iteratively with the bot, asking it which variables should be considered in the context of the others, and incorporate a -5 to +5 modifier for each cluster of variables (8 companies in the last 2 years might earn a significant negative modifier, but maybe there's an interesting success story involved, so you hold off on scoring that cluster until after the interview).

And so on, down the line, through the whole hiring process.
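The pipeline described above might be sketched like this. `call_llm` is a hypothetical stand-in for whatever model API you actually use, and the field names, prompts, and weights are illustrative, not prescriptive; the aggregation step at the end is deliberately plain deterministic code so it can be inspected without the model in the loop.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API; wire to your provider."""
    raise NotImplementedError

EXTRACTION_FIELDS = ["years_of_experience", "work_history", "certifications"]

def extract_fields(resume_text: str) -> dict:
    """One discrete operation per field: pull out only what's relevant,
    discarding everything else on the resume."""
    fields = {}
    for name in EXTRACTION_FIELDS:
        prompt = (
            f"From the resume below, extract ONLY the candidate's {name}.\n"
            f"Reply with the value alone, or NONE if absent.\n\n{resume_text}"
        )
        fields[name] = call_llm(prompt).strip()
    return fields

def score_field(name: str, value: str, many_shot_examples: str) -> dict:
    """Score one field in isolation, 1-10, with a short justification,
    anchored by many-shot examples."""
    prompt = (
        f"{many_shot_examples}\n\nField: {name}\nValue: {value}\n"
        "Score this field from 1 to 10 with a one-sentence justification.\n"
        'Reply as JSON: {"score": <int>, "justification": "<text>"}'
    )
    return json.loads(call_llm(prompt))

def aggregate(field_scores: dict, cluster_modifiers: dict) -> dict:
    """Deterministic roll-up: sum of the 1-10 field scores plus the -5..+5
    cluster modifiers; any cluster flagged 'hold' is deferred (e.g. the
    job-hopping-with-a-success-story case waits for the interview)."""
    base = sum(s["score"] for s in field_scores.values())
    adjustment = sum(
        m["modifier"] for m in cluster_modifiers.values() if not m.get("hold")
    )
    held = [name for name, m in cluster_modifiers.items() if m.get("hold")]
    return {"total": base + adjustment, "held_clusters": held}
```

Keeping `aggregate` as ordinary code rather than another prompt is the point: the model produces the small, auditable pieces, and the arithmetic that combines them never hallucinates.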
Any time a judgment or decision has to be made, break it down into component parts and process each part with its own prompts and processes, until you have a cohesive whole, any part of which you can interrogate and inspect for justifiable reasoning.

The output can then be handled by a human, adjusted where it's reasonable to do so, and you avoid the endless maze of mode-collapse pits and hallucinated dragons.

LLMs are not minds, and they're incapable of acting like minds unless you build a mind-like process around them. If you want a reasonable, rational, coherent, explainable process, you can't get there with zero- or one-shot prompting. Complex, impactful decisions like hiring and resume screening aren't tasks current models are equipped to handle naively.
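One way the "interrogate, inspect, and adjust" part might look in practice: keep every component decision as a record of what was asked, what the model answered, and any human override, so the final outcome is just a recomputable sum with a paper trail. Everything here is an illustrative sketch; the names and the simple threshold rule are assumptions, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubDecision:
    """One component of the overall judgment: the prompt used, the model's
    score and justification, and an optional human correction."""
    name: str
    prompt: str
    model_score: int
    justification: str
    human_override: Optional[int] = None

    @property
    def score(self) -> int:
        # A human adjustment, where reasonable, takes precedence.
        return self.human_override if self.human_override is not None else self.model_score

def final_decision(parts: list, threshold: int) -> dict:
    """The whole is the inspectable sum of its parts; changing one override
    recomputes the outcome, and the trail shows why each part scored as it did."""
    total = sum(p.score for p in parts)
    return {
        "total": total,
        "proceed": total >= threshold,
        "trail": [(p.name, p.score, p.justification) for p in parts],
    }
```

Because each `SubDecision` carries its own prompt and justification, a reviewer can challenge any single link in the chain instead of arguing with one opaque verdict.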