A major problem that we are facing is that the model (GPT-4) is not aware that there are multiple answers present for the query in the context. It tries to pick one possible answer and expands on that. Ideally it should be able to detect the ambiguity and ask a clarifying question.

I tried to make this an "instruction" in the prompt, but that doesn't seem to nudge the model towards identifying ambiguity. I am not trying to improve the retrieval to make the context less ambiguous.
Has anyone found a workaround? (One idea is sketched below.)
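One possible workaround, offered only as an untested sketch: instead of burying "ask for clarification if the context is ambiguous" inside the main answer prompt, run a separate, narrowly scoped classification call first and branch in application code. The prompt wording, the check_ambiguity helper, and the placeholder follow-up functions are my own assumptions, and the OpenAI Python client is assumed purely for illustration.

```python
# Untested sketch: a dedicated "is this ambiguous?" call before the normal RAG answer.
# Prompt wording and helper names are assumptions, not a known fix.
from openai import OpenAI

client = OpenAI()

AMBIGUITY_PROMPT = (
    "You will be given a question and retrieved context. "
    "If the context supports more than one distinct answer to the question, "
    "reply with the word AMBIGUOUS followed by one clarifying question. "
    "Otherwise reply with the word CLEAR."
)

def check_ambiguity(question: str, context: str) -> str:
    # Single-purpose call: the model only has to classify, not answer.
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": AMBIGUITY_PROMPT},
            {"role": "user", "content": f"Question: {question}\n\nContext:\n{context}"},
        ],
    )
    return resp.choices[0].message.content.strip()

# Branch in the application rather than hoping one big prompt does both jobs:
# verdict = check_ambiguity(user_question, retrieved_context)
# if verdict.startswith("AMBIGUOUS"):
#     surface_clarifying_question(verdict)          # hypothetical UI hook
# else:
#     answer_normally(user_question, retrieved_context)  # hypothetical RAG answer step
```

The idea is simply that a narrow yes/no classification task may be easier for the model to follow than one long instruction mixed in with everything else; whether it actually helps would need testing.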
My own experience is that when it misinterprets an ambiguity, it's immediately obvious to me what was ambiguous in my question and I can add clarification. Asking for a clarification always adds an extra step, compared to giving me a likely answer which only adds an extra step when it's wrong.
This is a problem I've seen too, but I haven't dug into it. Could you try specifically asking what extra information the LLM would want in order to give the best answer (or, e.g., what information is missing), to push it to answer that question directly? Something like the sketch below.
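For concreteness, one way that framing could be worded (just a guess at the wording; not verified to reliably trigger clarifying questions):

```python
# Hypothetical "what's missing" instruction; wording is an assumption, not a tested prompt.
MISSING_INFO_PROMPT = (
    "Before answering, state what extra information, if any, you would need to give "
    "the single best answer. If the retrieved context already pins down one answer, "
    "reply NONE and then answer. Otherwise, ask the user for the missing detail "
    "instead of guessing."
)
```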
A few days back I chatted with the Stanford Alpaca folks. The response was that there is no clear solution as of now; all the transformer models are designed to answer questions within a given context.