Anecdote: neural-cherche seems useful as I have analysts creating positive & negative feedback data (basically thumbs-up/down signals) that we will use to fine-tune retrieval models.

Am I right to assume that not much effort is required to make this work for similar models (e.g. BGE)?
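In case it helps to make the question concrete, this is roughly how I'd turn the thumbs signals into training triplets. It's a generic sketch of the (query, positive, negative) format that triplet-based fine-tuning usually expects, not neural-cherche's actual API; the function and field names (feedback_to_triplets, "query", "passage", "thumbs_up") are made up:

    from typing import Dict, List, Tuple

    def feedback_to_triplets(
        feedback: List[Dict],  # rows like {"query": ..., "passage": ..., "thumbs_up": True/False}
    ) -> List[Tuple[str, str, str]]:
        """Pair each thumbs-up passage with a thumbs-down passage for the same query."""
        positives: Dict[str, List[str]] = {}
        negatives: Dict[str, List[str]] = {}
        for row in feedback:
            bucket = positives if row["thumbs_up"] else negatives
            bucket.setdefault(row["query"], []).append(row["passage"])

        triplets = []
        for query, pos_passages in positives.items():
            for pos in pos_passages:
                for neg in negatives.get(query, []):
                    triplets.append((query, pos, neg))
        return triplets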
Looks cool. A couple of questions:
1. Does it support fine-tuning with different losses? For example, a loss where you don't need to provide negatives and the other examples in the batch are used as negatives (I've sketched what I mean after question 2).
2. Can you share inference speed info? I know ColBERT should be slow since it creates many embeddings per passage.
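To make question 1 concrete, here is the kind of loss I have in mind. It's plain PyTorch rather than anything from neural-cherche, and the function name and temperature value are placeholders:

    import torch
    import torch.nn.functional as F

    def in_batch_negatives_loss(
        query_emb: torch.Tensor, doc_emb: torch.Tensor, temperature: float = 0.05
    ) -> torch.Tensor:
        """Row i of doc_emb is the positive for query i; every other row in the batch acts as a negative."""
        query_emb = F.normalize(query_emb, dim=-1)
        doc_emb = F.normalize(doc_emb, dim=-1)
        scores = query_emb @ doc_emb.T / temperature                  # (batch, batch) cosine similarities
        labels = torch.arange(scores.size(0), device=scores.device)   # positives sit on the diagonal
        return F.cross_entropy(scores, labels)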
I like the inclusion of both positive and negative examples!

Do you have advice for how to measure the quality of the finetuning beyond seeing the loss drop?
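For context, this is the sort of held-out check I had in mind: recall@k and MRR on labelled queries that were kept out of training. It's library-agnostic and the function name (evaluate_retrieval) is just illustrative:

    from typing import Dict, List, Set

    def evaluate_retrieval(
        rankings: Dict[str, List[str]],   # query_id -> doc_ids as ranked by the fine-tuned model
        relevant: Dict[str, Set[str]],    # query_id -> doc_ids labelled positive, held out from training
        k: int = 10,
    ) -> Dict[str, float]:
        """Recall@k and MRR over a held-out set of labelled queries."""
        recall, mrr, n = 0.0, 0.0, 0
        for query_id, ranked in rankings.items():
            positives = relevant.get(query_id, set())
            if not positives:
                continue
            n += 1
            recall += len(positives & set(ranked[:k])) / len(positives)
            # reciprocal rank of the first relevant document, 0 if none was retrieved
            rank = next((i + 1 for i, doc in enumerate(ranked) if doc in positives), None)
            mrr += 1.0 / rank if rank else 0.0
        n = max(n, 1)
        return {"recall@k": recall / n, "mrr": mrr / n}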