-
In my mind, reranking is just about reordering texts so that the order is more accurate - it doesn't constrain *how* you reorder them. Thus, you can also use Bi-Encoders / embedding models as rerankers; Cross-Encoders are just more common, as they tend to give better performance. It is cool that you have implemented Cross-Encoder support! If you want, you can open a PR and we can merge it :)
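To illustrate the point, here is a minimal sketch (not the evaluator's actual code; the model name is just an example) of using a plain embedding model as a reranker, i.e. reordering candidates by cosine similarity to the query:

```python
# Minimal sketch: a Bi-Encoder used as a reranker.
# The checkpoint name is illustrative; any SentenceTransformer model works.
from sentence_transformers import SentenceTransformer, util

query = "how do rerankers score documents?"
candidates = [
    "Cross-encoders take the query and document jointly and output a score.",
    "Bi-encoders embed query and document separately and compare the vectors.",
    "Completely unrelated text about cooking pasta.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = model.encode(query, convert_to_tensor=True)
c_emb = model.encode(candidates, convert_to_tensor=True)

# Reorder candidates by cosine similarity to the query.
scores = util.cos_sim(q_emb, c_emb)[0]
order = scores.argsort(descending=True).tolist()
reranked = [candidates[i] for i in order]
print(reranked)
```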
-
As far as I understand, reranker models typically take query + doc as input and directly give a score as output - so-called "cross-encoders".
However, when I read the RerankingEvaluator implementation (link), it gets embeddings for the query and the docs and then calculates cosine similarity.
I tried modifying the code to use the similarity score directly from the model output, and, as expected, the evaluation results are different (higher) from those of the default RerankingEvaluator implementation.
My question: why does the RerankingEvaluator implementation use embeddings + cos_sim instead of using the similarity score from the model?
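For context, a rough sketch of the two scoring paths being compared (the checkpoints are only examples, and this is not the evaluator's code):

```python
# Sketch of the two scoring paths discussed above.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

query = "what is a reranker?"
doc = "A reranker reorders retrieved documents by relevance to the query."

# Path 1: embeddings + cosine similarity (what RerankingEvaluator does
# per the question): encode query and doc separately, then compare.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = bi_encoder.encode(query, convert_to_tensor=True)
d_emb = bi_encoder.encode(doc, convert_to_tensor=True)
cos_score = util.cos_sim(q_emb, d_emb).item()

# Path 2: cross-encoder score: the model sees (query, doc) jointly and
# outputs a relevance score directly. Ranking by this score can produce a
# different ordering than ranking by cosine similarity, hence different
# evaluation numbers.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
ce_score = cross_encoder.predict([(query, doc)])[0]

print(f"cos_sim score:       {cos_score:.4f}")
print(f"cross-encoder score: {ce_score:.4f}")
```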