The In-Context Learning Embedding and Reranker Benchmark (ICLERB) evaluates embedding and reranking models used to retrieve examples for In-Context Learning (ICL).
In contrast to the commonly used MTEB, which evaluates embeddings by their ability to retrieve relevant documents, ICLERB measures the downstream-task performance impact of using these embedding or reranking models for ICL.
On this page, you'll find the leaderboard of embedding and reranking models evaluated on ICLERB; it is also available on Hugging Face.
To learn more about the methodology of ICLERB, you can read our white paper.
Coming Soon: We will publish our code on GitHub so that researchers can replicate our results.
The table below shows the average NDCG@K for various embedding and reranking models when used to retrieve In-Context Learning examples across a range of tasks.
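For readers unfamiliar with the metric, here is a minimal sketch of one standard way to compute NDCG@K. The `rewards` input (per-example downstream scores in retrieved order) and the function names are illustrative assumptions, not ICLERB's actual implementation.

```python
# Minimal sketch of NDCG@K, assuming `rewards` holds hypothetical
# downstream-task scores for the top retrieved ICL examples, listed
# in the order the retriever returned them.
import math

def dcg_at_k(rewards, k):
    """Discounted cumulative gain over the first k items."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rewards[:k]))

def ndcg_at_k(rewards, k):
    """NDCG@K: DCG of the retrieved order divided by the ideal (sorted) order."""
    ideal = dcg_at_k(sorted(rewards, reverse=True), k)
    return dcg_at_k(rewards, k) / ideal if ideal > 0 else 0.0

# Example: a retriever that ranks the best example second, not first,
# scores below 1.0 because its ordering is suboptimal.
print(ndcg_at_k([0.4, 0.9, 0.1], k=3))
```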