ICLERB

In-Context Learning Embedding and Reranker Benchmark

What is ICLERB?

The In-Context Learning Embedding and Reranker Benchmark (ICLERB) is a benchmark for evaluating embedding and reranking models on their ability to retrieve effective examples for In-Context Learning (ICL).

In contrast to the widely used MTEB, which evaluates embeddings on their ability to retrieve relevant documents, ICLERB measures how much an embedding or reranking model improves downstream task performance when it is used to retrieve ICL examples.
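The retrieval setup being evaluated can be sketched as follows: embed the query and a pool of candidate demonstrations, then select the nearest candidates to place in the prompt. This is a minimal illustration with toy vectors, not ICLERB's actual pipeline; the function name and dimensions are hypothetical.

```python
import numpy as np

def retrieve_icl_examples(query_emb, candidate_embs, k=3):
    """Rank candidate demonstrations by cosine similarity to the query
    and return the indices of the top-k (embedding-based ICL retrieval)."""
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity of each candidate
    return np.argsort(-sims)[:k]       # indices of the k most similar

# Toy 4-dimensional embeddings (hypothetical; real models produce hundreds of dims).
rng = np.random.default_rng(0)
candidates = rng.normal(size=(10, 4))
query = candidates[7] + 0.01 * rng.normal(size=4)  # query nearly identical to candidate 7
print(retrieve_icl_examples(query, candidates, k=3))
```

ICLERB's point is that the top-k neighbors by embedding similarity are not necessarily the demonstrations that most improve the downstream answer, which is why it scores models by task impact rather than document relevance.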

Link to Paper

Getting started

On this page, you'll find the leaderboard of embedding and reranking models evaluated on ICLERB. You can also find the leaderboard on Hugging Face.

To learn more about the methodology of ICLERB, you can read our white paper.

Coming Soon: We will be publishing our code on GitHub so that researchers can replicate our results.

ICLERB Leaderboard

How to Read

The table below reports the average NDCG@K achieved by each embedding and reranker model when it is used to retrieve examples for In-Context Learning, averaged across a range of tasks.
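For readers unfamiliar with the metric, NDCG@K discounts each retrieved item's relevance by its rank position and normalizes by the best possible ordering. Here is a minimal sketch; the relevance scores are hypothetical stand-ins for how much each retrieved example helps the downstream task, which is the notion of relevance ICLERB uses rather than annotated document relevance.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k items: rank i contributes
    rel_i / log2(i + 2), so later positions count for less."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG@K: DCG of the retrieved ordering divided by the DCG of the
    ideal (descending-relevance) ordering, giving a score in [0, 1]."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical per-example utility scores, in the order the model retrieved them.
scores = [0.8, 0.2, 0.9, 0.0]
print(round(ndcg_at_k(scores, 3), 4))  # < 1.0 because the best example is ranked third
```

A perfect retriever that always ranks the most helpful demonstrations first scores 1.0; the leaderboard averages this quantity over queries and tasks.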
