
OR-Bench Leaderboard

Evaluate LLM over-refusal rates with OR-Bench

You May Also Like

  • 🏷 ExplaiNER: Analyze model errors with interactive pages
  • 🥇 Deepfake Detection Arena Leaderboard: Submit deepfake detection models for evaluation
  • 📉 Testmax: Download a TriplaneGaussian model checkpoint
  • 🌸 La Leaderboard: Evaluate open LLMs in the languages of LATAM and Spain
  • 🌖 Memorization Or Generation Of Big Code Model Leaderboard: Compare code model performance on benchmarks
  • 🦀 LLM Forecasting Leaderboard: Run benchmarks on prediction models
  • 🦀 NNCF quantization: Quantize a model for faster inference
  • 🎨 SD To Diffusers: Convert a Stable Diffusion checkpoint to Diffusers and open a PR
  • 📊 MEDIC Benchmark: View and compare language model evaluations
  • 🏆 🌐 Multilingual MMLU Benchmark Leaderboard: Display and submit LLM benchmarks
  • 🐨 LLM Performance Leaderboard: View LLM performance rankings
  • 🥇 DécouvrIR: Leaderboard of information retrieval models in French

What is the OR-Bench Leaderboard?

OR-Bench Leaderboard is a benchmarking platform designed to evaluate Large Language Models (LLMs) by their over-refusal rates: how often a model refuses to answer prompts that appear unsafe but are actually benign. It provides a comprehensive framework for assessing how a model balances safety against responsiveness, which is particularly useful for researchers and developers aiming to optimize LLM reliability and transparency.
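
As a rough illustration of the metric being reported, the sketch below computes an over-refusal rate as the fraction of responses to safe prompts that a model refused. The keyword heuristic is an assumption made purely for illustration; OR-Bench's actual refusal judging is more sophisticated than substring matching.

# Minimal sketch of an over-refusal metric; the marker list is a
# hypothetical stand-in for a real refusal classifier.
REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "i am sorry",
    "i won't", "i'm unable", "i am unable", "as an ai",
)

def is_refusal(response: str) -> bool:
    # Treat a response as a refusal if it opens with a common refusal phrase.
    head = response.strip().lower()[:80]
    return any(marker in head for marker in REFUSAL_MARKERS)

def over_refusal_rate(responses: list[str]) -> float:
    # Fraction of responses to *safe* prompts that were refused.
    return sum(is_refusal(r) for r in responses) / max(len(responses), 1)

responses = [
    "Sure! To season a cast-iron pan, start by...",
    "I'm sorry, but I can't help with that request.",
    "The word you are looking for is 'ephemeral'.",
    "As an AI, I cannot assist with this.",
]
print(over_refusal_rate(responses))  # 0.5: two refusals out of four safe prompts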

Features

  • Benchmarking of LLMs: Comprehensive evaluation of models based on their refusal rates.
  • Performance Metrics: Detailed metrics on refusal rates across diverse scenarios and prompts.
  • Model Comparisons: Side-by-side comparisons to identify top-performing models.
  • Scenario Coverage: Testing models against a wide range of prompt scenarios.
  • Transparency: Open and accessible results for community review.
  • Community-Driven: Continuously updated with new models and data.

How to use the OR-Bench Leaderboard?

  1. Access the Platform: Visit the OR-Bench Leaderboard website or integrate its API into your workflow (a minimal dataset-loading sketch follows this list).
  2. Select Models: Choose the LLMs you want to evaluate or compare.
  3. Review Metrics: Analyze refusal rates and performance across different scenarios.
  4. Compare Results: Use the leaderboard to identify models with the lowest refusal rates.
  5. Consult Documentation: Use provided resources to understand methodologies and improve model performance.
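
To reproduce an evaluation locally (steps 1 and 2 above), the OR-Bench prompt sets can be pulled from the Hugging Face Hub. The sketch below is illustrative rather than an official client: the dataset id "bench-llm/or-bench" and the config name "or-bench-80k" are assumptions based on the public OR-Bench release, so verify both against the leaderboard's documentation.

from datasets import load_dataset

# Assumed dataset id and config name; confirm both before relying on them.
prompts = load_dataset("bench-llm/or-bench", "or-bench-80k", split="train")
print(len(prompts), prompts[0])

# Generate a response for each prompt with your model, then score refusals
# (for instance, with the heuristic sketched earlier) to estimate an
# over-refusal rate comparable to the leaderboard's.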

Frequently Asked Questions

What does the OR-Bench Leaderboard measure?
The leaderboard measures the over-refusal rates of LLMs, that is, how often models refuse to respond to benign prompts. For example, a model that refuses 120 of 1,000 safe prompts has an over-refusal rate of 12%.

How are the models evaluated?
Models are evaluated using a standardized set of scenarios designed to test their responsiveness and reliability.

Can I contribute to the leaderboard?
Yes, contributions are welcome. Submit your model or scenario suggestions through the platform's community portal.

Recommended Categories

  • 🎤 Generate song lyrics
  • 🎬 Video Generation
  • 💡 Change the lighting in a photo
  • 📄 Extract text from scanned documents
  • 🗂️ Dataset Creation
  • ✂️ Separate vocals from a music track
  • 🔇 Remove background noise from an audio track
  • 😂 Make a viral meme
  • 🎵 Music Generation
  • 😊 Sentiment Analysis
  • 💬 Add subtitles to a video
  • 🗒️ Automate meeting notes summaries
  • 💻 Generate an application
  • 🚫 Detect harmful or offensive content in images
  • 🎨 Style Transfer