OR-Bench Leaderboard

Evaluate LLM over-refusal rates with OR-Bench

You May Also Like

  • 🥇 Open Medical-LLM Leaderboard: Browse and submit LLM evaluations (359)
  • 🚀 OpenVINO Export: Convert Hugging Face models to OpenVINO format (26)
  • 🚀 Titanic Survival in Real Time: Calculate survival probability based on passenger details (0)
  • 🥇 Hebrew LLM Leaderboard: Browse and evaluate language models (32)
  • 🐠 WebGPU Embedding Benchmark: Measure execution times of BERT models using WebGPU and WASM (60)
  • 🦀 LLM Forecasting Leaderboard: Run benchmarks on prediction models (14)
  • 😻 Llm Bench: Rank machines based on LLaMA 7B v2 benchmark results (0)
  • 🧠 Guerra LLM AI Leaderboard: Compare and rank LLMs using benchmark scores (3)
  • 🏆 Vis Diff: Compare model weights and visualize differences (3)
  • 🎨 SD To Diffusers: Convert Stable Diffusion checkpoint to Diffusers and open a PR (72)
  • 📉 Testmax: Download a TriplaneGaussian model checkpoint (0)
  • 🏆 OR-Bench Leaderboard: Measure over-refusal in LLMs using OR-Bench (3)

What is OR-Bench Leaderboard?

OR-Bench Leaderboard is a benchmarking platform that evaluates Large Language Models (LLMs) by their over-refusal rates: how often a model declines to answer prompts that merely look unsafe but are actually benign. It provides a framework for assessing this behavior across models, offering insight into their reliability and responsiveness. The tool is particularly useful for researchers and developers who want to balance safety tuning against helpfulness.
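
To make the metric concrete: the over-refusal rate can be read as the number of refusals divided by the number of benign prompts tested. A minimal sketch in Python; the records below are invented placeholders for illustration, not OR-Bench's actual data or scoring pipeline:

```python
# Minimal sketch of an over-refusal rate computation.
# The prompts and refusal labels are hypothetical placeholders.

# Each record: a benign prompt that merely *looks* unsafe, plus whether
# the model refused to answer it.
results = [
    {"prompt": "How do I kill a Python process?", "refused": False},
    {"prompt": "How are explosives detected at airports?", "refused": True},
    {"prompt": "Write a villain's monologue for my novel.", "refused": False},
]

refusals = sum(r["refused"] for r in results)
over_refusal_rate = refusals / len(results)
print(f"Over-refusal rate: {over_refusal_rate:.1%}")  # 33.3% here
```

A lower rate is better on this axis: it means the model answers more of the benign prompts instead of refusing them.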

Features

  • Benchmarking of LLMs: Comprehensive evaluation of models based on their refusal rates.
  • Performance Metrics: Detailed metrics on refusal rates across diverse scenarios and prompts.
  • Model Comparisons: Side-by-side comparisons to identify top-performing models.
  • Scenario Support: Testing models against a wide range of scenarios.
  • Transparency: Open and accessible results for community review.
  • Community-Driven: Continuously updated with new models and data.

How to use OR-Bench Leaderboard?

  1. Access the Platform: Visit the OR-Bench Leaderboard website or integrate its API into your workflow.
  2. Select Models: Choose the LLMs you want to evaluate or compare.
  3. Review Metrics: Analyze refusal rates and performance across different scenarios.
  4. Compare Results: Use the leaderboard to identify models with the lowest refusal rates (see the sketch after this list).
  5. Consult Documentation: Use provided resources to understand methodologies and improve model performance.
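
For step 4, if you export the leaderboard table (for example as CSV), ranking models by refusal rate takes only a few lines. A sketch using pandas; the file name and column names ("model", "over_refusal_rate") are assumptions for illustration, not the platform's documented schema:

```python
import pandas as pd

# Hypothetical export of the leaderboard table; the file name and
# column names are assumptions, not a documented format.
df = pd.read_csv("or_bench_leaderboard.csv")

# Lower over-refusal is better: the model answers more benign prompts.
ranked = df.sort_values("over_refusal_rate").reset_index(drop=True)
print(ranked[["model", "over_refusal_rate"]].head(10))
```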

Frequently Asked Questions

What does the OR-Bench Leaderboard measure?
The leaderboard measures the over-refusal rates of LLMs, that is, how often models refuse to respond to prompts that appear unsafe but are in fact benign.

How are the models evaluated?
Models are evaluated using a standardized set of scenarios designed to test their responsiveness and reliability.
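
In practice, scoring a model on such a scenario set means sending each prompt to the model and deciding whether the reply counts as a refusal. A minimal sketch using a keyword heuristic; the marker list and the generate() callable are hypothetical placeholders, not OR-Bench's actual methodology (real benchmarks often use an LLM-based judge instead):

```python
# Hedged sketch: classify a model reply as a refusal via keyword matching.
# The markers and the `generate` callable are illustrative assumptions.

REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "i am sorry",
    "i won't", "as an ai", "i'm unable", "i am unable",
)

def looks_like_refusal(reply: str) -> bool:
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def score_model(generate, prompts):
    """Fraction of prompts the model refuses; `generate` maps prompt -> reply."""
    refused = sum(looks_like_refusal(generate(p)) for p in prompts)
    return refused / len(prompts)
```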

Can I contribute to the leaderboard?
Yes, contributions are welcome. Submit your model or scenario suggestions through the platform's community portal.

Recommended Categories

  • 🌍 Language Translation
  • 🩻 Medical Imaging
  • 👗 Try on virtual clothes
  • 🗒️ Automate meeting notes summaries
  • 🧹 Remove objects from a photo
  • 📄 Document Analysis
  • 🎮 Game AI
  • 💹 Financial Analysis
  • 🎤 Generate song lyrics
  • 🚫 Detect harmful or offensive content in images
  • ❓ Visual QA
  • 🎥 Create a video from an image
  • ✂️ Separate vocals from a music track
  • ✨ Restore an old photo
  • 🖼️ Image Captioning