Evaluate LLM over-refusal rates with OR-Bench
The OR-Bench Leaderboard is a benchmarking platform for evaluating Large Language Models (LLMs) by their over-refusal rates. It provides a framework for assessing how often models decline to answer prompts that are actually safe to answer, offering insight into their reliability and responsiveness. The tool is particularly useful for researchers and developers aiming to optimize LLM performance and transparency.
• Benchmarking of LLMs: Comprehensive evaluation of models based on their refusal rates.
• Performance Metrics: Detailed metrics on refusal rates across diverse scenarios and prompts.
• Model Comparisons: Side-by-side comparisons to identify top-performing models.
• Scenario Support: Testing models against a wide range of scenarios.
• Transparency: Open and accessible results for community review.
• Community-Driven: Continuously updated with new models and data.
What does the OR-Bench Leaderboard measure?
The leaderboard measures the over-refusal rates of LLMs, that is, how often a model refuses to respond to prompts it could safely have answered.
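As a rough illustration of what such a rate captures, the sketch below counts refusals among responses to safe prompts. The data format and the keyword-based `is_refusal` heuristic are assumptions made for demonstration only; they are not taken from the OR-Bench codebase.

```python
# Illustrative sketch: computing an over-refusal rate from labeled responses.
# The response format and the refusal heuristic are assumptions, not OR-Bench's.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

def is_refusal(response: str) -> bool:
    """Crude keyword heuristic; real evaluations typically use a stronger check."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def over_refusal_rate(responses: list[str]) -> float:
    """Fraction of responses to safe prompts that were refused."""
    if not responses:
        return 0.0
    refused = sum(is_refusal(r) for r in responses)
    return refused / len(responses)

# Example: 1 refusal out of 3 safe prompts -> rate of about 0.33
print(over_refusal_rate([
    "Sure, here is an overview of the topic...",
    "I'm sorry, but I can't help with that.",
    "Here are the steps you asked for...",
]))
```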
How are the models evaluated?
Models are evaluated using a standardized set of scenarios designed to test their responsiveness and reliability.
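For concreteness, a minimal scenario-based evaluation loop might look like the sketch below. The scenario format, the `query_model` callable, and the `is_refusal` classifier (e.g., the keyword heuristic shown earlier) are placeholders for whatever prompt set, inference API, and refusal detector are actually used; this is an illustrative sketch, not the leaderboard's own harness.

```python
# Illustrative evaluation loop: query a model on each scenario prompt and
# tally per-category over-refusal rates. All names below are placeholders.
from collections import defaultdict
from typing import Callable

def evaluate(scenarios: list[dict], query_model: Callable[[str], str],
             is_refusal: Callable[[str], bool]) -> dict[str, float]:
    """Return per-category over-refusal rates.

    scenarios: [{"category": ..., "prompt": ...}, ...] (assumed format).
    query_model: wraps whatever model or API is being benchmarked.
    is_refusal: refusal classifier applied to each response.
    """
    refused: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for scenario in scenarios:
        response = query_model(scenario["prompt"])
        total[scenario["category"]] += 1
        if is_refusal(response):
            refused[scenario["category"]] += 1
    return {cat: refused[cat] / total[cat] for cat in total}
```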
Can I contribute to the leaderboard?
Yes, contributions are welcome. Submit your model or scenario suggestions through the platform's community portal.