View LLM Performance Leaderboard
Rank machines based on LLaMA 7B v2 benchmark results
Push an ML model to the Hugging Face Hub
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
Evaluate reward models for math reasoning
Measure BERT model performance using WASM and WebGPU
Browse and submit language model benchmarks
Launch web-based model application
Create demo spaces for models on Hugging Face
View RL Benchmark Reports
Browse and evaluate language models
Measure over-refusal in LLMs using OR-Bench
Evaluate AI-generated results for accuracy
The LLM Performance Leaderboard is a tool designed to benchmark and compare the performance of various large language models (LLMs). It provides a comprehensive overview of how different models perform across a wide range of tasks and datasets. Users can leverage this leaderboard to make informed decisions about which model best suits their specific needs.
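For readers who want to work with leaderboard-style results programmatically, here is a minimal sketch of ranking models by their average score. It assumes the results can be exported to a CSV file; the file name and column names (model_name, score) are hypothetical and are not the leaderboard's actual schema.

```python
# Minimal sketch: rank models from a hypothetical CSV export of leaderboard
# results. The file name and column names below are assumptions, not the
# leaderboard's real schema.
import pandas as pd

# Hypothetical export with one row per (model, task) result.
results = pd.read_csv("llm_perf_leaderboard_export.csv")

# Average each model's score across tasks and show the top 10.
ranking = (
    results.groupby("model_name")["score"]
    .mean()
    .sort_values(ascending=False)
    .head(10)
)
print(ranking)
```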
1. How often is the leaderboard updated?
The leaderboard is updated regularly to reflect the latest advancements in LLM performance. Updates occur as new models are released or existing models are fine-tuned.
2. Can I compare models based on custom criteria?
Yes, the leaderboard allows users to filter models based on specific criteria such as task type, dataset, model size, or architecture; a short filtering sketch follows this FAQ.
3. What types of tasks are evaluated on the leaderboard?
The leaderboard evaluates models on a wide range of tasks, including but not limited to natural language understanding, text generation, reasoning, and code completion.
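As referenced in question 2, the sketch below illustrates the kind of custom filtering described there, applied to the same hypothetical CSV export used earlier. The column names task_type and param_count_b are assumptions about the schema, not the leaderboard's documented fields.

```python
# Minimal filtering sketch over a hypothetical leaderboard export.
# Column names ("task_type", "param_count_b") are assumptions.
import pandas as pd

results = pd.read_csv("llm_perf_leaderboard_export.csv")

# Keep only reasoning benchmarks for models under 13B parameters.
subset = results[
    (results["task_type"] == "reasoning") & (results["param_count_b"] < 13)
]

# Rank the filtered models by their average score.
print(
    subset.groupby("model_name")["score"]
    .mean()
    .sort_values(ascending=False)
)
```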