Leaderboard is a platform for benchmarking language models, allowing users to display and submit model evaluations. It serves as a centralized tool for comparing and tracking the performance of different AI models, providing insight into their capabilities and improvements over time.
What is the purpose of Leaderboard?
Leaderboard is a tool for benchmarking language models, enabling users to compare and track model performance in a structured manner.
How do I submit my model's evaluation?
To submit your model's evaluation, follow the submission guidelines provided on the platform: make sure your results are in the required format and include every required metric before submitting.
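The exact submission schema is defined by the platform and is not reproduced here. As a hypothetical sketch only, a pre-submission check might validate that a results record contains the expected fields and metrics before it is uploaded; the field and metric names below are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical sketch: field and metric names are illustrative assumptions,
# not the Leaderboard's actual submission schema.
REQUIRED_FIELDS = {"model_id", "precision", "results"}
REQUIRED_METRICS = {"accuracy"}  # assumed example metric


def validate_submission(submission: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks complete."""
    problems = []
    missing = REQUIRED_FIELDS - submission.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    results = submission.get("results", {})
    for metric in sorted(REQUIRED_METRICS - results.keys()):
        problems.append(f"missing metric: {metric}")
    return problems


# A complete record passes; an incomplete one is flagged.
ok = validate_submission({
    "model_id": "org/my-model",
    "precision": "float16",
    "results": {"accuracy": 0.71},
})
bad = validate_submission({"model_id": "org/my-model"})
```

Running a check like this locally catches incomplete records before submission, which is cheaper than having an upload rejected by the platform.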
What are the benefits of using Leaderboard?
Using Leaderboard allows you to gain insights into your model's performance, identify areas for improvement, and benchmark against industry standards and other models.