Duplicate this leaderboard to initialize your own!
Merge machine learning models using a YAML configuration file
Request model evaluation on COCO val 2017 dataset
Calculate memory usage for LLMs
Download a TriplaneGaussian model checkpoint
Search for model performance across languages and benchmarks
Push an ML model to the Hugging Face Hub
Display model benchmark results
Upload an ML model to the Hugging Face Hub
Evaluate AI-generated results for accuracy
Compare LLM performance across benchmarks
Evaluate open LLMs in the languages of LATAM and Spain.
Persian Text Embedding Benchmark
The Example Leaderboard Template is a starting point for creating custom leaderboards to track and compare the performance of various models. Users can duplicate it and customize the copy to evaluate and benchmark models against their own criteria. The template is particularly useful for model benchmarking, as it provides a structured way to view and submit evaluations of large language models (LLMs).
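If you prefer to duplicate the template programmatically rather than through the web UI, the `huggingface_hub` client offers `duplicate_space`. The sketch below is a minimal example; the source Space ID and destination are hypothetical placeholders, and it assumes you are logged in with a token that has write access.

```python
# Minimal sketch: copy a leaderboard template Space under your own account.
# The Space IDs below are hypothetical placeholders -- substitute the real
# template ID and your own namespace, and make sure your token has write access.
from huggingface_hub import duplicate_space

new_space = duplicate_space(
    from_id="example-org/leaderboard-template",  # hypothetical template Space
    to_id="my-username/my-leaderboard",          # where your copy will live
    private=False,
)
print(f"Leaderboard copy created at: {new_space}")
```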
What is the purpose of the Example Leaderboard Template?
The template is designed to provide a foundation for creating custom leaderboards to benchmark and evaluate the performance of various models, particularly LLMs.
How do I add a new model to the leaderboard?
You can add a new model by duplicating the template and then entering the model's name, description, and its scores on the defined metrics; one possible way to submit those scores programmatically is sketched below.
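How submissions are stored depends entirely on how you configure your copy of the leaderboard. As one illustration only, if results are kept as per-model JSON files in a Hugging Face dataset repo, a new entry could be uploaded like this (the repo ID, file path, and metric names are assumptions, not part of the template):

```python
# Illustrative sketch only: assumes your copy of the leaderboard reads per-model
# JSON result files from a Hugging Face dataset repo. The repo ID, file path,
# and metric names below are placeholders -- adapt them to your own setup.
import json
from huggingface_hub import HfApi

entry = {
    "model": "my-org/my-llm",            # model being submitted
    "description": "7B instruct model",
    "results": {                         # scores for the metrics your leaderboard defines
        "accuracy": 0.72,
        "f1": 0.68,
    },
}

HfApi().upload_file(
    path_or_fileobj=json.dumps(entry, indent=2).encode("utf-8"),
    path_in_repo="results/my-org__my-llm.json",   # placeholder path
    repo_id="my-username/leaderboard-results",    # placeholder dataset repo
    repo_type="dataset",
)
```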
Can I customize the metrics in the leaderboard?
Yes, the template is fully customizable: you can modify the existing metrics or add new ones to suit your requirements and use case. A sketch of how custom metrics might be declared follows.
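The following is an illustrative sketch of declaring custom metrics in one place, not the template's actual code; adapt the field names and types to however your copy defines its leaderboard columns.

```python
# Illustrative sketch of declaring custom leaderboard metrics in one place.
# This is not the template's actual code -- the class, field names, and metric
# keys are assumptions to show the general idea.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    key: str           # field name expected in each submitted result file
    display_name: str  # column header shown on the leaderboard
    higher_is_better: bool = True

METRICS = [
    Metric("accuracy", "Accuracy"),
    Metric("f1", "F1"),
    Metric("latency_ms", "Latency (ms)", higher_is_better=False),  # custom metric
]

def average_score(results: dict) -> float:
    """Simple aggregate over the metrics where higher is better."""
    scored = [results[m.key] for m in METRICS if m.higher_is_better and m.key in results]
    return sum(scored) / len(scored) if scored else 0.0
```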