Duplicate this leaderboard to initialize your own!
The Example Leaderboard Template is a starting point for creating custom leaderboards that track and compare the performance of machine learning models. Users can duplicate the template and customize it to evaluate and benchmark models against their own criteria. It is particularly useful for benchmarking large language models (LLMs), providing a structured way to view and submit evaluations.
What is the purpose of the Example Leaderboard Template?
The template is designed to provide a foundation for creating custom leaderboards to benchmark and evaluate the performance of various models, particularly LLMs.
How do I add a new model to the leaderboard?
You can add a new model by duplicating the template and then inputting the model's name, description, and performance data across the defined metrics.
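As a sketch of that workflow, assuming the duplicated leaderboard stores its entries in a JSON results file (the file name, field names, and metric keys below are illustrative, not the template's actual schema), adding a model might look like:

```python
import json

# Hypothetical results file -- adapt the path and schema to your leaderboard.
RESULTS_FILE = "results.json"

new_entry = {
    "model": "my-org/my-model",              # model name, e.g. a Hub repo id
    "description": "7B instruction-tuned model",
    "metrics": {                             # scores for the leaderboard's metrics
        "accuracy": 0.82,
        "f1": 0.79,
    },
}

def add_model(path, entry):
    """Append a model entry to the JSON results list, creating the file if needed."""
    try:
        with open(path) as f:
            results = json.load(f)
    except FileNotFoundError:
        results = []
    results.append(entry)
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

add_model(RESULTS_FILE, new_entry)
```

The same idea applies if the template keeps results elsewhere (for example, one file per model or a Hub dataset): each submission is one record with the model's name, description, and per-metric scores.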
Can I customize the metrics in the leaderboard?
Yes, the template is fully customizable. You can modify or add new metrics to suit your specific requirements and use case.
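For instance, assuming the metrics are kept as a simple list of column definitions somewhere in the duplicated code (the class and field names here are illustrative), adding or changing a metric could be as small as editing one list:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One leaderboard column: the key used in results data and a display name."""
    key: str
    display_name: str
    higher_is_better: bool = True  # used to decide sort direction

# Default metrics plus one custom entry -- edit this list for your use case.
METRICS = [
    Metric("accuracy", "Accuracy"),
    Metric("f1", "F1"),
    Metric("latency_ms", "Latency (ms)", higher_is_better=False),  # custom metric
]

def leaderboard_columns(metrics):
    """Column headers shown in the leaderboard table."""
    return ["Model"] + [m.display_name for m in metrics]

print(leaderboard_columns(METRICS))
```

Keeping the metric definitions in one place like this means the table headers, sort order, and submission validation can all be derived from the same list.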