Display and submit language model evaluations
Optimize and train foundation models using IBM's FMS
Convert PaddleOCR models to ONNX format
Explore GenAI model efficiency on the ML.ENERGY leaderboard
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
View and submit LLM benchmark evaluations
View and submit machine learning model evaluations
View and compare language model evaluations
Display benchmark results
Rank machines based on LLaMA 7B v2 benchmark results
Quantize a model for faster inference
Export Hugging Face models to ONNX
Leaderboard is a platform for benchmarking language models: users can display and submit model evaluations in one place, compare results across models, and track performance improvements over time.
What is the purpose of Leaderboard?
Leaderboard is a tool for benchmarking language models, enabling users to compare and track model performance in a structured manner.
How do I submit my model's evaluation?
To submit your model's evaluation, follow the guidelines provided on the platform, ensuring your data is in the correct format and includes all required metrics.
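As an illustration only, the sketch below shows one way evaluation results could be packaged as JSON and uploaded with the huggingface_hub client. The repository ID, file path, and metric names are assumptions, not the platform's actual submission schema; always follow the on-platform guidelines for the real format.

```python
# Hypothetical sketch: package evaluation results as JSON and upload them to a
# submissions dataset repo. The repo ID, file layout, and metric names below
# are placeholders, not the leaderboard's actual schema.
import json
from huggingface_hub import HfApi

results = {
    "model": "my-org/my-model-7b",   # model being evaluated (placeholder name)
    "precision": "float16",          # precision used during evaluation
    "metrics": {                     # metric names are illustrative only
        "arc_challenge_acc": 0.612,
        "hellaswag_acc": 0.823,
        "mmlu_acc": 0.587,
    },
}

# Write the results file locally before uploading.
with open("results.json", "w") as f:
    json.dump(results, f, indent=2)

# Upload to a (placeholder) submissions dataset repo; requires being logged in
# via `huggingface-cli login` or passing a token to HfApi.
HfApi().upload_file(
    path_or_fileobj="results.json",
    path_in_repo="submissions/my-org__my-model-7b.json",  # placeholder path
    repo_id="leaderboard-org/submissions",                # placeholder repo ID
    repo_type="dataset",
)
```

In practice, many leaderboards instead take submissions through a form in their web interface, so treat this purely as an example of how structured results might be prepared.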
What are the benefits of using Leaderboard?
Using Leaderboard allows you to gain insights into your model's performance, identify areas for improvement, and benchmark against industry standards and other models.