Explore GenAI model efficiency on ML.ENERGY leaderboard
The ML.ENERGY Leaderboard is a benchmarking platform for evaluating and comparing the efficiency of AI models, with a focus on their energy consumption and computational cost. It serves as a centralized hub where researchers, developers, and practitioners can explore the performance of GenAI models and understand their environmental impact. By providing detailed metrics and rankings, the leaderboard aims to promote transparency and encourage the development of more sustainable AI systems.
What does the ML.ENERGY Leaderboard measure?
The ML.ENERGY Leaderboard measures the energy efficiency, computational performance, and cost-effectiveness of AI models, providing insight into their environmental impact.
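As a rough illustration of the kind of measurement behind these metrics, the sketch below uses the Zeus library (the GPU energy measurement tool from the ML.ENERGY project) to record the energy and latency of a single generation request. The model name, prompt, and joules-per-token metric are illustrative assumptions, not the leaderboard's actual benchmarking harness.

```python
# Minimal sketch: measure GPU energy for one inference with Zeus.
# Assumes a CUDA GPU plus the zeus and transformers packages installed;
# the model and prompt below are placeholders, not leaderboard settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from zeus.monitor import ZeusMonitor

monitor = ZeusMonitor(gpu_indices=[0])  # track energy on GPU 0

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to("cuda")

inputs = tokenizer("Explain energy-efficient AI.", return_tensors="pt").to("cuda")

monitor.begin_window("inference")          # start the measurement window
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
measurement = monitor.end_window("inference")  # stop and read it back

# Energy per generated token: one simple efficiency metric of the kind
# a leaderboard might report (the exact metric definition is assumed).
new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"Latency: {measurement.time:.2f} s")
print(f"Energy:  {measurement.total_energy:.1f} J")
print(f"Joules per token: {measurement.total_energy / new_tokens:.2f}")
```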
Which types of AI models are included in the leaderboard?
The leaderboard covers a wide range of AI models, with a particular focus on GenAI. It also supports other machine learning models used for tasks such as computer vision and natural language processing.
How often is the leaderboard updated?
The ML.ENERGY Leaderboard is updated regularly to reflect the latest advancements in AI technology, new model releases, and improvements in benchmarking methodologies.
Can I contribute to the ML.ENERGY Leaderboard?
Yes! The platform encourages community contributions, including submissions of new models, dataset suggestions, and feedback on benchmarking methodologies.