Evaluate open LLMs in the languages of LATAM and Spain.
Benchmark AI models by comparison
Track, rank and evaluate open LLMs and chatbots
View and submit LLM benchmark evaluations
Find and download models from Hugging Face
Browse and submit model evaluations in LLM benchmarks
Merge machine learning models using a YAML configuration file
Display leaderboard of language model evaluations
Convert and upload model files for Stable Diffusion
Launch web-based model application
Merge Lora adapters with a base model
Display and filter leaderboard models
Evaluate reward models for math reasoning
La Leaderboard is a model benchmarking tool designed to evaluate and compare open large language models (LLMs) in the languages of Latin America (LATAM) and Spain. It gives researchers and developers a single platform to assess how different language models perform across tasks and languages, with an evaluation approach tailored to Spanish-speaking regions.
• Multilingual Support: Evaluate models in multiple languages across LATAM and Spain.
• Customizable Benchmarks: Define specific tasks and metrics to suit your evaluation needs.
• Interactive Dashboards: Visualize model performance through intuitive and detailed graphs.
• Real-Time Tracking: Monitor model updates and compare their performance over time.
• Comprehensive Reporting: Access detailed analysis and insights for each evaluated model.
• Model Comparisons: Directly compare multiple models side-by-side.
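As a rough illustration of what a leaderboard-style evaluation involves, the sketch below scores an open model on a Spanish-language task with EleutherAI's lm-evaluation-harness, a common backend for open LLM leaderboards. The model name and task ID are illustrative placeholders, not confirmed La Leaderboard identifiers.

# Minimal sketch: evaluating an open model on a Spanish task with
# lm-evaluation-harness (pip install lm-eval). Model and task names are
# placeholders, not official La Leaderboard IDs.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                  # Hugging Face transformers backend
    model_args="pretrained=HuggingFaceH4/zephyr-7b-beta,dtype=float16",
    tasks=["belebele_spa_Latn"],                 # example Spanish reading-comprehension task
    num_fewshot=5,
    batch_size=8,
)

# Per-task metrics (accuracy, etc.) live under results["results"]
for task, metrics in results["results"].items():
    print(task, metrics)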
What languages does La Leaderboard support?
La Leaderboard supports Spanish, Portuguese, and other languages widely spoken across Latin America and Spain.
How often are new models added to La Leaderboard?
New models are added regularly as they become available in the open LLM ecosystem.
Can I customize the benchmarks for specific tasks?
Yes, La Leaderboard allows users to define custom benchmarks tailored to their specific requirements.
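The exact configuration format for custom benchmarks is not documented here, but a custom task usually reduces to a small scoring loop over your own prompts. The hedged Python sketch below uses a hand-written Spanish QA set and a stand-in generate() function; none of it is La Leaderboard's own API.

# Hypothetical sketch of a custom benchmark: exact-match accuracy over a
# hand-written set of Spanish QA pairs. The generate() stub stands in for
# whatever model call you actually use.
from typing import Callable, List, Tuple

EXAMPLES: List[Tuple[str, str]] = [
    ("¿Cuál es la capital de Perú?", "Lima"),
    ("¿En qué país se habla gallego?", "España"),
]

def exact_match_accuracy(generate: Callable[[str], str]) -> float:
    # Score each prompt: 1 if the expected answer appears in the model output.
    hits = sum(
        1 for prompt, answer in EXAMPLES
        if answer.lower() in generate(prompt).lower()
    )
    return hits / len(EXAMPLES)

if __name__ == "__main__":
    # Replace this stub with a real model call (an API client or local pipeline).
    dummy_model = lambda prompt: "Lima" if "Perú" in prompt else "no sé"
    print(f"exact-match accuracy: {exact_match_accuracy(dummy_model):.2f}")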