Display and filter leaderboard models
Convert Stable Diffusion checkpoint to Diffusers and open a PR
Convert Hugging Face models to OpenVINO format
Pergel: A Unified Benchmark for Evaluating Turkish LLMs
Push an ML model to the Hugging Face Hub
Explore and visualize diverse models
View and compare language model evaluations
View LLM Performance Leaderboard
Display leaderboard for earthquake intent classification models
Evaluate open LLMs in the languages of LATAM and Spain.
Merge Lora adapters with a base model
Measure execution times of BERT models using WebGPU and WASM
Open Persian LLM Leaderboard
Encodechka Leaderboard is a model benchmarking tool that lets users compare and evaluate AI models based on their performance metrics. It provides a centralized place to display and filter leaderboard models, making it easier to identify top-performing models and understand their strengths.
• Model Comparison: Easily compare performance metrics of different AI models.
• Filtering Options: Filter models by criteria such as dataset, task, or model type (a minimal filtering sketch follows this list).
• Real-Time Updates: Stay up to date with the latest models and their performance.
• Detailed Insights: Access in-depth information about each model's capabilities and benchmarks.
• Customizable Views: Tailor the leaderboard to focus on the metrics that matter most to your use case.
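As a rough illustration of the filtering idea, here is a minimal sketch that assumes the leaderboard can be exported as a CSV table. The file name "leaderboard.csv" and the column names ("model", "task", "score", "inference_speed") are assumptions for illustration, not the Space's actual schema.

```python
import pandas as pd

# Load an assumed CSV export of the leaderboard.
df = pd.read_csv("leaderboard.csv")

# Keep only models evaluated on a chosen task and rank them by score.
filtered = (
    df[df["task"] == "sentiment"]
    .sort_values("score", ascending=False)
    .head(10)
)

# Show the columns relevant to the comparison.
print(filtered[["model", "score", "inference_speed"]])
```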
What models are included in the Encodechka Leaderboard?
The leaderboard features a wide range of AI models, including state-of-the-art entries from leading research institutions and organizations.
How often is the leaderboard updated?
The leaderboard is updated in real-time to reflect the latest additions and performance changes in the AI model landscape.
Can I customize the metrics displayed on the leaderboard?
Yes, users can customize the view to focus on specific metrics such as accuracy, inference speed, or memory usage.
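For a sense of what a customized view might look like, here is a minimal sketch that again assumes a CSV export with hypothetical metric columns ("accuracy", "inference_speed", "memory_usage_mb"); the actual metrics and column names depend on the leaderboard itself.

```python
import pandas as pd

# Load the assumed CSV export of the leaderboard.
df = pd.read_csv("leaderboard.csv")

# Select only the metrics of interest; these column names are illustrative.
view = df[["model", "accuracy", "inference_speed", "memory_usage_mb"]]

# Rank by the metric that matters most for your use case, e.g. speed.
print(view.sort_values("inference_speed").head(10))
```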