Open Persian LLM Leaderboard
The Open Persian LLM Leaderboard is a comprehensive benchmarking platform designed to evaluate and compare the performance of Persian language models. It provides a transparent and standardized framework for assessing models across various tasks, enabling researchers and developers to identify top-performing models for specific use cases. The leaderboard is continuously updated to reflect the latest advancements in the field of Persian natural language processing.
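As a purely illustrative sketch (the leaderboard's actual scoring pipeline is not documented here, and the model names, task names, and scores below are invented), aggregating per-task benchmark scores into a single ranking might look like this:

```python
# Hypothetical example: ranking models by their mean score across benchmark
# tasks. All names and numbers are invented for illustration; they are not
# real leaderboard entries.
from statistics import mean

scores = {
    "model-a": {"sentiment": 0.81, "qa": 0.74, "summarization": 0.69},
    "model-b": {"sentiment": 0.78, "qa": 0.80, "summarization": 0.72},
}

def rank(results):
    """Return (model, mean score) pairs sorted best-first."""
    averaged = {m: mean(task_scores.values())
                for m, task_scores in results.items()}
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)

for model, avg in rank(scores):
    print(f"{model}: {avg:.3f}")
```

A real leaderboard would of course weight tasks, handle missing evaluations, and report per-task breakdowns; this only shows the basic aggregate-and-sort idea.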
What models are included in the Open Persian LLM Leaderboard?
The leaderboard includes a wide range of Persian language models, from state-of-the-art research models to open-source community models. The list is regularly updated as new models are released.
How often are the models updated?
Models are typically updated quarterly, though the leaderboard may be refreshed more often to capture the latest research advances.
Why isn’t a specific model appearing on the leaderboard?
A model may not appear if it has not been submitted for evaluation or if it does not meet the leaderboard’s inclusion criteria. Users are encouraged to submit models for consideration.