Open Persian LLM Leaderboard
The Open Persian LLM Leaderboard is a benchmarking platform for evaluating and comparing Persian language models. It provides a transparent, standardized framework for assessing models across a range of tasks, helping researchers and developers identify the best-performing models for their use cases. The leaderboard is updated continuously to reflect the latest advances in Persian natural language processing.
What models are included in the Open Persian LLM Leaderboard?
The leaderboard covers a wide range of Persian language models, from state-of-the-art research models to open-source community models, and the list is updated regularly as new models are released.
How often are the models updated?
Model results are typically refreshed on a quarterly basis, though the leaderboard may be updated more frequently to incorporate cutting-edge research advancements.
Why isn’t a specific model appearing on the leaderboard?
A model may be missing because it has not been submitted for evaluation or because it does not meet the leaderboard's inclusion criteria. Users are encouraged to submit such models for consideration.