Browse and submit model evaluations in LLM benchmarks
View and submit LLM benchmark evaluations
Benchmark LLMs on accuracy and translation quality across languages
Find and download models from Hugging Face
Open Persian LLM Leaderboard
Measure BERT model performance using WASM and WebGPU
Launch web-based model application
Quantize a model for faster inference
Track, rank and evaluate open LLMs and chatbots
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
Submit deepfake detection models for evaluation
Convert Stable Diffusion checkpoint to Diffusers and open a PR
The OpenLLM Turkish leaderboard v0.2 is a tool designed to evaluate and benchmark large language models (LLMs) for the Turkish language. It provides a platform for developers and researchers to submit and compare model evaluations across various tasks and metrics specific to Turkish. This leaderboard aims to promote transparency and progress in Turkish NLP by enabling fair comparisons of model performance.
What models are supported on the leaderboard?
The leaderboard supports a variety of LLMs, including popular models such as T5 and BERT, as well as specialized Turkish models.
How are models evaluated?
Models are evaluated on standard NLP tasks such as text classification, question answering, and machine translation, using metrics such as precision, recall, and BLEU score.
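To make the metrics above concrete, here is a minimal sketch of computing precision and recall for a single class of a classification task. The labels, data, and function name are illustrative, not part of the leaderboard's actual evaluation code:

```python
def precision_recall(y_true, y_pred, positive_label):
    """Compute precision and recall for one class of a classification task."""
    # True positives: predicted positive and actually positive
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive_label and t == positive_label)
    # False positives: predicted positive but actually another class
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive_label and t != positive_label)
    # False negatives: actually positive but predicted another class
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive_label and t == positive_label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical gold labels vs. model predictions for a Turkish sentiment task
gold = ["pos", "neg", "pos", "pos", "neg"]
pred = ["pos", "pos", "pos", "neg", "neg"]
p, r = precision_recall(gold, pred, "pos")  # both 2/3 for this toy data
```

In practice, a leaderboard would aggregate such per-class scores (e.g. macro- or micro-averaged) across a full test set rather than a handful of examples.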
How often is the leaderboard updated?
The leaderboard is updated regularly with new models, datasets, and features to reflect the latest advancements in Turkish NLP.