The OpenLLM Turkish leaderboard v0.2 is a tool designed to evaluate and benchmark large language models (LLMs) for the Turkish language. It provides a platform for developers and researchers to submit and compare model evaluations across various tasks and metrics specific to Turkish. This leaderboard aims to promote transparency and progress in Turkish NLP by enabling fair comparisons of model performance.
What models are supported on the leaderboard?
The leaderboard supports a variety of LLMs, including popular models like T5, BERT, and specialized Turkish models.
How are models evaluated?
Models are evaluated on standard NLP tasks such as text classification, question answering, and machine translation, and scored with metrics such as precision, recall, and BLEU, along with other task-appropriate measures.
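As a rough illustration of how the classification metrics above are computed, here is a minimal sketch of per-class precision and recall; the label names and toy predictions are hypothetical, not data from the leaderboard:

```python
def precision_recall(gold, pred, positive):
    """Precision and recall for one class, given gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, pred) if p == positive and g == positive)
    fp = sum(1 for g, p in zip(gold, pred) if p == positive and g != positive)
    fn = sum(1 for g, p in zip(gold, pred) if p != positive and g == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical sentiment labels for five Turkish sentences
gold = ["pos", "neg", "pos", "pos", "neg"]
pred = ["pos", "pos", "pos", "neg", "neg"]
p, r = precision_recall(gold, pred, "pos")
# Here tp=2, fp=1, fn=1, so precision = recall = 2/3
```

In practice a leaderboard would typically rely on an established metrics library rather than hand-rolled counts, but the arithmetic is the same.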
How often is the leaderboard updated?
The leaderboard is updated regularly with new models, datasets, and features to reflect the latest advancements in Turkish NLP.