Display LLM benchmark leaderboard and info
Search for model performance across languages and benchmarks
Find recent high-liked Hugging Face models
Explore and visualize diverse models
Evaluate open LLMs in the languages of LATAM and Spain
Pergel: A Unified Benchmark for Evaluating Turkish LLMs
Download a TriplaneGaussian model checkpoint
Browse and submit LLM evaluations
Browse and evaluate language models
Compare code model performance on benchmarks
Compare model weights and visualize differences
View and submit LLM benchmark evaluations
Open Persian LLM Leaderboard
The Hebrew Transcription Leaderboard is a platform designed to benchmark and evaluate the performance of large language models (LLMs) on Hebrew transcription tasks. It provides a comprehensive comparison of models based on their accuracy, efficiency, and reliability in transcribing Hebrew text. This tool is essential for researchers, developers, and users seeking to understand the capabilities of different LLMs in handling the Hebrew language.
What is the purpose of the Hebrew Transcription Leaderboard?
The purpose is to provide a transparent and comprehensive platform for comparing the performance of LLMs on Hebrew transcription tasks, helping users make informed decisions.
How are models evaluated on the leaderboard?
Models are evaluated using standardized metrics such as Word Error Rate (WER) and Character Error Rate (CER), ensuring fair and consistent comparisons across submissions.
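The leaderboard's exact scoring code is not shown here; as a minimal sketch, both WER and CER reduce to a Levenshtein (edit) distance divided by the length of the reference — over words for WER, over characters for CER. The function names below are illustrative, not the leaderboard's API:

```python
def levenshtein(ref, hyp):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn the sequence `ref` into `hyp` (classic DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,                     # deletion
                curr[j - 1] + 1,                 # insertion
                prev[j - 1] + (r != h),          # substitution (0 if equal)
            ))
        prev = curr
    return prev[-1]

def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character Error Rate: character-level edit distance / reference length."""
    return levenshtein(reference, hypothesis) / len(reference)
```

For example, with a one-word substitution in a three-word reference, `wer("the cat sat", "the cat sit")` is 1/3, while `cer` on the same pair is 1/11 because only one of eleven characters differs.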
Can I use the leaderboard for languages other than Hebrew?
No, the Hebrew Transcription Leaderboard is specifically designed for Hebrew transcription tasks. For other languages, you may need to use a different benchmarking tool.