The Hebrew Transcription Leaderboard is a platform for benchmarking and evaluating large language models (LLMs) on Hebrew transcription tasks. It compares models on accuracy, efficiency, and reliability when transcribing Hebrew, giving researchers, developers, and users a clear picture of how well different LLMs handle the language.
What is the purpose of the Hebrew Transcription Leaderboard?
The purpose is to provide a transparent, standardized comparison of LLM performance on Hebrew transcription tasks, helping users choose a model that fits their needs.
How are models evaluated on the leaderboard?
Models are evaluated using standardized metrics: WER (Word Error Rate), the number of word-level substitutions, deletions, and insertions divided by the number of words in the reference transcript, and CER (Character Error Rate), the same edit distance computed at the character level. Applying the same metrics to every submission keeps comparisons fair and consistent.
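To make these metrics concrete, here is a minimal sketch of how WER and CER can be computed with the open-source jiwer library. The transcript strings are hypothetical examples; the leaderboard's actual evaluation pipeline may differ.

```python
# Minimal sketch: computing WER and CER with jiwer (pip install jiwer).
# The reference and hypothesis strings below are hypothetical examples.
import jiwer

reference = "שלום עולם זהו משפט לדוגמה"   # ground-truth transcript (hypothetical)
hypothesis = "שלום עולם זה משפט לדוגמה"   # model output (hypothetical)

# WER: word-level substitutions + deletions + insertions,
# divided by the number of words in the reference.
wer = jiwer.wer(reference, hypothesis)

# CER: the same edit distance, computed over characters instead of words.
cer = jiwer.cer(reference, hypothesis)

print(f"WER: {wer:.3f}")
print(f"CER: {cer:.3f}")
```

In this example the hypothesis differs from the reference by one word ("זה" instead of "זהו"), so the WER is 1/5 = 0.2, while the CER is lower because only a single character differs.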
Can I use the leaderboard for languages other than Hebrew?
No. The Hebrew Transcription Leaderboard is designed specifically for Hebrew transcription tasks; for other languages you will need a different benchmarking tool.