The Hebrew Transcription Leaderboard is a platform for benchmarking and evaluating the performance of large language models (LLMs) on Hebrew transcription tasks. It compares models on accuracy, efficiency, and reliability when transcribing Hebrew text, giving researchers, developers, and users a clear picture of how well different LLMs handle the Hebrew language.
What is the purpose of the Hebrew Transcription Leaderboard?
The purpose is to provide a transparent and comprehensive platform for comparing the performance of LLMs on Hebrew transcription tasks, helping users make informed decisions.
How are models evaluated on the leaderboard?
Models are evaluated using standardized metrics such as WER (Word Error Rate) and CER (Character Error Rate), ensuring fair and consistent comparisons.
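To make these metrics concrete, here is a minimal sketch of how WER and CER are typically computed from a reference transcript and a model's hypothesis. This is an illustrative implementation, not the leaderboard's actual code: WER is the word-level edit distance divided by the number of reference words, and CER is the same calculation at the character level.

```python
def edit_distance(ref, hyp):
    """Levenshtein (edit) distance between two sequences,
    using a single rolling row of the DP table."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # min of: deletion, insertion, substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1,
                                     dp[j - 1] + 1,
                                     prev + (r != h))
    return dp[len(hyp)]

def wer(reference, hypothesis):
    """Word Error Rate: word-level edits / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character Error Rate: character-level edits / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, if the hypothesis substitutes one word out of three in the reference, `wer` returns 1/3; lower values mean more accurate transcription.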
Can I use the leaderboard for languages other than Hebrew?
No. The Hebrew Transcription Leaderboard is designed specifically for Hebrew transcription tasks; for other languages, use a benchmarking tool built for that language.