Text-To-Speech (TTS) Evaluation using objective metrics.
The TTSDS Benchmark and Leaderboard is a comprehensive tool designed to evaluate and compare Text-To-Speech (TTS) models using objective metrics. It provides a platform to assess the quality of TTS systems by measuring how closely synthesized speech matches human speech. The leaderboard serves as a central hub to track the performance of various models, enabling easy comparison and fostering advancements in TTS technology.
What metrics does TTSDS Benchmark use?
TTSDS Benchmark primarily uses Mel-Cepstral Distortion (MCD), which measures the spectral distance between synthesized and reference speech, and Short-Time Objective Intelligibility (STOI), which estimates how intelligible the synthesized speech is. Both metrics are widely used for assessing speech synthesis quality.
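For a sense of how these metrics are computed, below is a minimal, illustrative sketch (not the TTSDS implementation) that scores one synthesized utterance against a reference recording: STOI via the pystoi package and a simplified MFCC-based approximation of MCD via librosa. The file paths are placeholders, and proper MCD normally uses mel-cepstra with DTW frame alignment rather than truncation, so treat the numbers as rough indicators only.

```python
# Illustrative only: STOI plus an MFCC-based MCD approximation for one pair of files.
import numpy as np
import librosa
from pystoi import stoi

def load_pair(ref_path, syn_path, sr=16000):
    """Load reference and synthesized audio at a common sample rate."""
    ref, _ = librosa.load(ref_path, sr=sr)
    syn, _ = librosa.load(syn_path, sr=sr)
    n = min(len(ref), len(syn))  # crude length matching; real pipelines time-align the signals
    return ref[:n], syn[:n], sr

def mcd_approx(ref, syn, sr):
    """Approximate Mel-Cepstral Distortion (dB) using MFCC coefficients 1..12."""
    ref_mfcc = librosa.feature.mfcc(y=ref, sr=sr, n_mfcc=13)[1:]
    syn_mfcc = librosa.feature.mfcc(y=syn, sr=sr, n_mfcc=13)[1:]
    n = min(ref_mfcc.shape[1], syn_mfcc.shape[1])  # frame-wise truncation instead of DTW alignment
    diff = ref_mfcc[:, :n] - syn_mfcc[:, :n]
    return (10.0 / np.log(10)) * np.mean(np.sqrt(2.0 * np.sum(diff ** 2, axis=0)))

ref, syn, sr = load_pair("reference.wav", "synthesized.wav")  # placeholder paths
print("STOI:", stoi(ref, syn, sr, extended=False))            # ranges 0..1, higher is better
print("MCD (approx, dB):", mcd_approx(ref, syn, sr))          # lower is better
```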
How do I add my custom TTS model to the leaderboard?
To add your custom model, run the benchmark script on speech synthesized by your model and submit the resulting scores to the leaderboard; an illustrative outline of the scoring step is shown below, and the next question explains how submissions appear on the leaderboard.
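The sketch below is purely hypothetical: the actual TTSDS benchmark script, metric set, directory layout, and submission format may differ, so follow the Space's own instructions. It assumes paired files ref/<name>.wav (human recordings) and syn/<name>.wav (your model's output) and writes a simple per-utterance results file.

```python
# Hypothetical local scoring step: score each synthesized utterance against its
# reference and collect the results into a CSV for a leaderboard submission.
import csv
from pathlib import Path

import librosa
from pystoi import stoi

def score_pair(ref_path, syn_path, sr=16000):
    """Return STOI for one reference/synthesized pair (length-truncated)."""
    ref, _ = librosa.load(ref_path, sr=sr)
    syn, _ = librosa.load(syn_path, sr=sr)
    n = min(len(ref), len(syn))
    return stoi(ref[:n], syn[:n], sr, extended=False)

rows = []
for syn_path in sorted(Path("syn").glob("*.wav")):   # synthesized utterances
    ref_path = Path("ref") / syn_path.name           # matching human recordings
    rows.append({"utterance": syn_path.stem,
                 "stoi": score_pair(str(ref_path), str(syn_path))})

# Write a results file to include with a submission (format is assumed, not official).
with open("my_model_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["utterance", "stoi"])
    writer.writeheader()
    writer.writerows(rows)
```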
How often is the leaderboard updated?
The leaderboard is updated whenever new models are benchmarked and submitted. However, users are responsible for running the benchmark script and submitting their results so the leaderboard reflects their models' latest performance.