Text-To-Speech (TTS) Evaluation using objective metrics.
The TTSDS Benchmark and Leaderboard is a comprehensive tool for evaluating and comparing Text-To-Speech (TTS) models using objective metrics: it assesses the quality of a TTS system by measuring how closely its synthesized speech matches human speech. The leaderboard serves as a central hub for tracking model performance, enabling easy comparison and fostering advances in TTS technology.
What metrics does the TTSDS Benchmark use?
The TTSDS Benchmark primarily uses Mel-Cepstral Distortion (MCD) and Short-Time Objective Intelligibility (STOI), two widely accepted objective metrics for assessing speech synthesis quality.
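As a rough illustration of how these two metrics can be computed, here is a minimal sketch for a single reference/synthesized pair, assuming the `librosa` and `pystoi` packages. The file paths, the 16 kHz sample rate, and the simplified truncation-based alignment (real MCD pipelines typically align frames with DTW) are illustrative assumptions, not the benchmark's actual implementation.

```python
# Minimal sketch: MCD and STOI for one reference/synthesized pair.
# librosa + pystoi are illustrative choices, not necessarily what TTSDS uses.
import numpy as np
import librosa
from pystoi import stoi

def mel_cepstral_distortion(ref, syn, sr, n_mfcc=13):
    """Frame-averaged MCD over MFCCs (simplified: truncation instead of DTW)."""
    ref_c = librosa.feature.mfcc(y=ref, sr=sr, n_mfcc=n_mfcc)
    syn_c = librosa.feature.mfcc(y=syn, sr=sr, n_mfcc=n_mfcc)
    n = min(ref_c.shape[1], syn_c.shape[1])
    diff = ref_c[1:, :n] - syn_c[1:, :n]  # drop the 0th (energy) coefficient
    # Standard MCD formula: (10 / ln 10) * sqrt(2 * sum_d (c_d - c'_d)^2), averaged over frames
    return float(np.mean((10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff**2, axis=0))))

ref, sr = librosa.load("reference.wav", sr=16000)   # human recording (hypothetical path)
syn, _ = librosa.load("synthesized.wav", sr=16000)  # TTS output (hypothetical path)

n = min(len(ref), len(syn))  # crude length matching for the time-domain STOI input
print(f"MCD : {mel_cepstral_distortion(ref[:n], syn[:n], sr):.2f} dB")  # lower is better
print(f"STOI: {stoi(ref[:n], syn[:n], sr):.3f}")                        # higher is better
```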
How do I add my custom TTS model to the leaderboard?
To add your custom model, follow these steps (a hedged sketch of the scoring step follows below):
1. Generate speech samples with your TTS model.
2. Run the benchmark script on the synthesized samples to produce objective scores.
3. Submit the results to the leaderboard.
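The benchmark's actual script and submission format are not shown here, so as a rough illustration only, the following sketch batch-scores paired reference/synthesized WAV files and writes per-utterance scores to a CSV. The directory layout (`refs/`, `synth/`), the pairing by filename, and the `results.csv` format are all assumptions, not the benchmark's real interface.

```python
# Hypothetical batch-scoring sketch (not the official TTSDS script).
# Assumes paired files refs/<name>.wav and synth/<name>.wav.
import csv
from pathlib import Path

import librosa
from pystoi import stoi

rows = []
for ref_path in sorted(Path("refs").glob("*.wav")):
    syn_path = Path("synth") / ref_path.name  # paired by filename (assumption)
    ref, sr = librosa.load(ref_path, sr=16000)
    syn, _ = librosa.load(syn_path, sr=16000)
    n = min(len(ref), len(syn))  # crude alignment; real pipelines typically use DTW
    rows.append({"utterance": ref_path.stem, "stoi": stoi(ref[:n], syn[:n], sr)})

# Write per-utterance scores; the leaderboard's actual submission format may differ.
# An MCD column could be added the same way using the helper from the earlier sketch.
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["utterance", "stoi"])
    writer.writeheader()
    writer.writerows(rows)
```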
How often is the leaderboard updated?
The leaderboard is updated whenever new models are benchmarked and their results are submitted. Updates are not automatic: users are responsible for running the benchmark script and submitting their results to reflect the latest performance of their models.