Text-To-Speech (TTS) Evaluation using objective metrics.
The TTSDS Benchmark and Leaderboard is a comprehensive tool designed to evaluate and compare Text-To-Speech (TTS) models using objective metrics. It provides a platform to assess the quality of TTS systems by measuring how closely synthesized speech matches human speech. The leaderboard serves as a central hub to track the performance of various models, enabling easy comparison and fostering advancements in TTS technology.
What metrics does TTSDS Benchmark use?
TTSDS Benchmark primarily uses Mel-Cepstral Distortion (MCD) and Short-Time Objective Intelligibility (STOI) to evaluate TTS models. These metrics are widely accepted for assessing speech synthesis quality.
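As a rough illustration of these two metrics, the sketch below computes MCD and STOI for one reference/synthesized pair using the librosa and pystoi packages. The file names reference.wav and synthesized.wav are placeholders, and the MFCC-based MCD here is only an approximation of the classic SPTK-style computation, so treat the numbers as indicative rather than benchmark-grade. Lower MCD and higher STOI indicate synthesized speech closer to the reference.

```python
import numpy as np
import librosa
from pystoi import stoi

def mel_cepstral_distortion(ref, syn, sr, n_mfcc=13):
    """Approximate MCD: MFCC features aligned with DTW.
    Lower values mean the synthesized speech is spectrally
    closer to the reference."""
    # Drop the 0th coefficient (frame energy), as is conventional for MCD.
    c_ref = librosa.feature.mfcc(y=ref, sr=sr, n_mfcc=n_mfcc)[1:]
    c_syn = librosa.feature.mfcc(y=syn, sr=sr, n_mfcc=n_mfcc)[1:]
    # Align the frames of the two utterances with dynamic time warping.
    _, path = librosa.sequence.dtw(X=c_ref, Y=c_syn, metric="euclidean")
    diff = c_ref[:, path[:, 0]] - c_syn[:, path[:, 1]]
    # Standard MCD scaling constant, (10 * sqrt(2)) / ln(10); result in dB.
    scale = (10.0 * np.sqrt(2.0)) / np.log(10.0)
    return scale * np.mean(np.sqrt(np.sum(diff ** 2, axis=0)))

# Placeholder file names -- substitute your own reference/synthesized pair.
ref, sr = librosa.load("reference.wav", sr=None, mono=True)
syn, _ = librosa.load("synthesized.wav", sr=sr, mono=True)

print(f"MCD : {mel_cepstral_distortion(ref, syn, sr):.2f} dB")
# pystoi expects equal-length signals, so truncate to the shorter one.
n = min(len(ref), len(syn))
print(f"STOI: {stoi(ref[:n], syn[:n], sr, extended=False):.3f}")  # ~0..1, higher is better
```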
How do I add my custom TTS model to the leaderboard?
To add your custom model, follow these steps (a sketch of step 1 appears after this list):
1. Generate synthesized speech with your model for the benchmark's test utterances.
2. Run the benchmark script on the generated audio to produce objective scores.
3. Submit the resulting scores through the leaderboard's submission interface.
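Since the exact submission workflow depends on the benchmark script, the snippet below is only a hypothetical sketch of step 1: synthesizing test sentences and writing them to a directory of WAV files. The synthesize() stub, the utterance IDs, and the outputs/ layout are all assumptions; replace them with your model's inference call and whatever layout the benchmark script expects.

```python
import numpy as np
import soundfile as sf
from pathlib import Path

# Hypothetical stand-in for your TTS model's inference call;
# replace with your own function returning (audio, sample_rate).
def synthesize(text: str) -> tuple[np.ndarray, int]:
    sr = 22050
    return np.zeros(sr, dtype=np.float32), sr  # 1 s of silence as a placeholder

# Example sentences only; the real benchmark defines its own utterance set.
test_sentences = {
    "utt_0001": "The quick brown fox jumps over the lazy dog.",
    "utt_0002": "She sells seashells by the seashore.",
}

out_dir = Path("outputs")
out_dir.mkdir(exist_ok=True)
for utt_id, text in test_sentences.items():
    audio, sr = synthesize(text)
    sf.write(out_dir / f"{utt_id}.wav", audio, sr)
```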
How often is the leaderboard updated?
The leaderboard updates whenever new benchmark results are submitted. Users are responsible for running the benchmark script and submitting their results, so the leaderboard reflects their models' latest performance only after they do so.