Submit evaluations for speaker tagging and view leaderboard
The Post-ASR LLM-based Speaker Tagging Leaderboard is a tool for evaluating and comparing the performance of Large Language Models (LLMs) on speaker tagging tasks. It focuses on processing the output of Automatic Speech Recognition (ASR) systems to identify and tag the speakers in audio data, providing a common benchmark for how accurately different LLMs assign speaker tags to transcribed audio content.
What is speaker tagging?
Speaker tagging is the process of identifying and assigning labels to different speakers in an audio transcription, allowing for the differentiation of dialogue between multiple participants.
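As an illustration, a speaker-tagged transcript can be represented as a list of utterances, each carrying a speaker label. This is a minimal sketch; the speaker names and utterances are invented for the example and are not tied to any leaderboard data format.

```python
from collections import Counter

# Hypothetical speaker-tagged transcript: each turn pairs a speaker label
# with the transcribed text of that utterance.
transcript = [
    {"speaker": "spk_0", "text": "Hi, thanks for joining the call."},
    {"speaker": "spk_1", "text": "Happy to be here."},
    {"speaker": "spk_0", "text": "Let's start with the agenda."},
]

# Tagging lets us differentiate dialogue per participant, e.g. count turns.
turns_per_speaker = Counter(turn["speaker"] for turn in transcript)
print(turns_per_speaker)  # Counter({'spk_0': 2, 'spk_1': 1})
```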
How do I submit my model's results to the leaderboard?
Submit your model's speaker tagging results through the leaderboard's submission interface, ensuring your data is formatted according to the specified requirements.
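For context, a submission typically serializes per-segment speaker predictions into a structured file. The field names and schema below are assumptions for illustration only; the leaderboard's submission interface defines the authoritative format.

```python
import json

# Hypothetical submission payload: "model_name", "segment_id", and "speaker"
# are assumed field names, not the leaderboard's actual schema.
submission = {
    "model_name": "my-llm-v1",
    "results": [
        {"segment_id": "seg_001", "speaker": "spk_0"},
        {"segment_id": "seg_002", "speaker": "spk_1"},
    ],
}

# Write the payload to disk for upload through the submission interface.
with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```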
What metrics are used to evaluate performance on the leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance. These metrics provide a comprehensive view of your model's effectiveness.
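To make the listed metrics concrete, here is a small sketch of how accuracy and per-speaker precision, recall, and F1 can be computed over aligned reference and predicted speaker labels. The alignment granularity (per segment here) and the example labels are assumptions; the leaderboard defines the actual evaluation procedure.

```python
def precision_recall_f1(reference, predicted, label):
    """Per-label precision, recall, and F1 over aligned label sequences."""
    tp = sum(r == label and p == label for r, p in zip(reference, predicted))
    fp = sum(r != label and p == label for r, p in zip(reference, predicted))
    fn = sum(r == label and p != label for r, p in zip(reference, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical aligned labels: one reference and one predicted speaker
# per segment.
ref  = ["spk_0", "spk_0", "spk_1", "spk_1", "spk_0"]
pred = ["spk_0", "spk_1", "spk_1", "spk_1", "spk_0"]

accuracy = sum(r == p for r, p in zip(ref, pred)) / len(ref)
p, r, f1 = precision_recall_f1(ref, pred, "spk_0")
print(accuracy, p, r, f1)  # 0.8 1.0 0.666... 0.8
```

In this example the model mislabels one "spk_0" segment as "spk_1", so precision for "spk_0" stays at 1.0 while recall drops to 2/3, which the F1 score combines into 0.8.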