Submit evaluations for speaker tagging and view leaderboard
The Post-ASR LLM-based Speaker Tagging Leaderboard evaluates and compares Large Language Models (LLMs) on speaker tagging: taking transcripts produced by Automatic Speech Recognition (ASR) systems and assigning each portion of the dialogue to the correct speaker. It provides a common benchmark for how accurately different LLMs assign speaker tags in transcribed audio.
What is speaker tagging?
Speaker tagging is the process of identifying the speakers in an audio transcription and assigning a label to each one, so that dialogue from multiple participants can be distinguished.
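For illustration only, a tagged transcript can be thought of as a list of labeled segments. The field names and the spk0/spk1 labels in this minimal sketch are placeholders, not a format the leaderboard prescribes:

```python
# Hypothetical segment-level representation of a speaker-tagged transcript.
# Field names and speaker labels are illustrative, not a required schema.
tagged_transcript = [
    {"speaker": "spk0", "text": "Good morning, how can I help you?"},
    {"speaker": "spk1", "text": "Hi, I'd like to check my order status."},
    {"speaker": "spk0", "text": "Sure, could you give me the order number?"},
]
```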
How do I submit my model's results to the leaderboard?
Submit your model's speaker tagging results through the leaderboard's submission interface, ensuring your data is formatted according to the specified requirements.
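As a rough sketch of what a submission file could look like, the snippet below writes a hypothetical JSON payload from Python. Every field name here is an assumption made for illustration; defer to the format actually specified by the submission interface:

```python
import json

# Hypothetical submission payload. The real schema is defined by the
# leaderboard's submission page, not by this sketch.
submission = {
    "model_name": "my-llm-v1",  # placeholder model identifier
    "predictions": [
        {
            "utterance_id": "session_001",  # illustrative ID field
            "segments": [
                {"speaker": "spk0", "text": "Good morning, how can I help you?"},
                {"speaker": "spk1", "text": "Hi, I'd like to check my order status."},
            ],
        }
    ],
}

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```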
What metrics are used to evaluate performance on the leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance. These metrics provide a comprehensive view of your model's effectiveness.
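If you want to sanity-check these numbers locally before submitting, per-segment speaker labels can be scored with scikit-learn. The reference and predicted labels below are made up for the example, and the leaderboard applies its own scoring pipeline to submissions:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Made-up per-segment speaker labels: reference vs. model prediction.
reference = ["spk0", "spk1", "spk0", "spk1", "spk0"]
predicted = ["spk0", "spk1", "spk1", "spk1", "spk0"]

accuracy = accuracy_score(reference, predicted)
# Macro-averaged precision, recall, and F1 across speaker labels.
precision, recall, f1, _ = precision_recall_fscore_support(
    reference, predicted, average="macro", zero_division=0
)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```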