Submit evaluations for speaker tagging and view leaderboard
The Post-ASR LLM-based Speaker Tagging Leaderboard evaluates and compares the performance of Large Language Models (LLMs) on speaker tagging tasks. It focuses on post-processing the output of Automatic Speech Recognition (ASR) systems to identify and tag the speakers in transcribed audio, providing a common benchmark for how accurately different LLMs assign speaker tags.
What is speaker tagging?
Speaker tagging is the process of identifying and assigning labels to different speakers in an audio transcription, allowing for the differentiation of dialogue between multiple participants.
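As a minimal illustration of the idea (the segment structure and speaker label names below are invented for this sketch, not the leaderboard's actual data format), speaker tagging turns an unattributed transcript into labeled dialogue:

```python
# Hypothetical illustration: an ASR transcript before and after speaker tagging.
asr_output = "hi how are you i am fine thanks and you"

# Speaker tagging assigns a speaker label to each segment of the transcript.
tagged_segments = [
    {"speaker": "spk1", "text": "hi how are you"},
    {"speaker": "spk2", "text": "i am fine thanks and you"},
]

def render(segments):
    """Render tagged segments as a readable dialogue."""
    return "\n".join(f"[{s['speaker']}] {s['text']}" for s in segments)

print(render(tagged_segments))
# → [spk1] hi how are you
#   [spk2] i am fine thanks and you
```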
How do I submit my model's results to the leaderboard?
Submit your model's speaker tagging results through the leaderboard's submission interface, ensuring your data is formatted according to the specified requirements.
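A submission is typically a structured file of per-utterance speaker assignments. The sketch below is a guess at what such a file might look like; the field names (`model_name`, `utterance_id`, etc.) are assumptions for illustration only, and the authoritative schema is whatever the leaderboard's submission interface specifies:

```python
import json

# Hypothetical submission file; the actual required schema is defined by the
# leaderboard's submission interface, not reproduced here.
submission = {
    "model_name": "my-llm",  # assumed field name
    "results": [
        {"utterance_id": "utt_001", "speaker": "spk1", "text": "hi how are you"},
        {"utterance_id": "utt_002", "speaker": "spk2", "text": "i am fine thanks"},
    ],
}

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```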
What metrics are used to evaluate performance on the leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance. These metrics provide a comprehensive view of your model's effectiveness.
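These metrics can be computed at the token level once reference and predicted speaker labels are aligned. The sketch below uses macro-averaging over speaker labels; that averaging choice is an illustrative assumption, as the leaderboard's exact scoring definitions may differ:

```python
def speaker_tagging_scores(reference, hypothesis):
    """Token-level accuracy plus macro-averaged precision/recall/F1
    over speaker labels (illustrative; the leaderboard's exact
    definitions may differ)."""
    assert len(reference) == len(hypothesis)
    accuracy = sum(r == h for r, h in zip(reference, hypothesis)) / len(reference)
    labels = set(reference) | set(hypothesis)
    precisions, recalls, f1s = [], [], []
    for label in labels:
        tp = sum(r == h == label for r, h in zip(reference, hypothesis))
        predicted = sum(h == label for h in hypothesis)
        actual = sum(r == label for r in reference)
        p = tp / predicted if predicted else 0.0
        r = tp / actual if actual else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p); recalls.append(r); f1s.append(f)
    n = len(labels)
    return {
        "accuracy": accuracy,
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }

ref = ["spk1", "spk1", "spk2", "spk2"]
hyp = ["spk1", "spk2", "spk2", "spk2"]
print(speaker_tagging_scores(ref, hyp))
# → {'accuracy': 0.75, 'precision': 0.8333..., 'recall': 0.75, 'f1': 0.7333...}
```

Looking at accuracy alongside per-speaker precision and recall matters because a model that collapses everything onto one speaker can still score deceptively well on accuracy alone.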