Submit evaluations for speaker tagging and view leaderboard
Analyze data using Pandas Profiling
Filter and view AI model leaderboard data
Evaluate diversity in data sets to improve fairness
Generate financial charts from stock data
Explore speech recognition model performance
Analyze and visualize your dataset using AI
Classify breast cancer risk based on cell features
Browse and filter AI model evaluation results
Browse LLM benchmark results in various categories
Label data for machine learning models
Display CLIP benchmark results for inference performance
Search and save datasets generated with an LLM in real time
The Post-ASR LLM-based Speaker Tagging Leaderboard is a tool for evaluating and comparing the performance of Large Language Models (LLMs) on speaker tagging tasks. It focuses on processing the outputs of Automatic Speech Recognition (ASR) systems to identify and tag speakers in audio data, and it provides a platform for benchmarking how accurately different LLMs assign speaker tags to transcribed audio content.
What is speaker tagging?
Speaker tagging is the process of identifying and assigning labels to different speakers in an audio transcription, allowing for the differentiation of dialogue between multiple participants.
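For illustration, a speaker-tagged transcript can be thought of as a sequence of ASR segments, each carrying a speaker label. The minimal sketch below is an assumption for clarity; the field names ("start", "end", "speaker", "text") are illustrative and are not the leaderboard's required schema.

```python
# Hypothetical speaker-tagged transcript: each ASR segment is paired with a
# speaker label so dialogue between participants can be attributed.
tagged_transcript = [
    {"start": 0.00, "end": 2.10, "speaker": "speaker_0", "text": "Hi, thanks for joining the call."},
    {"start": 2.10, "end": 4.35, "speaker": "speaker_1", "text": "Happy to be here, let's get started."},
    {"start": 4.35, "end": 6.80, "speaker": "speaker_0", "text": "Great, first item on the agenda."},
]

# Print a readable dialogue view, one line per segment.
for segment in tagged_transcript:
    print(f'{segment["speaker"]}: {segment["text"]}')
```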
How do I submit my model's results to the leaderboard?
Submit your model's speaker tagging results through the leaderboard's submission interface, ensuring your data is formatted according to the specified requirements.
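As a rough illustration only, a submission might collect per-utterance speaker predictions into a single file before upload. The file name, keys, and structure below are assumptions; the authoritative format is whatever the leaderboard's submission page specifies.

```python
import json

# Hypothetical submission payload -- follow the leaderboard's own format spec.
submission = {
    "model_name": "my-llm-speaker-tagger",  # hypothetical identifier
    "predictions": [
        {"utterance_id": "session1_utt0", "speaker": "speaker_0"},
        {"utterance_id": "session1_utt1", "speaker": "speaker_1"},
    ],
}

# Write the predictions to a JSON file for upload through the submission interface.
with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```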
What metrics are used to evaluate performance on the leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance. These metrics provide a comprehensive view of your model's effectiveness.
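As a sketch of how such metrics can be checked locally before submitting (using scikit-learn here, which is an assumption; the leaderboard may compute its scores differently), you can compare predicted speaker tags against reference tags:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy reference and predicted speaker tags, one label per utterance.
reference = ["speaker_0", "speaker_1", "speaker_0", "speaker_1", "speaker_0"]
predicted = ["speaker_0", "speaker_1", "speaker_1", "speaker_1", "speaker_0"]

accuracy = accuracy_score(reference, predicted)
precision, recall, f1, _ = precision_recall_fscore_support(
    reference, predicted, average="macro", zero_division=0
)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

Note that predicted speaker labels are usually arbitrary identifiers, so in practice they typically need to be mapped onto the reference labels (for example, via an optimal assignment) before metrics like these are meaningful.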