Leaderboard for text-to-video generation models
VideoScore Leaderboard is a tool for comparing and analyzing the performance of text-to-video generation models. It presents video scores and evaluation data in clear, organized leaderboard tables, helping researchers and developers track progress, identify top-performing models, and make data-driven decisions.
• Score Display: Shows video scores in a structured leaderboard format.
• Model Comparison: Allows side-by-side comparison of different models.
• Real-Time Data Updates: Reflects the latest evaluation results.
• Interactive Tables: Enables sorting, filtering, and searching functionalities.
• Data Visualization: Includes charts to represent trends and performance metrics.
• Customization: Users can filter by specific criteria or metrics.
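The sorting and filtering features above can be sketched in a few lines of plain Python. This is a minimal illustration, not the leaderboard's actual implementation: the model names, scores, and the `top_models` helper are all hypothetical.

```python
# Hypothetical leaderboard rows; names and scores are illustrative,
# not real VideoScore results.
rows = [
    {"model": "model-a", "score": 2.91},
    {"model": "model-b", "score": 3.24},
    {"model": "model-c", "score": 2.45},
]

def top_models(rows, min_score=0.0):
    """Keep rows at or above min_score, sorted best score first."""
    kept = [r for r in rows if r["score"] >= min_score]
    return sorted(kept, key=lambda r: r["score"], reverse=True)

# Filter out low-scoring models, then rank the rest.
for r in top_models(rows, min_score=2.5):
    print(r["model"], r["score"])
```

The same filter-then-sort pattern underlies most interactive leaderboard tables, whatever library renders them.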
What is a "model" in the context of VideoScore Leaderboard?
A model refers to a specific text-to-video generation algorithm or system being evaluated.
Can I customize the leaderboard to show only specific models?
Yes, users can filter the leaderboard to display only the models they are interested in.
How are the video scores calculated?
Video scores are calculated using predefined evaluation metrics, which may vary depending on the dataset or use case.
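One common way to combine such metrics is to average per-dimension ratings into a single leaderboard score. The sketch below illustrates that idea only; the dimension names and the unweighted mean are assumptions, not the leaderboard's actual formula.

```python
# Assumed aggregation: an unweighted mean of per-dimension ratings.
# Dimension names below are placeholders, not official metrics.
def overall_score(dimension_scores):
    """Average per-dimension ratings into a single score."""
    return sum(dimension_scores.values()) / len(dimension_scores)

ratings = {
    "visual_quality": 3.1,
    "temporal_consistency": 2.8,
    "text_alignment": 3.4,
}
print(round(overall_score(ratings), 2))  # mean of the three ratings
```

A real evaluation pipeline might weight dimensions differently or normalize scores across datasets, which is why published scores can vary by use case.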