Request model evaluation on COCO val 2017 dataset
The Open Object Detection Leaderboard is a platform designed to evaluate and benchmark object detection models. It allows users to submit their models for evaluation on the COCO val 2017 dataset, providing detailed performance metrics and insights. This leaderboard is a valuable resource for researchers and developers to compare their models against industry standards and identify areas for improvement.
• Model Evaluation: Submit your object detection models for evaluation on the COCO val 2017 dataset.
• Performance Metrics: Receive detailed performance metrics, including mAP (mean Average Precision), AP across different object sizes, and AR (Average Recall).
• Visualization Tools: Access visualization tools to analyze detection results and compare them with ground-truth annotations.
• Leaderboard Comparison: Compare your model's performance with other state-of-the-art models on the leaderboard.
• Community Sharing: Share your model's results with the community to foster collaboration and innovation.
• Submission Tracking: Track your model's performance history and improvements over time.
• Support for Popular Frameworks: Easily integrate with popular object detection frameworks such as TensorFlow and PyTorch (see the sketch after this list).
• API Access: Use the leaderboard's API to automate model submissions and retrieve results programmatically.
• Comprehensive Documentation: Access detailed documentation and tutorials that guide you through the evaluation process.
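As a rough illustration of the framework integration mentioned above, the sketch below runs a pretrained torchvision detector over COCO val 2017 images and collects detections in the standard COCO results format. The directory names and score threshold are assumptions; adapt them to your own setup and refer to the leaderboard documentation for the exact submission format it expects.

```python
# Sketch: generate COCO-format detections with a pretrained torchvision model.
# Paths and the score threshold are assumptions; adjust for your environment.
import json
from pathlib import Path

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

results = []
image_dir = Path("val2017")                      # assumed location of COCO val 2017 images
for img_path in sorted(image_dir.glob("*.jpg")):
    image_id = int(img_path.stem)                # COCO image ids match the file names
    image = to_tensor(Image.open(img_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if score < 0.05:                         # typical minimum score cutoff
            continue
        x1, y1, x2, y2 = box.tolist()
        results.append({
            "image_id": image_id,
            "category_id": int(label),           # torchvision COCO models emit COCO category ids
            "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO expects [x, y, width, height]
            "score": float(score),
        })

Path("predictions.json").write_text(json.dumps(results))
```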
What dataset is used for evaluation?
The Open Object Detection Leaderboard uses the COCO val 2017 dataset for evaluating object detection models. This dataset is widely used in the computer vision community for benchmarking object detection tasks.
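If you want to work with the dataset locally, one common option is torchvision's CocoDetection wrapper; the paths below assume you have already downloaded the val 2017 images and annotations.

```python
# Sketch: load COCO val 2017 locally with torchvision.
# The directory and annotation file paths are assumptions.
from torchvision.datasets import CocoDetection

dataset = CocoDetection(
    root="val2017",                                # directory containing the val 2017 images
    annFile="annotations/instances_val2017.json",  # ground-truth annotation file
)
image, annotations = dataset[0]  # a PIL image and a list of COCO annotation dicts
print(len(dataset), len(annotations))
```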
How do I submit my model for evaluation?
To submit your model, you need to generate predictions on the COCO val 2017 dataset and submit them via the leaderboard's web interface or API. Detailed submission instructions are provided in the platform's documentation.
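For the API route, a submission script might look like the sketch below, which uploads the predictions file with the requests library. The endpoint URL and form fields shown here are hypothetical placeholders; the actual API is defined in the leaderboard's documentation.

```python
# Sketch: programmatic submission of a predictions file.
# The endpoint URL and form fields are HYPOTHETICAL placeholders --
# consult the leaderboard's documentation for the real API.
import requests

SUBMIT_URL = "https://example.org/open-object-detection-leaderboard/api/submit"  # hypothetical

with open("predictions.json", "rb") as f:
    response = requests.post(
        SUBMIT_URL,
        files={"predictions": f},
        data={"model_name": "my-detector", "framework": "pytorch"},  # hypothetical fields
        timeout=60,
    )
response.raise_for_status()
print(response.json())
```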
What performance metrics are reported?
The leaderboard reports standard object detection metrics, including mAP (mean Average Precision), AP (Average Precision) across different object sizes, and AR (Average Recall). These metrics provide a comprehensive understanding of your model's performance.
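If you want to reproduce these numbers locally before submitting, the standard pycocotools evaluation over COCO val 2017 produces the same mAP, per-size AP, and AR breakdown. The file names below are assumptions; point them at your own annotation and prediction files.

```python
# Sketch: compute COCO detection metrics locally with pycocotools.
# File names are assumptions; use your own annotation and prediction paths.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")         # detections in COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR at multiple IoU thresholds and object sizes
```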