Request model evaluation on COCO val 2017 dataset
The Open Object Detection Leaderboard is a platform for evaluating and benchmarking object detection models. Users submit their models' predictions for evaluation on the COCO val 2017 dataset and receive detailed performance metrics and insights. The leaderboard lets researchers and developers compare their models against published state-of-the-art results and identify areas for improvement.
• Model Evaluation: Submit your object detection models for evaluation on the COCO val 2017 dataset.
• Performance Metrics: Receive detailed performance metrics, including mAP (mean Average Precision), AP across different object sizes, and AR (Average Recall).
• Visualization Tools: Access visualization tools to analyze detection results and compare with ground truth annotations.
• Leaderboard Comparison: Compare your model's performance with other state-of-the-art models in the leaderboard.
• Community Sharing: Share your model's results with the community to foster collaboration and innovation.
• Submission Tracking: Track your model's performance history and improvements over time.
• Support for Popular Frameworks: Easily integrate with popular object detection frameworks like TensorFlow, PyTorch, and more.
• API Access: Utilize the leaderboard's API to automate model submissions and retrieve results programmatically.
• Comprehensive Documentation: Access detailed documentation and tutorials to guide you through the evaluation process.
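For the API-driven workflow above, an automated submission boils down to bundling model metadata with a predictions file and POSTing it. The endpoint URL, field names, and auth scheme below are assumptions for illustration, not the leaderboard's actual contract; consult the platform's documentation for the real API.

```python
# Hypothetical sketch of building a programmatic submission payload.
# SUBMIT_URL, the field names, and the auth header are placeholders --
# the leaderboard's own API docs define the actual contract.
SUBMIT_URL = "https://example-leaderboard.example/api/submissions"  # placeholder

def build_submission(model_name: str, framework: str, predictions: list) -> dict:
    """Bundle model metadata and COCO-format detections into one payload."""
    return {
        "model_name": model_name,
        "framework": framework,      # e.g. "pytorch" or "tensorflow"
        "predictions": predictions,  # list of COCO-format detection dicts
    }

payload = build_submission(
    "my-detector",
    "pytorch",
    [{"image_id": 42, "category_id": 1, "bbox": [10.0, 20.0, 30.0, 40.0], "score": 0.9}],
)

# Sending it would then be a single authenticated POST, e.g. with `requests`:
#   requests.post(SUBMIT_URL, json=payload,
#                 headers={"Authorization": f"Bearer {token}"})
```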
What dataset is used for evaluation?
The Open Object Detection Leaderboard uses the COCO val 2017 dataset for evaluating object detection models. This dataset is widely used in the computer vision community for benchmarking object detection tasks.
How do I submit my model for evaluation?
To submit your model, you need to generate predictions on the COCO val 2017 dataset and submit them via the leaderboard's web interface or API. Detailed submission instructions are provided in the platform's documentation.
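Predictions on COCO val 2017 are conventionally exported in the standard COCO "results" format: one dict per detection, with boxes as `[x, y, width, height]` in pixels. Whether the leaderboard accepts exactly this file is an assumption here; the platform's documentation is authoritative. A minimal sketch:

```python
import json

# One entry per detected object, in the standard COCO results format.
detections = [
    {
        "image_id": 397133,                   # id of a COCO val 2017 image
        "category_id": 1,                     # COCO category (1 = person)
        "bbox": [102.5, 45.0, 60.2, 120.8],   # [x, y, width, height] in pixels
        "score": 0.87,                        # model confidence
    },
]

# Serialize to the JSON file you would upload or POST to the leaderboard.
with open("predictions_coco_val2017.json", "w") as f:
    json.dump(detections, f)
```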
What performance metrics are reported?
The leaderboard reports standard object detection metrics, including mAP (mean Average Precision), AP (Average Precision) across different object sizes, and AR (Average Recall). These metrics provide a comprehensive understanding of your model's performance.
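All of these metrics rest on intersection-over-union (IoU): a detection counts as a true positive when its IoU with an unmatched ground-truth box meets a threshold, and AP/mAP then average precision over recall levels, IoU thresholds, and categories. A minimal sketch of the shared box-overlap computation, using COCO-style `[x, y, width, height]` boxes:

```python
def iou(box_a, box_b):
    """IoU of two boxes given in COCO [x, y, width, height] format."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Width/height of the intersection rectangle (zero if boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes offset by (5, 5): intersection 25, union 175.
print(iou([0, 0, 10, 10], [5, 5, 10, 10]))  # ~0.1429
```

COCO's mAP averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05, which is why it is stricter than the single-threshold AP@0.5 reported by some older benchmarks.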