Submit deepfake detection models for evaluation
Evaluate code generation with diverse feedback types
Browse and submit language model benchmarks
Determine GPU requirements for large language models
Display leaderboard for earthquake intent classification models
Track, rank and evaluate open LLMs and chatbots
Calculate memory needed to train AI models
Browse and evaluate language models
Browse and submit model evaluations in LLM benchmarks
Convert and upload model files for Stable Diffusion
Upload ML model to Hugging Face Hub
Launch web-based model application
Compare audio representation models using benchmark results
The Deepfake Detection Arena Leaderboard is a platform designed for evaluating and comparing deepfake detection models. It provides a standardized environment where researchers and developers can submit their models for benchmarking against state-of-the-art algorithms. The leaderboard categorizes submissions under Model Benchmarking and focuses on assessing how reliably models identify deepfake content.
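The exact submission workflow is not described here. As a rough sketch, assuming the leaderboard accepts detection models hosted on the Hugging Face Hub, a model could first be published with the `huggingface_hub` client and then registered on the leaderboard; the repository name and local folder below are hypothetical placeholders.

```python
# Sketch: publish a deepfake detection model to the Hugging Face Hub
# before submitting it to the leaderboard. The repo_id and folder_path
# are hypothetical placeholders, not values defined by the leaderboard.
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login`

# Create the model repository if it does not already exist.
api.create_repo(repo_id="your-username/deepfake-detector", exist_ok=True)

# Upload the local model directory (weights, config, model card).
api.upload_folder(
    folder_path="./deepfake-detector",
    repo_id="your-username/deepfake-detector",
    repo_type="model",
)
```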
What models are eligible for submission?
Only deepfake detection models are eligible for submission. Ensure your model adheres to the platform's guidelines.
How are models evaluated on the leaderboard?
Models are evaluated on their accuracy, precision, and recall in detecting deepfake content.
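As an illustration only (not the leaderboard's actual evaluation script), these metrics can be computed from a model's binary predictions with scikit-learn, treating "deepfake" as the positive class; the labels below are made up for the example.

```python
# Illustration: compute accuracy, precision, and recall from binary
# labels (1 = deepfake, 0 = real). The predictions here are invented.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
```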
Can I share my model's results publicly?
Yes, the platform allows users to share their model's performance metrics and insights with the community.