Submit deepfake detection models for evaluation
Export Hugging Face models to ONNX
View and submit language model evaluations
Quantize a model for faster inference
Explore and visualize diverse models
Generate and view leaderboard for LLM evaluations
Teach, test, and evaluate language models with MTEB Arena
Convert and upload model files for Stable Diffusion
Measure BERT model performance using WASM and WebGPU
Find and download models from Hugging Face
Evaluate and submit AI model results for Frugal AI Challenge
Display LLM benchmark leaderboard and info
Analyze model errors with interactive pages
The Deepfake Detection Arena Leaderboard is a platform for evaluating and comparing deepfake detection models. It provides a standardized environment where researchers and developers can submit their models for benchmarking against state-of-the-art algorithms. The leaderboard falls under the Model Benchmarking category and focuses on measuring how reliably models can identify deepfake content.
What models are eligible for submission?
Only deepfake detection models are eligible for submission. Ensure your model adheres to the platform's guidelines.
How are models evaluated on the leaderboard?
Models are evaluated on accuracy, precision, and recall when detecting deepfake content (a minimal sketch of these metrics appears after these questions).
Can I share my model's results publicly?
Yes, the platform allows users to share their model's performance metrics and insights with the community.
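As a rough illustration of the metrics mentioned above, the sketch below computes accuracy, precision, and recall for a binary deepfake classifier using scikit-learn. The sample labels, predictions, and the 1 = deepfake label convention are assumptions made for the example; this is not the leaderboard's actual evaluation pipeline.

```python
# Illustrative sketch only: not the leaderboard's evaluation code, just a
# minimal example of the accuracy/precision/recall metrics it reports.
# Assumed label convention: 1 = deepfake, 0 = real.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical ground-truth labels and model predictions for a small test set.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))   # (TP + TN) / total
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
```

In this toy example, precision reflects how many flagged videos were actually deepfakes, while recall reflects how many deepfakes the model caught; a strong detector needs both.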