Submit deepfake detection models for evaluation
Evaluate LLM over-refusal rates with OR-Bench
View NSQL Scores for Models
Create and upload a Hugging Face model card
Download a TriplaneGaussian model checkpoint
Convert Hugging Face models to OpenVINO format
Calculate memory needed to train AI models
Convert PaddleOCR models to ONNX format
Leaderboard of information retrieval models in French
Submit models for evaluation and view leaderboard
Predict customer churn based on input details
Browse and evaluate language models
Browse and submit model evaluations on LLM benchmarks
The Deepfake Detection Arena Leaderboard is a platform for evaluating and comparing deepfake detection models. It provides a standardized environment where researchers and developers can submit their models for benchmarking against state-of-the-art algorithms. The leaderboard categorizes submissions under Model Benchmarking and focuses on measuring how well models detect deepfakes.
What models are eligible for submission?
Only deepfake detection models are eligible for submission. Ensure your model adheres to the platform's guidelines.
How are models evaluated on the leaderboard?
Models are evaluated on their accuracy, precision, and recall in detecting deepfake content.
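For illustration only, the following minimal Python sketch shows how these three metrics could be computed for a detector's predictions. The labels, scores, and 0.5 decision threshold below are assumptions for the example and do not represent the leaderboard's actual evaluation pipeline.

```python
# Illustrative sketch: accuracy, precision, and recall for a hypothetical
# deepfake detector. Data and threshold are made up, not the leaderboard's.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical ground-truth labels: 1 = deepfake, 0 = real
y_true = [1, 0, 1, 1, 0, 0, 1, 0]

# Hypothetical model scores (probability that each sample is a deepfake)
scores = [0.92, 0.10, 0.45, 0.88, 0.30, 0.65, 0.77, 0.05]

# Binarize the scores at an assumed 0.5 threshold
y_pred = [1 if s >= 0.5 else 0 for s in scores]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```

Precision here reflects how many flagged samples are truly deepfakes, while recall reflects how many actual deepfakes the model catches; a leaderboard typically reports both because optimizing one alone is easy to game.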
Can I share my model's results publicly?
Yes, the platform allows users to share their model's performance metrics and insights with the community.