Display benchmark results
Display and filter leaderboard models
Evaluate and submit AI model results for Frugal AI Challenge
Submit models for evaluation and view leaderboard
Convert Hugging Face model repo to Safetensors
Explore GenAI model efficiency on ML.ENERGY leaderboard
Request model evaluation on COCO val 2017 dataset
Explore and submit models using the LLM Leaderboard
Load AI models and prepare your space
Leaderboard of information retrieval models in French
Display and submit language model evaluations
Display and submit LLM benchmarks
Compare and rank LLMs using benchmark scores
Redteaming Resistance Leaderboard is a model benchmarking tool designed to evaluate and compare the performance of AI models in resisting adversarial attacks. It provides a comprehensive platform to display benchmark results, enabling researchers and developers to assess the robustness of their models against various threat scenarios. The leaderboard serves as a centralized resource for identifying top-performing models and tracking progress in adversarial defense.
What models are included in the leaderboard?
The leaderboard features a diverse range of AI models, including state-of-the-art architectures designed for adversarial defense.
How often are the results updated?
Results are updated in real-time to ensure the latest advancements in model resistance are reflected.
Can I contribute my own model to the leaderboard?
Yes, submissions are welcome. Please refer to the platform's documentation for guidelines on model submission and evaluation criteria.