Display benchmark results
Rank machines based on LLaMA 7B v2 benchmark results
Find and download models from Hugging Face
Determine GPU requirements for large language models (see the combined sketch after this list)
Teach, test, and evaluate language models with MTEB Arena
Convert PyTorch models to waifu2x-ios format
Evaluate and submit AI model results for the Frugal AI Challenge
View NSQL scores for models
View and submit LLM benchmark evaluations
Benchmark LLMs on accuracy and translation across languages
Convert a Stable Diffusion checkpoint to Diffusers format and open a PR
Create and upload a Hugging Face model card (sketch after this list)
Evaluate AI-generated results for accuracy
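Several of the tools above wrap the same underlying Hugging Face Hub workflow. As a rough illustration, here is a minimal Python sketch using the huggingface_hub client library: it searches the Hub for models, downloads a snapshot, and estimates GPU memory with the common parameters-times-precision rule of thumb. The repo id and the 20% activation-overhead factor are illustrative assumptions, not values taken from any of these tools.

```python
from huggingface_hub import HfApi, snapshot_download

api = HfApi()

# Find models: search the Hub by keyword, most-downloaded first.
for model in api.list_models(search="llama", sort="downloads", limit=5):
    print(model.id)

# Download: fetch a full model snapshot into the local cache.
# NOTE: illustrative repo id; some repos (including this one) are gated.
local_dir = snapshot_download(repo_id="meta-llama/Llama-2-7b-hf")

# Rough VRAM estimate: parameter count x bytes per parameter, plus ~20%
# headroom for activations and runtime buffers (a rule of thumb, not exact).
params = 7e9          # 7B parameters
bytes_per_param = 2   # fp16 / bf16 weights
vram_gb = params * bytes_per_param * 1.2 / 1024**3
print(f"~{vram_gb:.1f} GB of VRAM for fp16 inference")
```

At fp16 this lands around 16 GB for a 7B model, which matches the usual advice that unquantized 7B inference needs a 16 GB-class GPU.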
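The model-card item above also maps onto a small, well-documented piece of huggingface_hub. A minimal sketch, assuming the library's ModelCard template API; the repo id and description fields are hypothetical placeholders.

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata for the card's YAML header.
card_data = ModelCardData(language="en", license="mit", library_name="transformers")

# Render the default card template with a few example fields.
card = ModelCard.from_template(
    card_data,
    model_id="my-org/my-model",              # hypothetical repo id
    model_description="A short demo card.",  # illustrative text
)

card.save("README.md")                 # write locally, or:
# card.push_to_hub("my-org/my-model")  # upload to the Hub (requires auth)
```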
The Redteaming Resistance Leaderboard is a benchmarking tool that evaluates and compares how well AI models resist adversarial attacks. It displays benchmark results in one place so that researchers and developers can assess the robustness of their models across a range of threat scenarios, and it serves as a centralized resource for identifying top-performing models and tracking progress in adversarial defense.
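The general shape of a redteaming-resistance metric is straightforward: run a model over a set of adversarial prompts and measure how often it refuses. The sketch below is a hypothetical illustration of that idea, not the leaderboard's actual methodology; model_respond, the refusal keywords, and the prompt set are all placeholder assumptions (real evaluations typically use a trained classifier or an LLM judge rather than keyword matching).

```python
from typing import Callable

# Placeholder refusal markers; real benchmarks use far more robust judges.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm not able to")

def is_refusal(response: str) -> bool:
    """Crude keyword check for whether the model declined the request."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def resistance_score(model_respond: Callable[[str], str],
                     adversarial_prompts: list[str]) -> float:
    """Fraction of adversarial prompts the model resists (refuses)."""
    refusals = sum(is_refusal(model_respond(p)) for p in adversarial_prompts)
    return refusals / len(adversarial_prompts)
```

A higher score means the model refused more attack prompts; a leaderboard of this kind would typically compute such a score per threat category and rank models accordingly.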
What models are included in the leaderboard?
The leaderboard features a diverse range of AI models, including state-of-the-art architectures designed for adversarial defense.
How often are the results updated?
Results are updated in real time, so the leaderboard reflects the latest advances in model resistance.
Can I contribute my own model to the leaderboard?
Yes, submissions are welcome. Please refer to the platform's documentation for guidelines on model submission and evaluation criteria.