Display benchmark results
Visualize model performance on function calling tasks
Convert PyTorch models to waifu2x-ios format
Evaluate Text-to-Speech (TTS) output using objective metrics
Convert Hugging Face model repo to Safetensors
Retrain models on new data at edge devices
Evaluate and submit AI model results for Frugal AI Challenge
Explore and benchmark visual document retrieval models
Upload a machine learning model to Hugging Face Hub
View and submit LLM benchmark evaluations
Push an ML model to Hugging Face Hub
Display and submit LLM benchmarks
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
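Several entries above involve uploading or pushing a model to the Hugging Face Hub. A minimal sketch of that workflow using the `huggingface_hub` library is below; the folder path, username, model name, and token are placeholders, not values from this document:

```python
def build_repo_id(username: str, model_name: str) -> str:
    """Compose the '<user>/<name>' repository identifier the Hub expects."""
    return f"{username}/{model_name}"

def push_model(folder: str, username: str, model_name: str, token: str) -> None:
    """Create the model repo on the Hub (if missing) and upload a local folder."""
    # Imported here so build_repo_id works even without huggingface_hub installed.
    from huggingface_hub import HfApi

    api = HfApi(token=token)
    repo_id = build_repo_id(username, model_name)
    api.create_repo(repo_id, repo_type="model", exist_ok=True)
    api.upload_folder(folder_path=folder, repo_id=repo_id, repo_type="model")

if __name__ == "__main__":
    # All arguments are illustrative placeholders -- substitute your own.
    push_model("./my-model", "your-username", "my-model", token="hf_...")
```

Running this requires a valid Hub access token; `exist_ok=True` makes the script idempotent, so re-running it simply uploads new or changed files to the same repository.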
The Redteaming Resistance Leaderboard is a benchmarking tool for evaluating and comparing how well AI models resist adversarial attacks. By displaying benchmark results in one place, it lets researchers and developers assess the robustness of their models across a range of threat scenarios, and it serves as a centralized resource for identifying top-performing models and tracking progress in adversarial defense.
What models are included in the leaderboard?
The leaderboard features a diverse range of AI models, including state-of-the-art architectures designed for adversarial defense.
How often are the results updated?
Results are updated in real time, so the leaderboard reflects the latest advances in model resistance.
Can I contribute my own model to the leaderboard?
Yes, submissions are welcome. Please refer to the platform's documentation for guidelines on model submission and evaluation criteria.