Browse and filter AI model evaluation results
M-RewardBench Leaderboard
A GUI for gpustack/gguf-parser-go
Generate plots for GP and PFN posterior approximations
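For readers wanting to reproduce a GP posterior plot like this, a minimal sketch with scikit-learn follows; the RBF kernel and the toy sine data are illustrative assumptions, not this space's actual setup:

```python
# Sketch: fit a GP and plot its posterior mean and 2-sigma band.
# Kernel choice and toy data are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 10, 8)[:, None]      # sparse training inputs
y = np.sin(X).ravel()                   # toy targets

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

Xs = np.linspace(0, 10, 200)[:, None]   # dense test grid
mean, std = gp.predict(Xs, return_std=True)

plt.plot(Xs, mean, label="posterior mean")
plt.fill_between(Xs.ravel(), mean - 2 * std, mean + 2 * std, alpha=0.3)
plt.scatter(X, y, color="k", label="observations")
plt.legend()
plt.show()
```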
Display CLIP benchmark results for inference performance
Generate NSFW text samples for testing NSFW detection
Browse and compare Indic language LLMs on a leaderboard
Analyze and visualize Hugging Face model download stats
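As a rough sketch of how such stats can be pulled programmatically, the huggingface_hub client exposes a per-model download count; the model id below is an arbitrary example, not tied to this space:

```python
# Fetch a model's recent download count from the Hugging Face Hub.
# The model id is an arbitrary example.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("bert-base-uncased")
print(info.id, info.downloads)  # download count reported by the Hub
```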
Need to analyze data? Let a Llama-3.1 agent do it for you!
Visualize dataset distributions with facets
Explore and compare LLMs through interactive leaderboards and submissions
Cluster data points using KMeans
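A minimal KMeans sketch with scikit-learn; the toy 2-D point cloud and the cluster count are illustrative assumptions, not this space's defaults:

```python
# Cluster toy 2-D points with KMeans; data and n_clusters are
# illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # toy 2-D points

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)             # cluster index for each point
print(labels[:10])
print(km.cluster_centers_)             # learned centroids, shape (3, 2)
```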
View monthly arXiv download trends since 1994
UnlearnDiffAtk Benchmark is a data visualization tool for evaluating and comparing AI models through the lens of differentiable attacks. It offers an interactive platform for browsing and filtering model evaluation results, helping researchers and developers understand model vulnerabilities and performance.
What is the UnlearnDiffAtk Benchmark used for?
The UnlearnDiffAtk Benchmark is used to evaluate and compare AI models based on their robustness against differentiable attacks. It helps identify vulnerabilities and understand model performance under various scenarios.
What types of attacks are supported by UnlearnDiffAtk Benchmark?
The benchmark supports a wide range of differentiable attacks, including gradient-based and black-box attacks. For a full list, refer to the documentation.
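To make "gradient-based" concrete, here is a generic FGSM-style sketch in PyTorch. It illustrates the attack family in general terms, not the benchmark's own implementation; `model` and `loss_fn` are assumed placeholders for a differentiable model and loss:

```python
# Generic FGSM-style gradient attack (illustration of the attack
# family only; not the UnlearnDiffAtk implementation).
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Perturb inputs x by one signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Step in the direction that increases the loss.
        x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```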
How do I install UnlearnDiffAtk Benchmark?
Installation instructions are provided in the documentation. Typically, it involves running a single command to set up the tool and its dependencies.