Browse and filter AI model evaluation results
Visualize amino acid changes in protein sequences interactively
Life System and Habit Tracker
Search for tagged characters in Animagine datasets
Explore how datasets shape classifier biases
Generate a detailed dataset report
Explore and filter model evaluation results
VLMEvalKit Evaluation Results Collection
A leaderboard that demonstrates LMM reasoning capabilities
Search and save datasets generated with an LLM in real time
Statistical analysis for linear regression
Display document size plots
Evaluate diversity in datasets to improve fairness
The UnlearnDiffAtk Benchmark is a data visualization tool designed to help users evaluate and compare AI models through the lens of differentiable attacks. It provides an interactive platform for browsing and filtering model evaluation results, helping researchers and developers understand model vulnerabilities and performance more effectively.
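The browse-and-filter workflow described above can be approximated in a few lines of pandas. The sketch below is purely illustrative: it assumes the evaluation results have been exported to a CSV file, and the file name and column names (attack_type, success_rate) are hypothetical, not the benchmark's actual schema.

import pandas as pd

# Load a hypothetical CSV export of the evaluation results.
results = pd.read_csv("results.csv")

# Keep gradient-based attacks where the attack success rate is below 10%.
robust = results[
    (results["attack_type"] == "gradient") & (results["success_rate"] < 0.10)
]

# Show the most robust entries first.
print(robust.sort_values("success_rate").head())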
What is the UnlearnDiffAtk Benchmark used for?
The UnlearnDiffAtk Benchmark is used to evaluate and compare AI models based on their robustness against differentiable attacks. It helps identify vulnerabilities and understand model performance under various scenarios.
What types of attacks are supported by UnlearnDiffAtk Benchmark?
The benchmark supports a wide range of differentiable attacks, including gradient-based and black-box attacks. For a full list, refer to the documentation.
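To make "gradient-based" concrete, the sketch below shows the classic FGSM-style perturbation in PyTorch: it nudges an input in the direction that increases the model's loss. This is a generic illustration of the attack family, not the benchmark's actual implementation; the toy model and tensors are invented for the example.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x so the model's loss on label y increases.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step by epsilon in the sign of the loss gradient.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a throwaway linear classifier.
model = nn.Linear(10, 2)
x = torch.randn(4, 10)           # batch of 4 random inputs
y = torch.randint(0, 2, (4,))    # random binary labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation is bounded by epsilon

Black-box attacks, by contrast, query the model without access to its gradients; the documentation lists which variants the benchmark includes.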
How do I install UnlearnDiffAtk Benchmark?
Installation instructions are provided in the documentation. Setup typically involves running a single command to install the tool and its dependencies.