Browse and filter AI model evaluation results
M-RewardBench Leaderboard
https://huggingface.co/spaces/VIDraft/mouse-webgen
Search and save datasets generated with an LLM in real time
Browse and compare Indic language LLMs on a leaderboard
Display a welcome message on a webpage
Analyze weekly and daily trader performance in Olas Predict
Browse and submit evaluation results for AI benchmarks
Generate a co-expression network for genes
Evaluate LLMs on Kazakh multiple-choice (MC) tasks
Embed and use ZeroEval for evaluation tasks
An AI app that helps you chat with your CSV and Excel files.
Analyze and visualize car data
UnlearnDiffAtk Benchmark is a data visualization tool designed to help users evaluate and compare AI models through the lens of differentiable attacks. It provides an interactive platform to browse and filter results of model evaluations, enabling researchers and developers to understand model vulnerabilities and performance more effectively.
What is the UnlearnDiffAtk Benchmark used for?
The UnlearnDiffAtk Benchmark is used to evaluate and compare AI models based on their robustness against differentiable attacks. It helps identify vulnerabilities and understand model performance under various scenarios.
What types of attacks are supported by UnlearnDiffAtk Benchmark?
The benchmark supports a wide range of differentiable attacks, including gradient-based and black-box attacks. For a full list, refer to the documentation.
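To illustrate the gradient-based family of attacks mentioned above, here is a minimal FGSM-style sketch in NumPy. This is not the benchmark's actual attack code; the logistic model, weights, and epsilon are illustrative assumptions. The idea is the same one differentiable attacks build on: compute the gradient of the loss with respect to the input and step in the direction that increases the loss.

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """One-step gradient-sign (FGSM-style) attack on a toy logistic model.

    Model: p = sigmoid(w.x + b); loss = binary cross-entropy.
    dL/dx = (p - y) * w, so we perturb x by eps in the sign of that
    gradient to push the prediction away from the true label y.
    NOTE: illustrative sketch only, not the UnlearnDiffAtk implementation.
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability
    grad_x = (p - y) * w              # gradient of BCE loss w.r.t. x
    return x + eps * np.sign(grad_x)  # adversarial example

# Toy model and a correctly classified input (true label y = 1)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])              # logit w.x + b = 1.5, p ~ 0.82

x_adv = fgsm_attack(x, w, b, y=1.0, eps=0.5)
# The perturbation lowers the logit: w.x_adv + b < w.x + b
```

Black-box attacks follow the same "maximize the loss" objective but estimate the gradient from queries instead of computing it analytically.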
How do I install UnlearnDiffAtk Benchmark?
Installation instructions are provided in the documentation. Typically, it involves running a single command to set up the tool and its dependencies.