Compare audio representation models using benchmark results
Leaderboard of information retrieval models in French
Track, rank and evaluate open LLMs and chatbots
Find recent high-liked Hugging Face models
View and submit LLM evaluations
View and submit LLM benchmark evaluations
Request model evaluation on COCO val 2017 dataset
Visualize model performance on function calling tasks
Rank machines based on LLaMA 7B v2 benchmark results
Open Persian LLM Leaderboard
Launch web-based model application
Convert Stable Diffusion checkpoint to Diffusers and open a PR
Browse and evaluate language models
ARCH is a benchmarking tool for comparing audio representation models. It provides a single platform for evaluating and analyzing the performance of different audio models, helping researchers and developers make informed choices.
What models does ARCH support?
ARCH supports a wide range of audio representation models, including popular ones such as HuBERT and Wav2Vec. The list of supported models is continuously updated.
How do I interpret the benchmark results?
Benchmark results are presented in a user-friendly format, including metrics, visualizations, and comparisons. Users can focus on the metrics that matter most for their specific application.
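One common way to compare models across several tasks is to rank them by their mean score. The sketch below illustrates this; the model names and scores are made-up placeholders, not actual ARCH output.

```python
# Hypothetical benchmark results: model name -> {task: score}.
# These names and numbers are illustrative only.
results = {
    "hubert-base": {"task_a": 0.61, "task_b": 0.72},
    "wav2vec-base": {"task_a": 0.55, "task_b": 0.68},
}

def rank_by_mean_score(results):
    """Rank models by mean score across tasks (higher is better)."""
    means = {
        model: sum(scores.values()) / len(scores)
        for model, scores in results.items()
    }
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for model, mean in rank_by_mean_score(results):
    print(f"{model}: {mean:.3f}")
```

Averaging across tasks is only one aggregation choice; when tasks use different metrics or scales, per-task comparison is usually more informative.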
Can I add custom models to ARCH?
Yes, ARCH allows users to upload and benchmark their custom audio representation models, enabling flexible and personalized evaluations.
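A typical way to benchmark a frozen audio representation is a linear probe: extract embeddings with the model, then fit a lightweight linear classifier on a labelled downstream task. The sketch below assumes a hypothetical `extract_embeddings` function standing in for your own model's feature extractor (here faked with a random projection); it is an illustration of the general technique, not ARCH's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_embeddings(waveforms, dim=64):
    # Stand-in for a real audio model's feature extractor:
    # projects raw waveform frames to a fixed-size embedding.
    proj = rng.normal(size=(waveforms.shape[1], dim))
    return waveforms @ proj

def linear_probe_accuracy(train_x, train_y, test_x, test_y):
    # One-vs-all least-squares linear probe: a lightweight way to
    # score frozen representations on a labelled downstream task.
    classes = np.unique(train_y)
    targets = (train_y[:, None] == classes[None, :]).astype(float)
    w, *_ = np.linalg.lstsq(train_x, targets, rcond=None)
    preds = classes[np.argmax(test_x @ w, axis=1)]
    return float(np.mean(preds == test_y))
```

Because the probe is linear and cheap to train, differences in its accuracy mostly reflect the quality of the underlying embeddings rather than the classifier.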