Analyze model errors with interactive pages
Run benchmarks on prediction models
Display model benchmark results
View NSQL Scores for Models
Create demo spaces for models on Hugging Face
Teach, test, and evaluate language models with MTEB Arena
Convert and upload model files for Stable Diffusion
Upload a machine learning model to Hugging Face Hub
Load AI models and prepare your space
View RL Benchmark Reports
Browse and evaluate ML tasks in MLIP Arena
Explore GenAI model efficiency on ML.ENERGY leaderboard
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
ExplaiNER is a tool for model benchmarking and error analysis. It provides an interactive environment that helps users identify and understand model errors through detailed, user-friendly pages. Whether you're refining a model's performance or comparing candidate models, ExplaiNER offers the insights you need to make data-driven decisions.
What models does ExplaiNER support?
ExplaiNER supports models built with a wide range of frameworks, including TensorFlow, PyTorch, and scikit-learn.
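In practice, a cross-framework workflow usually normalizes each model's predictions into one common format before error analysis. The sketch below is a minimal illustration of that idea for simple classifiers; the helper names are ours, not part of ExplaiNER's API, and PyTorch is imported lazily so the other helpers run without it installed.

```python
# Illustrative adapters that turn framework-specific outputs into a
# plain NumPy label array. These helpers are hypothetical examples,
# not ExplaiNER functions.
import numpy as np

def sklearn_to_labels(model, X):
    # scikit-learn estimators expose predict() directly
    return np.asarray(model.predict(X))

def torch_to_labels(model, X):
    # PyTorch modules typically return logits; take the argmax per row
    import torch
    with torch.no_grad():
        logits = model(torch.as_tensor(X, dtype=torch.float32))
    return logits.argmax(dim=1).numpy()

def keras_to_labels(model, X):
    # Keras/TensorFlow models return class probabilities from predict()
    return model.predict(X).argmax(axis=1)
```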
Can I compare multiple models at once?
Yes, ExplaiNER allows you to upload and compare multiple models simultaneously, making it easy to identify the best-performing solution for your needs.
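Conceptually, a side-by-side comparison scores each model on the same test set and records where each one goes wrong. The following sketch shows one way to compute that summary yourself, assuming predictions are already aligned with a shared test set; the names (`compare_models`, `preds_by_model`) are illustrative, not ExplaiNER's actual API.

```python
# Minimal per-model accuracy and error-index report over a shared test set.
import numpy as np

def compare_models(preds_by_model, y_true):
    """Return each model's accuracy and the indices it predicts wrongly."""
    y_true = np.asarray(y_true)
    report = {}
    for name, preds in preds_by_model.items():
        errors = np.flatnonzero(np.asarray(preds) != y_true)
        report[name] = {
            "accuracy": 1 - len(errors) / len(y_true),
            "error_indices": errors,
        }
    return report

report = compare_models(
    {"baseline": [0, 1, 1, 0], "candidate": [0, 1, 0, 0]},
    y_true=[0, 1, 0, 1],
)
for name, stats in report.items():
    print(name, f"accuracy={stats['accuracy']:.2f}", stats["error_indices"])
```

Inspecting `error_indices` across models makes it easy to see which examples every model misses versus which are fixed by a particular candidate.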
How do I access historical benchmarking data?
Historical data is stored automatically in ExplaiNER. You can retrieve it by navigating to the "Reports" section and selecting the desired date or model configuration.
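If you prefer to work with stored results programmatically, a simple pattern is to filter report files by date. The sketch below assumes a hypothetical layout of dated JSON files under a `reports/` directory; the actual storage location and format are managed by ExplaiNER.

```python
# Load stored benchmark reports matching a date prefix.
# The reports/ directory and filename convention here are assumptions
# for illustration, not ExplaiNER's documented storage layout.
import json
from pathlib import Path

def load_reports(reports_dir="reports", date_prefix="2024-06"):
    """Return every report whose filename starts with the date prefix."""
    reports = []
    for path in sorted(Path(reports_dir).glob(f"{date_prefix}*.json")):
        with path.open() as f:
            reports.append(json.load(f))
    return reports
```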