Analyze model errors with interactive pages
ExplaiNER is a tool for model benchmarking and error analysis. It provides an interactive environment that helps you identify and understand model errors through detailed, user-friendly pages. Whether you're refining a single model's performance or comparing candidate models, ExplaiNER surfaces the insights you need to make data-driven decisions.
What models does ExplaiNER support?
ExplaiNER supports models built with a wide range of popular frameworks, including TensorFlow, PyTorch, and scikit-learn.
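Supporting several frameworks typically comes down to wrapping each model behind a common prediction interface before analysis. The sketch below illustrates that idea with a scikit-learn model; the `ModelAdapter` class and this exact pattern are illustrative assumptions, not part of ExplaiNER's actual API.

```python
# Minimal sketch: wrap any framework's model behind one predict() interface
# so an error-analysis tool can treat all models uniformly. ModelAdapter is
# a hypothetical name for illustration, not an ExplaiNER class.
from typing import Callable

import numpy as np
from sklearn.linear_model import LogisticRegression


class ModelAdapter:
    """Uniform wrapper: any model becomes predict(X) -> labels."""

    def __init__(self, name: str, predict_fn: Callable[[np.ndarray], np.ndarray]):
        self.name = name
        self.predict_fn = predict_fn

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self.predict_fn(X)


# Toy data and a scikit-learn model; a PyTorch or TensorFlow model would be
# wrapped the same way by passing its own inference function.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

sk_model = LogisticRegression().fit(X, y)
adapter = ModelAdapter("logreg", sk_model.predict)

errors = adapter.predict(X) != y
print(f"{adapter.name}: {errors.sum()} errors out of {len(y)} examples")
```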
Can I compare multiple models at once?
Yes, ExplaiNER allows you to upload and compare multiple models simultaneously, making it easy to identify the best-performing solution for your needs.
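As a rough illustration of what a side-by-side comparison involves, the sketch below evaluates two candidate models on the same held-out set and prints a small summary table. The metric choice and layout here are assumptions for illustration, not ExplaiNER's actual report format.

```python
# Sketch: compare several candidate models on one held-out test set,
# the kind of summary an error-analysis tool surfaces interactively.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

print(f"{'model':<15}{'accuracy':>10}{'f1':>10}")
for name, model in candidates.items():
    preds = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:<15}{accuracy_score(y_te, preds):>10.3f}"
          f"{f1_score(y_te, preds):>10.3f}")
```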
How do I access historical benchmarking data?
Historical data is stored automatically in ExplaiNER. You can retrieve it by navigating to the "Reports" section and selecting the desired date or model configuration.
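Under the hood, this kind of history amounts to timestamped run records that can be filtered by date or model configuration. The sketch below shows one way such records might be logged and queried; the `runs.jsonl` file name and record schema are assumptions for illustration, as ExplaiNER's internal storage is not documented here.

```python
# Sketch: append timestamped benchmark records to a JSON-lines log and
# query them by model configuration, analogous to what a "Reports" view
# does under the hood. File name and schema are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("runs.jsonl")  # hypothetical append-only log of benchmark runs


def log_run(model: str, accuracy: float) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "accuracy": accuracy,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")


def runs_for_model(model: str) -> list[dict]:
    """Retrieve all historical records for one model configuration."""
    with LOG.open() as f:
        return [r for r in map(json.loads, f) if r["model"] == model]


log_run("logreg", 0.91)
log_run("logreg", 0.93)
print(runs_for_model("logreg"))
```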