Analyze model errors with interactive pages
ExplaiNER is a tool for model benchmarking and error analysis. It provides an interactive environment that helps you identify and understand model errors through detailed, easy-to-navigate pages. Whether you're refining a model's performance or comparing different solutions, ExplaiNER offers the insights you need to make data-driven decisions.
What models does ExplaiNER support?
ExplaiNER supports a wide range of models, including popular frameworks like TensorFlow, PyTorch, and Scikit-learn.
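ExplaiNER's own upload workflow isn't shown here, but as a rough illustration of the kind of per-class error analysis such a tool surfaces, here is a hedged scikit-learn sketch (one of the supported frameworks); the dataset and model are arbitrary placeholders, not anything ExplaiNER prescribes:

```python
# Hypothetical sketch: per-class error analysis of the sort an interactive
# error page would display. Dataset/model choices are illustrative only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Rows are true classes, columns are predicted classes; the off-diagonal
# cells are exactly the errors an analysis page would let you drill into.
cm = confusion_matrix(y_te, pred)
errors = [i for i, (t, p) in enumerate(zip(y_te, pred)) if t != p]
print(cm)
print("misclassified test indices:", errors)
```

Listing the misclassified indices, rather than just an aggregate score, is the core idea behind instance-level error pages: you can inspect each failing example individually.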
Can I compare multiple models at once?
Yes, ExplaiNER allows you to upload and compare multiple models simultaneously, making it easy to identify the best-performing solution for your needs.
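To make the value of side-by-side comparison concrete, here is a minimal sketch in the spirit of a multi-model view; the two models and the dataset are hypothetical stand-ins, not ExplaiNER's actual API:

```python
# Hypothetical sketch: compare two models and isolate the test examples
# on which they disagree -- often the most informative cases to inspect.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=5000).fit(X_tr, y_tr),
    "forest": RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
}

preds = {name: m.predict(X_te) for name, m in models.items()}
for name, p in preds.items():
    print(name, "accuracy:", (p == y_te).mean())

# Examples where exactly one of the two models errs: these reveal
# each model's distinctive failure modes, beyond the headline accuracy.
only_one_wrong = [
    i for i in range(len(y_te))
    if (preds["logreg"][i] == y_te[i]) != (preds["forest"][i] == y_te[i])
]
print("examples where exactly one model errs:", len(only_one_wrong))
```

Aggregate accuracy alone can hide this: two models with similar scores may fail on entirely different examples, which is what a comparison view exposes.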
How do I access historical benchmarking data?
Historical data is stored automatically in ExplaiNER. You can retrieve it by navigating to the "Reports" section and selecting the desired date or model configuration.