Benchmark models using PyTorch and OpenVINO
OpenVINO Benchmark is a tool designed to evaluate and compare model performance using OpenVINO and PyTorch. It helps users measure inference speed, latency, and other critical metrics so they can optimize their models across different hardware configurations.
• Multi-framework support: Benchmark models from both OpenVINO and PyTorch.
• Performance metrics: Measure inference speed, latency, and throughput.
• Multi-device support: Test performance across CPUs, GPUs, and other accelerators.
• Customizable settings: Tailor benchmarking parameters to specific use cases.
• Detailed reports: Generate comprehensive reports for in-depth analysis.
Pro tip: Use the benchmarking results to identify bottlenecks and optimize your model further.
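As a rough sketch of the kind of comparison the tool automates (not its actual implementation), the snippet below times a PyTorch model against an OpenVINO-compiled copy of the same network. The resnet18 model, the "CPU" target, and the iteration counts are illustrative choices, and openvino 2023.1+ is assumed for ov.convert_model.

```python
# Minimal latency comparison sketch (illustrative, not the tool's internal code).
# Assumes torch, torchvision, and openvino (2023.1+) are installed.
import time

import torch
import torchvision
import openvino as ov

model = torchvision.models.resnet18(weights=None).eval()
example = torch.randn(1, 3, 224, 224)

# --- PyTorch baseline ---
with torch.no_grad():
    for _ in range(10):                      # warm-up
        model(example)
    start = time.perf_counter()
    for _ in range(100):
        model(example)
    torch_latency = (time.perf_counter() - start) / 100

# --- OpenVINO ---
core = ov.Core()
print(core.available_devices)                # e.g. ['CPU', 'GPU'] on supported hardware
ov_model = ov.convert_model(model, example_input=example)
compiled = core.compile_model(ov_model, "CPU")   # swap in "GPU" or "AUTO" to test other devices
inp = example.numpy()

for _ in range(10):                          # warm-up
    compiled([inp])
start = time.perf_counter()
for _ in range(100):
    compiled([inp])
ov_latency = (time.perf_counter() - start) / 100

print(f"PyTorch : {torch_latency * 1e3:.2f} ms/iter ({1 / torch_latency:.1f} FPS)")
print(f"OpenVINO: {ov_latency * 1e3:.2f} ms/iter ({1 / ov_latency:.1f} FPS)")
```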
What models are supported by OpenVINO Benchmark?
OpenVINO Benchmark supports models in OpenVINO IR format and PyTorch models. It also supports other formats like TensorFlow and ONNX through conversion tools.
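If a model starts out in another format, one possible preparation step (a sketch assuming openvino 2023.1+ and a placeholder model.onnx file, not files shipped with the tool) is to convert it to OpenVINO IR before benchmarking:

```python
# Sketch: converting an ONNX model to OpenVINO IR ahead of benchmarking.
# "model.onnx" and "model.xml" are placeholder paths.
import openvino as ov

ov_model = ov.convert_model("model.onnx")   # TensorFlow SavedModel directories are accepted too
ov.save_model(ov_model, "model.xml")        # writes model.xml + model.bin (the IR format)

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")
print(compiled.inputs, compiled.outputs)    # quick sanity check of the converted graph
```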
Can I run OpenVINO Benchmark on any platform?
Yes, OpenVINO Benchmark can run on multiple platforms, including Windows, Linux, and macOS, as long as you have the required dependencies installed.
What performance metrics does OpenVINO Benchmark measure?
OpenVINO Benchmark measures inference speed, latency, and throughput. It also provides insights into resource utilization.
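However the numbers are produced, they are usually derived from per-iteration timings. The generic sketch below shows one common way to summarize them; run_once is a hypothetical stand-in for a single inference call on whichever backend is being measured.

```python
# Generic sketch for turning per-iteration timings into latency/throughput figures.
# run_once() is a hypothetical stand-in for one inference call on any backend.
import time
import statistics

def summarize(run_once, warmup=10, iters=100):
    for _ in range(warmup):          # warm-up runs are excluded from the stats
        run_once()
    timings = []
    for _ in range(iters):
        start = time.perf_counter()
        run_once()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return {
        "median_latency_ms": statistics.median(timings) * 1e3,
        "p90_latency_ms": timings[int(0.9 * len(timings)) - 1] * 1e3,
        "throughput_fps": len(timings) / sum(timings),
    }

# e.g. summarize(lambda: compiled([inp])) with the objects from the earlier sketch
```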