Benchmark models using PyTorch and OpenVINO
OpenVINO Benchmark is a tool for evaluating and comparing the performance of models run with OpenVINO and PyTorch. It helps users measure inference speed, latency, and other key metrics so they can optimize model performance across different hardware configurations.
• Multi-framework support: Benchmark models from both OpenVINO and PyTorch.
• Performance metrics: Measure inference speed, latency, and throughput.
• Multi-device support: Test performance across CPUs, GPUs, and other accelerators.
• Customizable settings: Tailor benchmarking parameters to specific use cases.
• Detailed reports: Generate comprehensive reports for in-depth analysis.
Pro tip: Use the benchmarking results to identify bottlenecks and optimize your model further.
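As a rough illustration of the kind of comparison the tool performs, the sketch below times the same model in PyTorch eager mode and after conversion to OpenVINO on CPU. This is a minimal example under assumed conditions, not the tool's actual implementation: the model (torchvision's resnet18), input shape, and iteration counts are illustrative placeholders.

```python
import time

import openvino as ov
import torch
import torchvision

# Illustrative model and dummy input; swap in your own model and shapes.
model = torchvision.models.resnet18(weights=None).eval()
example = torch.randn(1, 3, 224, 224)

# Convert the PyTorch model to an OpenVINO model and compile it for CPU.
ov_model = ov.convert_model(model, example_input=example)
compiled = ov.Core().compile_model(ov_model, "CPU")

def mean_latency(fn, warmup=5, iters=50):
    """Average wall-clock time per call after a short warmup."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

with torch.no_grad():
    torch_s = mean_latency(lambda: model(example))
ov_s = mean_latency(lambda: compiled(example.numpy()))

print(f"PyTorch  mean latency: {torch_s * 1e3:.2f} ms")
print(f"OpenVINO mean latency: {ov_s * 1e3:.2f} ms")
print(f"OpenVINO throughput:   {1.0 / ov_s:.1f} inferences/s")
```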
What models are supported by OpenVINO Benchmark?
OpenVINO Benchmark supports models in OpenVINO IR format and PyTorch models. It also supports other formats like TensorFlow and ONNX through conversion tools.
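As a loose sketch of that conversion path, OpenVINO's `openvino.convert_model` and `openvino.save_model` APIs can turn an ONNX file (or a TensorFlow SavedModel) into IR files that can then be benchmarked; the file paths below are placeholders.

```python
import openvino as ov

# Convert an ONNX model (path is a placeholder); a TensorFlow SavedModel
# directory can be passed the same way.
ov_model = ov.convert_model("model.onnx")
ov.save_model(ov_model, "model.xml")  # writes the .xml/.bin IR pair

# The resulting IR can then be loaded and compiled like any OpenVINO model.
compiled = ov.Core().compile_model("model.xml", "CPU")
```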
Can I run OpenVINO Benchmark on any platform?
Yes, OpenVINO Benchmark can run on multiple platforms, including Windows, Linux, and macOS, as long as you have the required dependencies installed.
What performance metrics does OpenVINO Benchmark measure?
OpenVINO Benchmark measures inference speed, latency, and throughput. It also provides insights into resource utilization.
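For a sense of how these numbers relate, mean latency and throughput can be derived from raw per-inference timings, as in the small helper below. The function name and sample values are purely hypothetical and do not reflect the tool's actual report format.

```python
import statistics

def summarize(latencies_s, batch_size=1):
    """Turn raw per-inference timings (seconds) into report-style metrics."""
    mean_s = statistics.mean(latencies_s)
    p90_s = statistics.quantiles(latencies_s, n=10)[-1]  # 90th-percentile latency
    return {
        "mean_latency_ms": mean_s * 1e3,
        "p90_latency_ms": p90_s * 1e3,
        "throughput_fps": batch_size / mean_s,  # inferences per second
    }

# Purely hypothetical timings for illustration.
print(summarize([0.012, 0.011, 0.013, 0.012, 0.014]))
```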