Benchmark models using PyTorch and OpenVINO
Evaluate adversarial robustness using generative models
Compare and rank LLMs using benchmark scores
Rank machines based on LLaMA 7B v2 benchmark results
Display leaderboard of language model evaluations
Create and upload a Hugging Face model card
Display model benchmark results
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
Open Persian LLM Leaderboard
Display leaderboard for earthquake intent classification models
Display LLM benchmark leaderboard and info
Find and download models from Hugging Face
Launch web-based model application
OpenVINO Benchmark is a tool designed to evaluate and compare the performance of models using OpenVINO and PyTorch. It helps users measure inference speed, latency, and throughput so they can optimize a model's performance across different hardware configurations.
• Multi-framework support: Benchmark both OpenVINO and PyTorch models.
• Performance metrics: Measure inference speed, latency, and throughput (see the timing sketch after this list).
• Multi-device support: Test performance across CPUs, GPUs, and other accelerators.
• Customizable settings: Tailor benchmarking parameters to specific use cases.
• Detailed reports: Generate comprehensive reports for in-depth analysis.
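As a rough illustration of the kind of measurement the tool reports, here is a minimal timing sketch using OpenVINO's Python API. It assumes the openvino package (2023.x-style API), a static-shape IR model at the hypothetical path model.xml, and a single model input; it is a sketch of the technique, not the tool's actual implementation.

```python
import time

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # hypothetical IR file
compiled = core.compile_model(model, "CPU")  # swap in "GPU" etc. to test other devices

# Random input matching the model's (assumed static) first input shape.
x = np.random.rand(*compiled.input(0).shape).astype(np.float32)

for _ in range(10):  # warm-up runs, excluded from timing
    compiled(x)

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    compiled(x)
elapsed = time.perf_counter() - start

print(f"avg latency: {elapsed / n_runs * 1000:.2f} ms")
print(f"throughput:  {n_runs / elapsed:.1f} inferences/s")
```

Averaging over many runs after a warm-up phase smooths out one-off costs such as cache population and lazy initialization, which is why single-inference timings tend to overstate latency.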
Pro tip: Use the benchmarking results to identify bottlenecks and optimize your model further.
What models are supported by OpenVINO Benchmark?
OpenVINO Benchmark supports models in OpenVINO IR format as well as PyTorch models. Other formats, such as TensorFlow and ONNX, can be used after conversion with OpenVINO's model conversion tools.
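For example, an ONNX model can be converted to OpenVINO IR before benchmarking; a minimal sketch, assuming the openvino package's conversion API (ov.convert_model, available in recent releases) and a hypothetical model.onnx file:

```python
import openvino as ov

# "model.onnx" is a hypothetical placeholder path.
ov_model = ov.convert_model("model.onnx")
ov.save_model(ov_model, "model.xml")  # writes model.xml plus a model.bin weights file
```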
Can I run OpenVINO Benchmark on any platform?
Yes, OpenVINO Benchmark can run on multiple platforms, including Windows, Linux, and macOS, as long as you have the required dependencies installed.
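To see which devices are available on a given machine, OpenVINO's runtime can be queried directly; a small sketch, assuming the openvino package is installed:

```python
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'], depending on the host
```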
What performance metrics does OpenVINO Benchmark measure?
OpenVINO Benchmark measures inference speed, latency, and throughput. It also provides insights into resource utilization.
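The PyTorch side of a comparison can be timed the same way; a hedged sketch using eager-mode inference (torchvision's resnet18 is just an illustrative stand-in for any nn.Module):

```python
import time

import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with torch.inference_mode():
    for _ in range(10):  # warm-up
        model(x)
    n_runs = 100
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"PyTorch avg latency: {elapsed / n_runs * 1000:.2f} ms")
print(f"PyTorch throughput:  {n_runs / elapsed:.1f} inferences/s")
```

Comparing these numbers against the OpenVINO timings on the same input shape and device gives a like-for-like view of where each runtime spends its time.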