Convert Hugging Face models to OpenVINO format
OpenVINO Export is a tool designed to convert Hugging Face models into the OpenVINO format. It enables seamless integration of models from the Hugging Face ecosystem into Intel's OpenVINO toolkit, allowing developers to leverage OpenVINO's optimization capabilities for improved performance on Intel hardware.
• Seamless Model Conversion: Easily convert Hugging Face models to the OpenVINO IR format.
• Hardware Optimization: Optimized for Intel CPUs, GPUs, and other accelerators.
• Broad Model Support: Compatible with popular models like BERT, RoBERTa, and other transformer-based architectures.
• Integration with OpenVINO Tools: Exported models are ready for use with OpenVINO's Model Optimizer and Inference Engine.
Install the OpenVINO Export package:
pip install openvino-export
Import the converter and load your Hugging Face model:
from openvino_export import convert
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased')
Convert the model to OpenVINO format:
openvino_model = convert(model)  # returns the model in OpenVINO format
Export the model:
openvino_model.export('model.xml', 'model.bin')  # writes the IR topology (.xml) and weights (.bin)
Use the exported model with OpenVINO's Inference Engine:
from openvino.inference_engine import IECore  # legacy Inference Engine API (OpenVINO <= 2021.x)
ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')  # load IR topology and weights
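To actually execute the network, load it onto a device and call infer. Below is a minimal sketch using the legacy Inference Engine API shown above; the zero-filled inputs are placeholders for real tokenized text, and the input names, shapes, and precisions are determined by your exported model:

import numpy as np

# Compile the network for the target device (CPU in this sketch)
exec_net = ie.load_network(network=net, device_name='CPU')

# Build placeholder inputs matching each declared input of the IR;
# a BERT-style model typically expects input_ids, attention_mask, etc.
# (the dtype may need to match the IR precision, e.g. int32)
dummy_inputs = {
    name: np.zeros(net.input_info[name].input_data.shape, dtype=np.int64)
    for name in net.input_info
}

# Run synchronous inference and fetch the first declared output
result = exec_net.infer(inputs=dummy_inputs)
output_blob = next(iter(net.outputs))
logits = result[output_blob]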
What models are supported by OpenVINO Export?
OpenVINO Export supports a wide range of Hugging Face models, including popular architectures like BERT, RoBERTa, and other transformer-based models. However, some models may require specific configurations or versions for optimal conversion.
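For example, a RoBERTa checkpoint converts with the same calls shown in the usage steps above (roberta-base is used here purely as an illustrative checkpoint):

from openvino_export import convert
from transformers import AutoModelForSequenceClassification

# Same flow as the BERT example, with a RoBERTa checkpoint
model = AutoModelForSequenceClassification.from_pretrained('roberta-base')
openvino_model = convert(model)
openvino_model.export('roberta.xml', 'roberta.bin')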
How do I use the exported model with OpenVINO?
After exporting the model, you can use OpenVINO's Inference Engine to load and run inference: use IECore to read the network, then execute inference on your target hardware.
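Note that OpenVINO 2022.1 and later deprecate IECore in favor of openvino.runtime.Core. A minimal sketch of the equivalent flow with the newer runtime API:

from openvino.runtime import Core

core = Core()
# read_model locates model.bin automatically when it sits next to model.xml
model = core.read_model('model.xml')
compiled = core.compile_model(model, 'CPU')
# Calling the compiled model runs inference; inputs must match its signature:
# results = compiled(inputs)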
What if I encounter issues during conversion?
Check your model's compatibility with the OpenVINO Export tool. Ensure that your Hugging Face model and the transformers library are up to date and match the versions the tool supports. If issues persist, refer to the OpenVINO documentation or community forums for troubleshooting.
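As a quick first check, print the installed package versions (the package names follow the installation step above; importlib.metadata is part of the Python standard library):

from importlib.metadata import version

# Confirm the installed versions before digging deeper
for pkg in ('transformers', 'openvino', 'openvino-export'):
    print(pkg, version(pkg))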