Convert Hugging Face models to OpenVINO format
OpenVINO Export is a tool designed to convert Hugging Face models into the OpenVINO format. It enables seamless integration of models from the Hugging Face ecosystem into Intel's OpenVINO toolkit, allowing developers to leverage OpenVINO's optimization capabilities for improved performance on Intel hardware.
• Seamless Model Conversion: Easily convert Hugging Face models to the OpenVINO IR format.
• Hardware Optimization: Optimized for Intel CPUs, GPUs, and other accelerators.
• Broad Model Support: Compatible with popular models like BERT, RoBERTa, and other transformer-based architectures.
• Integration with OpenVINO Tools: Exported models are ready for use with OpenVINO's Model Optimizer and Inference Engine.
Install the OpenVINO Export package:
pip install openvino-export
Import the converter and load your Hugging Face model:
from transformers import AutoModelForSequenceClassification
from openvino_export import convert

model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased')
Convert the model to OpenVINO format:
openvino_model = convert(model)
Export the model:
openvino_model.export('model.xml', 'model.bin')
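The export produces an IR pair: model.xml describes the network topology, while model.bin holds the binary weights. The XML side is plain text, so it can be inspected with standard tooling. The miniature IR below is hand-written for illustration only; a real exported BERT topology contains hundreds of layers:

```python
import xml.etree.ElementTree as ET

# Hand-written miniature of an OpenVINO IR topology file (illustrative,
# not produced by a real export).
ir_xml = """<?xml version="1.0"?>
<net name="toy_net" version="10">
  <layers>
    <layer id="0" name="input_ids" type="Parameter"/>
    <layer id="1" name="matmul_1" type="MatMul"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
</net>
"""

root = ET.fromstring(ir_xml)
for layer in root.find("layers"):
    # Each <layer> entry names one operation in the graph.
    print(layer.get("id"), layer.get("name"), layer.get("type"))
```

Listing the layer types of an exported model this way is a quick sanity check that the conversion produced the operations you expect.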
Use the exported model with OpenVINO's Inference Engine:
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')
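Reading the network is only the first half; to run it, the network must be compiled for a device and fed an input tensor. A minimal sketch, assuming the model.xml/model.bin pair from the previous steps exists, that the network exposes a single input, and a static sequence length of 128 (the actual input names, shape, and dtype depend on the exported model):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')

# Compile the network for a target device ('CPU', 'GPU', ...).
exec_net = ie.load_network(network=net, device_name='CPU')

# Take the first input name the network reports; real BERT exports
# usually expose input_ids, attention_mask, and token_type_ids.
input_name = next(iter(net.input_info))
dummy = np.zeros((1, 128), dtype=np.int64)  # assumed static shape
result = exec_net.infer(inputs={input_name: dummy})
```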
What models are supported by OpenVINO Export?
OpenVINO Export supports a wide range of Hugging Face models, including popular architectures like BERT, RoBERTa, and other transformer-based models. However, some models may require specific configurations or versions for optimal conversion.
How do I use the exported model with OpenVINO?
After exporting the model, you can use OpenVINO's Inference Engine to load and run inference. Use IECore to read the network and execute inference on your target hardware.
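Because IR networks are typically compiled with static input shapes, token-id sequences must be padded or truncated to the network's fixed length before calling infer. A minimal helper sketch (`pad_to_length` and the pad id of 0 are illustrative assumptions, not part of OpenVINO Export):

```python
def pad_to_length(token_ids, length, pad_id=0):
    """Truncate or right-pad a token-id list to a fixed input length."""
    return token_ids[:length] + [pad_id] * max(0, length - len(token_ids))

# [101, ..., 102] mimic BERT's [CLS] ... [SEP] token ids.
print(pad_to_length([101, 2023, 102], 8))  # [101, 2023, 102, 0, 0, 0, 0, 0]
print(pad_to_length(list(range(10)), 4))   # [0, 1, 2, 3]
```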
What if I encounter issues during conversion?
Check your model's compatibility with the OpenVINO Export tool. Ensure that your Hugging Face model is up-to-date and matches the supported versions. If issues persist, refer to the OpenVINO documentation or community forums for troubleshooting.