Convert PaddleOCR models to ONNX format
PaddleOCRModelConverter is a tool that converts PaddleOCR models into the ONNX (Open Neural Network Exchange) format. Once converted, a model is no longer tied to the Paddle ecosystem and can be deployed with any ONNX-compatible runtime, framework, or platform.
• Compatibility: Converts PaddleOCR models to ONNX format for broader compatibility.
• Flexibility: Supports deployment on multiple devices and frameworks.
• High Performance: Optimizes models for inference speed and efficiency.
• Easy Integration: Simplifies the process of using PaddleOCR models in different workflows.
• Model Support: Works with a wide range of PaddleOCR models for text recognition, detection, and other tasks.
Installation:
pip install paddleocr paddleonnx

Usage:
paddleonnx_model_exporter --model_dir <model_path> --output_dir <output_path>
What is ONNX and why is it useful?
ONNX is an open standard for representing machine learning models. A model exported to ONNX can be loaded by any runtime or framework that supports the format, which decouples the framework used for training from the environment used for deployment and lets you pick the fastest runtime for your target hardware.
Can PaddleOCRModelConverter handle all PaddleOCR models?
PaddleOCRModelConverter supports a wide range of PaddleOCR models, but certain models with proprietary or unsupported operations may not be fully compatible. Check the official documentation for specific model support.
How do I optimize the converted ONNX model for inference?
You can use tools like ONNX Runtime or TensorRT to further optimize the ONNX model for inference. These tools provide options for quantization, pruning, and other optimizations to improve performance.