Convert PaddleOCR models to ONNX format
PaddleOCRModelConverter is a tool for converting PaddleOCR models into the ONNX (Open Neural Network Exchange) format. The conversion lets the models run across different frameworks and platforms, giving greater flexibility when deploying them in varied environments.
• Compatibility: Converts PaddleOCR models to ONNX format for broader compatibility.
• Flexibility: Supports deployment on multiple devices and frameworks.
• High Performance: Converted models can be served by optimized ONNX runtimes for fast, efficient inference.
• Easy Integration: Simplifies the process of using PaddleOCR models in different workflows.
• Model Support: Works with a wide range of PaddleOCR models for text recognition, detection, and other tasks.
Install PaddleOCR together with the standard Paddle2ONNX exporter:
pip install paddleocr paddle2onnx
Then convert an exported PaddleOCR inference model to ONNX:
paddle2onnx --model_dir <model_path> --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file <output_path>/model.onnx
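After conversion, a quick way to confirm the model loads and runs is to execute it with ONNX Runtime. The sketch below is illustrative: the file name model.onnx and the 1x3x640x640 input shape are assumptions, and real inference must use PaddleOCR's actual preprocessing.

# Sanity-check a converted model with ONNX Runtime (illustrative sketch).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy NCHW tensor; the 640x640 shape is an assumption, not a requirement.
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])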
What is ONNX and why is it useful?
ONNX is an open standard for representing machine learning models, so a model trained in one framework can be executed by another framework or runtime. Because optimized runtimes such as ONNX Runtime and TensorRT consume the format directly, it also enables faster inference across a wide range of hardware.
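For example, a converted model can be inspected with the official onnx Python package; the file name below is an assumption carried over from the conversion step.

# Inspect and validate an ONNX model (illustrative sketch).
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)             # verifies the graph is structurally valid
print(model.opset_import[0].version)        # opset version the model targets
print([i.name for i in model.graph.input])  # names of the graph inputs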
Can PaddleOCRModelConverter handle all PaddleOCR models?
PaddleOCRModelConverter supports a wide range of PaddleOCR models, but models that rely on custom Paddle operators without ONNX equivalents may not convert cleanly. Check the official documentation for the current list of supported models.
How do I optimize the converted ONNX model for inference?
You can use tools like ONNX Runtime or TensorRT to further optimize the converted model: ONNX Runtime offers graph optimizations and quantization, while TensorRT adds layer fusion and reduced-precision (FP16/INT8) execution on NVIDIA GPUs. These typically improve latency and throughput, though accuracy should be re-validated afterwards.
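As a concrete example, dynamic INT8 quantization is available out of the box in ONNX Runtime; the file names below are illustrative, and accuracy should be re-checked on real OCR data after quantizing.

# Dynamic INT8 quantization with ONNX Runtime (illustrative sketch).
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",        # converted model from the steps above
    model_output="model.int8.onnx",  # quantized output path (assumed name)
    weight_type=QuantType.QInt8,     # store weights as signed 8-bit integers
)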