Multilingual Text Embedding Model Pruner
MTEM Pruner is a Multilingual Text Embedding Model Pruner designed to simplify and optimize multilingual text embedding models. The tool lets users prune a multilingual model down to a single target language, making it smaller and more specialized for a specific use case. By removing the parameters a monolingual application never needs, MTEM Pruner helps improve inference speed, memory usage, and overall performance.
Run the following command to install the MTEM Pruner package:

    pip install mtem-pruner

Then load a multilingual model and prune it down to a single target language:

    import mtem_pruner

    model = load_multilingual_model("xlm-roberta-base")
    pruned_model = mtem_pruner.prune(model, target_lang="en")
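Conceptually, pruning an embedding model to one language amounts to keeping only the embedding rows for tokens that language actually uses, and remapping token ids to the smaller table. The following is a minimal self-contained sketch of that idea using numpy; it illustrates the general technique, not MTEM Pruner's actual implementation, and `prune_embeddings` is a hypothetical helper name:

```python
import numpy as np

def prune_embeddings(embeddings, kept_token_ids):
    """Keep only the embedding rows for tokens used by the target language.

    embeddings: (vocab_size, dim) array
    kept_token_ids: list of token ids to retain, in order
    Returns the pruned matrix and a mapping old_id -> new_id.
    """
    pruned = embeddings[kept_token_ids]  # select the surviving rows
    remap = {old: new for new, old in enumerate(kept_token_ids)}
    return pruned, remap

# Toy example: a 10-token vocabulary with 4-dim embeddings,
# of which only 4 tokens occur in the target language.
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))
pruned, remap = prune_embeddings(emb, [0, 3, 5, 9])

assert pruned.shape == (4, 4)
# A kept token's vector is unchanged; only its id moves.
assert np.array_equal(pruned[remap[5]], emb[5])
```

The memory saving scales with the fraction of the vocabulary that is dropped, which is why the gains are largest for models whose parameters are dominated by a big multilingual embedding table.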
What models are supported by MTEM Pruner?
MTEM Pruner supports popular multilingual models such as Multilingual BERT, XLM-RoBERTa, and DistilMultilingualBERT. Support for additional models is continuously being added.
Does pruning affect the model's accuracy?
While pruning reduces the model size and complexity, it is designed to retain the most important features for the target language. In many cases, the accuracy for the specific language remains comparable or even improves due to the focus on relevant parameters.
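One way to see why accuracy for the target language can hold up: if pruning only removes embedding rows for tokens that language never uses, the vectors for every remaining token are unchanged. A toy numpy check of that invariance (illustrative only, not the package's code):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, dim = 8, 3
emb = rng.normal(size=(vocab, dim))

kept = [1, 2, 4, 7]                 # token ids the target language uses
pruned = emb[kept]                  # pruned embedding table
remap = {old: new for new, old in enumerate(kept)}

# Embed the same "sentence" (a sequence of kept token ids) both ways.
sentence = [4, 1, 7]
full_vectors = emb[sentence]
pruned_vectors = pruned[[remap[t] for t in sentence]]

# The representations match exactly, so downstream behavior on
# target-language text is unaffected by dropping the unused rows.
assert np.allclose(full_vectors, pruned_vectors)
```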
Can I prune a model to support multiple languages?
MTEM Pruner is specifically designed for single-language pruning. However, you can run the pruning process multiple times for different languages if you need models for various languages.