Multilingual Text Embedding Model Pruner
MTEM Pruner is a Multilingual Text Embedding Model Pruner designed to simplify and optimize multilingual text embedding models. It prunes a multilingual model down to a single target language, making the model smaller and more specialized for a specific use case. By reducing the complexity of the multilingual model, MTEM Pruner improves inference speed, lowers memory usage, and sharpens overall performance for monolingual applications.
Run the following command to install the MTEM Pruner package:

pip install mtem-pruner

Then load a multilingual model and prune it down to a single target language. A minimal sketch, assuming the load_multilingual_model helper shown here is exported by the mtem_pruner package:

import mtem_pruner
from mtem_pruner import load_multilingual_model  # assumed import location

model = load_multilingual_model("xlm-roberta-base")
pruned_model = mtem_pruner.prune(model, target_lang="en")
What models are supported by MTEM Pruner?
MTEM Pruner supports popular multilingual models such as Multilingual BERT, XLM-RoBERTa, and DistilMultilingualBERT. Support for additional models is continuously being added.
Does pruning affect the model's accuracy?
While pruning reduces model size and complexity, the process is designed to retain the features that matter most for the target language. In many cases, accuracy on that language remains comparable, or even improves, because the model now focuses only on relevant parameters.
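One quick way to check this on your own data is to embed the same target-language sentences with the original and pruned models and compare the resulting vectors. A minimal sketch using NumPy; the embedding calls themselves depend on your model interface, so they appear only as comments:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity of two embedding vectors; a value near 1.0
    # means the pruned model preserved the original representation.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# vec_original = embed(model, "The quick brown fox")       # your embedding call
# vec_pruned = embed(pruned_model, "The quick brown fox")  # same sentence, pruned model
# print(cosine_similarity(vec_original, vec_pruned))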
Can I prune a model to support multiple languages?
MTEM Pruner is specifically designed for single-language pruning. However, you can run the pruning process multiple times for different languages if you need models for various languages.
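For example, a short sketch of that workflow, reusing the assumed load_multilingual_model helper from the installation example and reloading the base model before each pass so every pruning run starts from the full multilingual checkpoint:

import mtem_pruner
from mtem_pruner import load_multilingual_model  # assumed helper, as above

pruned_models = {}
for lang in ("en", "fr", "de"):
    # Reload the full multilingual base each time so one pruning pass
    # cannot affect the next.
    model = load_multilingual_model("xlm-roberta-base")
    pruned_models[lang] = mtem_pruner.prune(model, target_lang=lang)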