Display genomic embedding leaderboard
View and submit LLM benchmark evaluations
Display model benchmark results
Evaluate and submit AI model results for Frugal AI Challenge
Open Persian LLM Leaderboard
Launch web-based model application
Quantize a model for faster inference
Evaluate RAG systems with visual analytics
Create demo spaces for models on Hugging Face
View and compare language model evaluations
Convert Hugging Face models to OpenVINO format
Persian Text Embedding Benchmark
Leaderboard of information retrieval models in French
DGEB (the Diverse Genomic Embedding Benchmark) is a benchmarking tool that displays a leaderboard for genomic embedding models. It provides a centralized platform for evaluating and comparing model performance on genomic embedding tasks, helping researchers and developers assess how well their models handle genomic data and identify areas for improvement.
• Real-time leaderboard updates to track model performance
• Detailed accuracy metrics for comprehensive evaluation
• Visualizations to compare model performance side-by-side
• Support for multiple model architectures
• Filtering options to focus on specific datasets or metrics
• API access for programmatic integration with custom workflows (see the sketch after this list)
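
As an illustration of the API-access and filtering features above, here is a minimal sketch of pulling leaderboard rows and ranking models on one metric. The endpoint URL, the field names (task, model, accuracy), and the metric itself are hypothetical stand-ins, not a documented DGEB API; adapt them to whatever the platform actually exposes.

```python
# Minimal sketch: fetch leaderboard rows from a JSON endpoint, filter to one
# task, and rank models by an accuracy-style metric. The URL and field names
# below are hypothetical, not a documented DGEB API.
import requests

LEADERBOARD_URL = "https://example.com/api/dgeb/leaderboard"  # hypothetical endpoint

rows = requests.get(LEADERBOARD_URL, timeout=30).json()  # assume a list of dicts

# Keep only rows for a single task, then sort by the chosen metric (descending).
task_rows = [r for r in rows if r.get("task") == "classification"]
task_rows.sort(key=lambda r: r.get("accuracy", 0.0), reverse=True)

# Print a small top-5 comparison.
for rank, row in enumerate(task_rows[:5], start=1):
    print(f"{rank}. {row.get('model')}: accuracy={row.get('accuracy', 0.0):.3f}")
```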
What is the purpose of DGEB?
DGEB is designed to benchmark and compare model performance on genomic embedding tasks, helping users identify the best-performing models for their needs.
How often is the leaderboard updated?
The leaderboard is updated regularly to reflect the latest model submissions and performance metrics.
Can I submit my own model to DGEB?
Yes, DGEB typically allows users to submit their models for evaluation. Check the platform’s documentation for specific requirements and submission guidelines.
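
Before submitting, it can help to confirm locally that a checkpoint produces sequence-level embeddings at all. Below is a minimal sketch using the Hugging Face transformers library; the example checkpoint (facebook/esm2_t6_8M_UR50D, a small protein language model on the Hub) and the mean-pooling step are illustrative choices, not DGEB submission requirements.

```python
# Minimal sketch: compute one embedding vector per sequence with a
# transformers-compatible checkpoint. The model name and pooling strategy are
# illustrative assumptions, not DGEB requirements.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "facebook/esm2_t6_8M_UR50D"  # example protein LM on the HF Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]  # toy input sequence
batch = tokenizer(sequences, return_tensors="pt", padding=True)

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # shape: (batch, tokens, dim)

# Mean-pool over non-padding tokens to get one vector per sequence.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (1, hidden_dim)
```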