Display genomic embedding leaderboard
DGEB is a benchmarking tool that displays a genomic embedding leaderboard. It provides a centralized platform for evaluating and comparing the performance of models on genomic embedding tasks. DGEB helps researchers and developers assess how well their models handle genomic data and identify areas for improvement.
• Real-time leaderboard updates to track model performance
• Detailed accuracy metrics for comprehensive evaluation
• Visualizations to compare model performance side-by-side
• Support for multiple model architectures
• Filtering options to focus on specific datasets or metrics
• API access for seamless integration with custom workflows (see the example sketch after this list)
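As a rough illustration of how the API access and filtering options might be used together, the sketch below pulls leaderboard entries over HTTP and filters them locally. The endpoint URL, JSON field names, dataset name, and metric name are assumptions for illustration only, not documented parts of DGEB; consult the platform's own API documentation for the real interface.

```python
# Hypothetical sketch: fetch leaderboard rows and filter them by dataset and metric.
# LEADERBOARD_URL and the field names ("dataset", "model", "accuracy") are assumptions.
import requests

LEADERBOARD_URL = "https://example.org/dgeb/api/leaderboard"  # placeholder URL


def fetch_leaderboard(dataset: str, metric: str = "accuracy"):
    """Fetch all leaderboard rows, keep those for one dataset, and sort by a metric."""
    response = requests.get(LEADERBOARD_URL, timeout=30)
    response.raise_for_status()
    rows = response.json()  # assumed to be a list of dicts, one per model entry
    filtered = [row for row in rows if row.get("dataset") == dataset]
    return sorted(filtered, key=lambda row: row.get(metric, 0.0), reverse=True)


if __name__ == "__main__":
    # "human_genome_regions" is an illustrative dataset name, not a known DGEB task.
    for entry in fetch_leaderboard("human_genome_regions")[:5]:
        print(entry.get("model"), entry.get("accuracy"))
```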
What is the purpose of DGEB?
DGEB is designed to benchmark and compare the performance of models in genomic embedding tasks, helping users identify the best-performing models for their needs.
How often is the leaderboard updated?
The leaderboard is updated regularly to reflect the latest model submissions and performance metrics.
Can I submit my own model to DGEB?
Yes, DGEB typically allows users to submit their models for evaluation. Check the platform's documentation for specific requirements and submission guidelines.
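For readers who want a feel for what a submission could look like, here is a minimal sketch assuming the platform accepts model metadata over an HTTP endpoint. The URL and payload fields are hypothetical placeholders, not DGEB's documented submission API; the actual submission process may instead go through a web form or a pull request, so treat this purely as an illustration.

```python
# Hypothetical sketch of a model submission; SUBMIT_URL and the payload fields
# are illustrative assumptions, not DGEB's documented submission API.
import requests

SUBMIT_URL = "https://example.org/dgeb/api/submit"  # placeholder URL

payload = {
    "model_name": "my-lab/genomic-embedder-v1",   # illustrative model identifier
    "revision": "main",                           # model version to evaluate
    "embedding_dim": 768,                         # example embedding dimension
    "contact_email": "researcher@example.org",    # placeholder contact address
}

response = requests.post(SUBMIT_URL, json=payload, timeout=30)
response.raise_for_status()
print("Submission accepted:", response.json())
```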