Browse and submit evaluations for CaselawQA benchmarks
CaselawQA leaderboard (WIP) is a tool for browsing and submitting evaluations on the CaselawQA benchmarks. It serves as a platform for tracking and comparing the performance of different models on legal question-answering tasks. The leaderboard is a work in progress, and updates to its functionality and user experience are ongoing.
• Benchmark Browsing: Explore and view performance metrics for various models on the CaselawQA benchmarks.
• Submission Portal: Easily submit your model's results for evaluation.
• Comparison Tools: Compare model performance across different metrics and tasks.
• Filtering Options: Narrow down results by specific criteria such as model type or benchmark version; a local filtering sketch follows this list.
• Version Tracking: Track changes in model performance over time.
• Community Sharing: Share insights and discuss results with other users.
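To make the filtering and comparison features concrete, here is a minimal sketch of doing the same thing locally with pandas on an exported results file. The file name caselawqa_results.csv and the columns model, model_type, benchmark_version, and accuracy are assumptions for illustration; the leaderboard's actual data schema is not documented here.

```python
# Hypothetical sketch: filter and rank leaderboard-style results locally.
# The CSV file and column names are assumptions, not the real schema.
import pandas as pd

results = pd.read_csv("caselawqa_results.csv")  # hypothetical export of the leaderboard

# Keep only open-weight models evaluated on a given benchmark version.
filtered = results[
    (results["model_type"] == "open") & (results["benchmark_version"] == "v1")
]

# Rank the remaining models by accuracy, best first.
print(filtered.sort_values("accuracy", ascending=False)[["model", "accuracy"]])
```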
What is the purpose of the CaselawQA leaderboard?
The leaderboard is designed to facilitate model evaluation and comparison for legal question-answering tasks, helping researchers and developers track progress in the field.
Do I need specific expertise to use the leaderboard?
While some technical knowledge is helpful, the platform is designed to be accessible to both experts and newcomers. Detailed instructions and guidelines are provided for submissions.
How are submissions evaluated?
Submissions are scored against the predefined metrics for the CaselawQA benchmarks, ensuring consistent and fair comparisons. Results are updated periodically.
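As an illustration of the kind of predefined metric such a benchmark might use, the sketch below computes simple exact-match accuracy against gold labels. This is an assumption for illustration, not the leaderboard's actual evaluation code; the function name and data format are hypothetical.

```python
# Hypothetical sketch: exact-match accuracy over predicted vs. gold labels.
def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of questions answered with the exact gold label."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must be the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Example: three yes/no legal questions, two answered correctly.
print(accuracy(["yes", "no", "yes"], ["yes", "no", "no"]))  # ~0.667
```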