Explore and submit NER models
Submit evaluations for speaker tagging and view the leaderboard
Search for tagged characters in Animagine datasets
Generate a data report using the pandas-profiling tool
Analyze weekly and daily trader performance in Olas Predict
Visualize warming land sites around the world
Generate detailed data reports
Gather data from websites
Generate synthetic dataset files (JSON Lines)
Simulate causal effects and determine variable control
Filter and view AI model leaderboard data
Display competition information and manage submissions
Migrate datasets from GitHub or Kaggle to Hugging Face Hub
The Clinical NER Leaderboard is a platform designed to evaluate and compare Named Entity Recognition (NER) models specifically tailored for clinical and medical text data. It provides a centralized hub for researchers and developers to submit their models, benchmark performance, and explore state-of-the-art solutions in clinical NLP.
• Model Comparison: Allows users to compare performance metrics of different NER models on clinical datasets.
• Benchmark Scores: Provides standardized benchmark scores for clinical NER tasks, enabling apples-to-apples comparisons (a scoring sketch follows this list).
• Interactive Visualization: Offers dynamic visualizations to explore model performance across different entity types and datasets.
• Model Submission: Enables researchers to submit their own NER models for evaluation and inclusion in the leaderboard.
• Community Engagement: Facilitates discussion and collaboration through forums and shared resources.
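The benchmark scores mentioned above are entity-level precision, recall, and F1 of the kind typically reported for NER. The sketch below shows how such scores can be computed with the seqeval library; the BIO tag sequences and entity types are invented purely for illustration and are not taken from the leaderboard.

```python
# Minimal sketch: entity-level scoring for NER, assuming BIO-tagged
# predictions and references. The tag sequences below are invented examples.
from seqeval.metrics import classification_report, f1_score

# One list of tags per sentence; entity types (DISEASE, MEDICATION, SYMPTOM) are illustrative.
references = [
    ["O", "B-DISEASE", "I-DISEASE", "O", "B-MEDICATION"],
    ["B-SYMPTOM", "O", "O", "B-DISEASE", "O"],
]
predictions = [
    ["O", "B-DISEASE", "I-DISEASE", "O", "O"],   # missed the medication
    ["B-SYMPTOM", "O", "O", "B-DISEASE", "O"],   # exact match
]

# seqeval matches whole entity spans, not individual tokens,
# which is the usual basis for NER leaderboard scores.
print(f"Entity-level F1: {f1_score(references, predictions):.3f}")
print(classification_report(references, predictions))
```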
What is Named Entity Recognition (NER) in the clinical context?
NER in clinical contexts involves identifying and categorizing entities such as diseases, medications, symptoms, and genes from unstructured clinical text.
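For a concrete picture, the sketch below runs an off-the-shelf token-classification pipeline from the transformers library over a short clinical sentence. The model ID is only an example of a publicly available biomedical NER checkpoint, not one of the leaderboard's evaluated models; substitute any clinical NER model you want to inspect.

```python
# Minimal sketch: tagging clinical entities with a Hugging Face pipeline.
# The model ID below is an assumption (any biomedical/clinical NER checkpoint works).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="d4data/biomedical-ner-all",   # example checkpoint, not an endorsement
    aggregation_strategy="simple",       # merge word pieces into whole entities
)

text = "Patient reports chest pain and was started on aspirin 81 mg daily."
for entity in ner(text):
    # Each result carries the entity group, the matched text span, and a confidence score.
    print(f'{entity["entity_group"]:<15} {entity["word"]:<20} {entity["score"]:.2f}')
```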
How can I submit my NER model to the leaderboard?
Visit the platform's submission page, follow the provided guidelines, and upload your model along with required documentation.
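The exact submission flow is defined by the platform's own guidelines, but leaderboards of this kind usually expect the model to be publicly available on the Hugging Face Hub first. The sketch below shows that publishing step with the huggingface_hub client; the repository name and local folder path are placeholders.

```python
# Minimal sketch: publishing a trained NER model to the Hugging Face Hub so it can
# be referenced in a leaderboard submission. Repo ID and folder path are placeholders.
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in, e.g. via `huggingface-cli login`

repo_id = "your-username/clinical-ner-model"   # placeholder repository name
api.create_repo(repo_id=repo_id, exist_ok=True)

# Upload the saved model directory (config, weights, tokenizer files).
api.upload_folder(
    folder_path="./clinical-ner-model",        # placeholder local path
    repo_id=repo_id,
)
```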
What datasets are used for benchmarking on the leaderboard?
The leaderboard uses standardized clinical datasets, including publicly available sources like MIMIC and i2b2, to ensure fair and consistent evaluations.
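MIMIC and i2b2 themselves are distributed under data use agreements, so they cannot simply be downloaded from the Hub. As a stand-in, the sketch below loads an openly available biomedical NER dataset with the datasets library to show the token/tag format such benchmarks typically use; ncbi_disease is chosen only because it is public, not because the leaderboard necessarily uses it.

```python
# Minimal sketch: inspecting a BIO-style biomedical NER dataset with the `datasets`
# library. "ncbi_disease" is an openly available stand-in; MIMIC and i2b2 require
# separate data use agreements and are not fetched this way.
from datasets import load_dataset

dataset = load_dataset("ncbi_disease", split="train")

example = dataset[0]
label_names = dataset.features["ner_tags"].feature.names   # e.g. O, B-Disease, I-Disease

# Print each token with its BIO tag to show the benchmark format.
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token:<15} {label_names[tag_id]}")
```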