Explore and submit NER models
The Clinical NER Leaderboard is a platform designed to evaluate and compare Named Entity Recognition (NER) models specifically tailored for clinical and medical text data. It provides a centralized hub for researchers and developers to submit their models, benchmark performance, and explore state-of-the-art solutions in clinical NLP.
• Model Comparison: Allows users to compare performance metrics of different NER models on clinical datasets.
• Benchmark Scores: Provides standardized benchmark scores for clinical NER tasks, enabling apples-to-apples comparisons.
• Interactive Visualization: Offers dynamic visualizations to explore model performance across different entity types and datasets.
• Model Submission: Enables researchers to submit their own NER models for evaluation and inclusion in the leaderboard.
• Community Engagement: Facilitates discussion and collaboration through forums and shared resources.
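Benchmark scores for NER are usually entity-level precision, recall, and F1, computed by exact matching of predicted (type, span) tuples against the gold annotations. A minimal sketch of that computation (the helper function and the example spans are illustrative, not the leaderboard's actual scoring code):

```python
def entity_f1(gold, pred):
    """Entity-level precision/recall/F1 via exact (type, start, end) match."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # predictions that exactly match a gold entity
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold vs. predicted entities: (type, start_char, end_char)
gold = [("DISEASE", 0, 8), ("DRUG", 24, 33), ("SYMPTOM", 40, 48)]
pred = [("DISEASE", 0, 8), ("DRUG", 24, 33), ("SYMPTOM", 41, 48)]  # boundary error
p, r, f = entity_f1(gold, pred)  # the boundary error counts as both FP and FN
```

Note that under exact-match scoring a single-character boundary error is penalized as both a false positive and a false negative, which is why some evaluations also report relaxed (overlap-based) scores.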
What is Named Entity Recognition (NER) in the clinical context?
NER in clinical contexts involves identifying and categorizing entities such as diseases, medications, symptoms, and genes from unstructured clinical text.
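As a toy illustration of what such output looks like (a simple lexicon lookup, not a trained model — real clinical NER systems use statistical or neural sequence labelers, and the lexicon below is invented for the example):

```python
# Illustrative lexicon mapping surface forms to entity types.
LEXICON = {
    "pneumonia": "DISEASE",
    "amoxicillin": "MEDICATION",
    "fever": "SYMPTOM",
}

def tag_entities(text):
    """Return (token, entity_type) pairs found via lexicon lookup."""
    entities = []
    for token in text.lower().split():
        token = token.strip(".,;:")  # drop trailing punctuation
        if token in LEXICON:
            entities.append((token, LEXICON[token]))
    return entities

note = "Patient with pneumonia and fever, started on amoxicillin."
print(tag_entities(note))
# → [('pneumonia', 'DISEASE'), ('fever', 'SYMPTOM'), ('amoxicillin', 'MEDICATION')]
```

A real model would additionally resolve ambiguity from context and recover entity boundaries spanning multiple tokens, which a lookup table cannot do.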
How can I submit my NER model to the leaderboard?
Visit the platform's submission page, follow the guidelines provided, and upload your model along with the required documentation.
What datasets are used for benchmarking on the leaderboard?
The leaderboard uses standardized clinical datasets, including publicly available sources like MIMIC and i2b2, to ensure fair and consistent evaluations.