Check text for moderation flags
Moderation is a tool that analyzes text and checks it for moderation flags. It helps users identify potentially sensitive, inappropriate, or unwanted content in text inputs, making it useful for enforcing content policies, maintaining safety standards, and pre-screening user-generated content.
• Automated scanning: Quickly reviews text for predefined flags such as profanity, hate speech, or spam.
• Configurable flags: Allows users to customize the types of content to monitor based on their specific needs.
• Language support: Compatible with multiple languages to cater to diverse audiences.
• Detailed reporting: Provides clear results highlighting flagged content for easy review.
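The scanning and reporting flow above can be sketched in a few lines. This is a minimal illustration only: the `FLAGS` categories, keyword patterns, and `scan` function are hypothetical stand-ins, not the tool's actual detection logic.

```python
import re

# Hypothetical flag categories mapped to example keyword patterns.
# The real Moderation tool's flags and detection rules are configurable
# and are not reproduced here.
FLAGS = {
    "profanity": [r"\bdamn\b", r"\bhell\b"],
    "spam": [r"\bbuy now\b", r"\bfree money\b"],
}

def scan(text: str, flags: dict[str, list[str]] = FLAGS) -> dict[str, list[str]]:
    """Return a report mapping each triggered flag to the substrings it matched."""
    report: dict[str, list[str]] = {}
    for flag, patterns in flags.items():
        hits: list[str] = []
        for pattern in patterns:
            hits.extend(m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE))
        if hits:
            report[flag] = hits  # flagged content, ready for review
    return report

print(scan("Buy now and get free money!"))
```

A clean input returns an empty report, while flagged phrases are listed under their category so a reviewer can see exactly what triggered each flag.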
What types of content can Moderation check?
Moderation can check for profanity, hate speech, spam, and other predefined or custom flags depending on user settings.
Is Moderation available in multiple languages?
Yes, Moderation supports multiple languages, making it suitable for analyzing content from diverse regions and audiences.
Can I customize the moderation flags?
Absolutely! Users can define custom flags or modify existing ones to align with their specific moderation needs.
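Customization might look like the sketch below, where a user-defined flag sits alongside a built-in style one. The flag names, patterns, and `scan` helper are all illustrative assumptions, not the tool's real configuration API.

```python
import re

def scan(text: str, flags: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return each triggered flag with its matched substrings (illustrative only)."""
    report: dict[str, list[str]] = {}
    for flag, patterns in flags.items():
        hits = [m.group(0)
                for p in patterns
                for m in re.finditer(p, text, re.IGNORECASE)]
        if hits:
            report[flag] = hits
    return report

# A user-defined flag for internal codenames, next to a spam-style flag.
custom_flags = {
    "spam": [r"\bbuy now\b"],
    "codename_leak": [r"\bproject\s+atlas\b"],  # hypothetical custom rule
}

print(scan("Details on Project Atlas: buy now!", custom_flags))
```

Defining flags as plain pattern lists keeps them easy to add, remove, or tune per deployment.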