• Detect objects in images using uploaded files
• Analyze images to find tags and labels
• Identify NSFW content in images
• Identify objects in images
• Analyze images and highlight detected objects
• Analyze images to identify tags and ratings
• Detect people with masks in images and videos
• Test image classification
• Identify Not Safe For Work content
• Detect objects in images from URLs or uploads
• Detect objects in images using YOLO
• Classify images as SFW or NSFW
• Identify objects in images based on text descriptions
Transformers.js is a JavaScript library for running machine-learning models directly in the browser or in Node.js. Applied to images, it can run classification models that flag harmful or offensive content as well as models that detect and label objects, making it a practical tool for content moderation and object detection tasks.
• Image File Support: Processes uploaded image files for content analysis.
• Multiple Model Support: Utilizes state-of-the-art AI models for accurate detection.
• Harmful Content Detection: Identifies offensive or inappropriate content within images.
• Asynchronous Processing: Enables non-blocking image analysis.
• Browser Compatibility: Works seamlessly in modern web browsers (see the sketch below).
• Easy Integration: Simple API for developers to implement in web applications.
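As a minimal sketch of in-browser use, the library can be loaded as an ES module straight from a CDN; the CDN URL, model id, and image URL below are illustrative placeholders rather than fixed choices.

// Inside a <script type="module"> block on the page (CDN URL is illustrative).
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';

// Build an image-classification pipeline; the model id is an example of a
// Transformers.js-compatible checkpoint hosted on the Hugging Face Hub.
const classifier = await pipeline('image-classification', 'Xenova/vit-base-patch16-224');

// Inference is asynchronous, so the page stays responsive while the model runs.
const predictions = await classifier('https://example.com/photo.jpg');
console.log(predictions); // e.g. [{ label: '...', score: 0.98 }, ...]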
npm install @huggingface/transformers
// Transformers.js exposes a pipeline() factory rather than task-specific classes.
import { pipeline } from '@huggingface/transformers';

// The model id is a placeholder for any Transformers.js-compatible checkpoint
// on the Hugging Face Hub (e.g. swap in an NSFW / harmful-content classifier).
const classifier = await pipeline('image-classification', 'Xenova/vit-base-patch16-224');

try {
  // Accepts an image URL, a file path in Node.js, or an object URL for an uploaded File.
  const results = await classifier('https://example.com/image.jpg');
  // Handle detection results: an array of { label, score } predictions.
} catch (error) {
  // Handle model-loading or inference errors.
}
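For the uploaded-file scenario, one possible sketch (assuming a hypothetical <input type="file" id="image-input"> element on the page) is to hand the pipeline a temporary object URL created from the selected File:

import { pipeline } from '@huggingface/transformers';

// Same pipeline as above; the model id remains an illustrative placeholder.
const classifier = await pipeline('image-classification', 'Xenova/vit-base-patch16-224');

// Hypothetical file input: <input type="file" id="image-input">
const input = document.getElementById('image-input');

input.addEventListener('change', async () => {
  const file = input.files[0];
  if (!file) return;

  // Expose the uploaded File to the pipeline through a temporary object URL.
  const objectUrl = URL.createObjectURL(file);
  try {
    const predictions = await classifier(objectUrl);
    console.log(predictions); // [{ label, score }, ...]
  } catch (error) {
    console.error('Image analysis failed:', error);
  } finally {
    URL.revokeObjectURL(objectUrl); // release the temporary URL
  }
});

Creating the pipeline once and reusing it across uploads avoids re-downloading the model weights for every image.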
1. What types of images does Transformers.js support?
Transformers.js supports JPEG, PNG, and BMP image formats.
2. Is Transformers.js suitable for large-scale applications?
Yes. Inference runs asynchronously, so image analysis does not block the rest of the application, and several images can be queued or processed concurrently (see the sketch after this FAQ).
3. Can I customize the detection models?
Yes. The pipeline() factory accepts any compatible model from the Hugging Face Hub, so you can supply a custom checkpoint or switch between tasks such as 'image-classification' and 'object-detection' based on your needs.
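As a rough illustration of the last two answers, the sketch below switches the pipeline to the 'object-detection' task and analyzes several images concurrently; the image URLs are placeholders, and 'Xenova/detr-resnet-50' is assumed to be a Transformers.js-compatible detection checkpoint.

import { pipeline } from '@huggingface/transformers';

// 'object-detection' is a pipeline task; the checkpoint name is an example of a
// Transformers.js-compatible detection model on the Hugging Face Hub.
const detector = await pipeline('object-detection', 'Xenova/detr-resnet-50');

// Placeholder URLs; in practice these might come from uploads or a moderation queue.
const imageUrls = [
  'https://example.com/street.jpg',
  'https://example.com/office.jpg',
];

// The asynchronous API lets several analyses run concurrently without blocking.
const results = await Promise.all(imageUrls.map((url) => detector(url)));
results.forEach((detections, i) => {
  // Each detection carries a label, a confidence score, and a bounding box.
  console.log(imageUrls[i], detections);
});

Because the model is loaded once and reused, the per-image cost is limited to inference rather than repeated downloads.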