
# 🚫 NSFW Content Detection

The `/nsfw-filter` endpoint analyzes an image for potentially inappropriate or sensitive content. It classifies each image as either safe or not safe for work (NSFW), helping developers build safer applications for all audiences.


## 📥 Endpoint: POST `/nsfw-filter`

Base URL: `https://api.visionary.ai/v1/nsfw-filter`


๐Ÿ” Headersโ€‹

KeyValueRequired
AuthorizationBearer YOUR_API_KEYโœ…
Content-Typeapplication/json or multipart/form-dataโœ…

## 📸 Request Body

### Option 1 – JSON (Base64)

```json
{
  "image": "data:image/jpeg;base64,/9j/4AAQSkZ..."
}
```

### Option 2 – Multipart

Use `image` as the form field name.
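For the JSON option, the `data:` URI can be built from raw image bytes before sending. A minimal Python sketch; the JSON shape comes from this page, while the helper name and MIME-type handling are illustrative:

```python
import base64
import json

def build_payload(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Wrap raw image bytes in the base64 data-URI JSON body shown above."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"image": f"data:{mime};base64,{encoded}"})

# Illustrative only: a fake JPEG header stands in for real file contents.
body = build_payload(b"\xff\xd8\xff\xe0fake-jpeg-bytes")
print(json.loads(body)["image"][:22])  # data:image/jpeg;base64
```

Send the resulting string as the request body with `Content-Type: application/json`.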


## 🧪 Example (curl – multipart)

```bash
curl -X POST https://api.visionary.ai/v1/nsfw-filter \
  -H "Authorization: Bearer demo-api-key" \
  -F "image=@/path/to/image.jpg"
```

## ✅ Response

```json
{
  "nsfw_score": 0.87,
  "classification": "nsfw"
}
```
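Callers typically read both fields together. A hedged sketch of parsing the response shown above; the JSON shape is from this page, but the helper name is illustrative:

```python
import json

def parse_nsfw_response(raw: str) -> tuple[float, bool]:
    """Extract the score and an is-NSFW flag from an /nsfw-filter response."""
    data = json.loads(raw)
    return data["nsfw_score"], data["classification"] == "nsfw"

score, is_nsfw = parse_nsfw_response('{"nsfw_score": 0.87, "classification": "nsfw"}')
print(score, is_nsfw)  # 0.87 True
```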

## 🧠 Fields

| Field | Type | Description |
| --- | --- | --- |
| `nsfw_score` | float | Probability of NSFW content, between 0 and 1 |
| `classification` | string | Either `"safe"` or `"nsfw"` |

โš ๏ธ Threshold Logicโ€‹

  • nsfw_score < 0.50 โ†’ "safe"

  • nsfw_score โ‰ฅ 0.50 โ†’ "nsfw"

Threshold can be customized at integration level.
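The rule above maps directly to a single comparison. A minimal sketch with the default 0.50 cutoff; the `threshold` parameter is an assumption, since this page does not document how a custom threshold is configured:

```python
def classify(nsfw_score: float, threshold: float = 0.50) -> str:
    """Apply the documented rule: a score >= threshold is classified "nsfw"."""
    return "nsfw" if nsfw_score >= threshold else "safe"

print(classify(0.87))                  # nsfw
print(classify(0.30))                  # safe
print(classify(0.50))                  # nsfw (the boundary is inclusive)
print(classify(0.87, threshold=0.95))  # safe, under a stricter custom cutoff
```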


## 🚫 Error Responses

| Code | Message | Cause |
| --- | --- | --- |
| 400 | `"Missing image field"` | No file or base64 string was sent |
| 401 | `"Unauthorized"` | Invalid or missing API key |
| 422 | `"Image unreadable or unsupported"` | File corrupted or format unsupported |
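Client code can branch on the status codes above. A hedged sketch; the action names and the mapping function are illustrative, not part of the API:

```python
def handle_nsfw_error(status: int) -> str:
    """Map the documented /nsfw-filter error codes to a caller-side action."""
    if status == 400:
        return "fix-request"    # resend with an `image` file or base64 string
    if status == 401:
        return "check-api-key"  # correct or refresh the Authorization header
    if status == 422:
        return "replace-image"  # re-encode the file or use a supported format
    return "unexpected"         # status not documented on this page

print(handle_nsfw_error(401))  # check-api-key
```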

## 📌 Notes

- This model uses an open-source NSFW classifier fine-tuned for general image filtering.
- This system does not store or log any images.
- Use responsibly and comply with local moderation laws and app store policies.


## 🔗 Next Endpoint

➡️ Explore `/metadata` to extract EXIF and technical data from an image.