Moderation and Penalty#
Overview#
We apply a two-layer content moderation system to balance content safety with API availability:

Layer 1 (Pre-submission): Our API screens prompts and images to block clearly prohibited content (explicit sexual material, graphic violence, etc.) before it reaches the upstream provider. Blocked requests are fully refunded — no cost to you.
Layer 2 (Post-submission): Some tasks may still be rejected by ByteDance's content review system, which enforces stricter and less predictable guidelines. When this happens, a small content violation surcharge is applied to the submitting account. Repeated failed submissions degrade API availability for all users, and this nominal fee helps discourage bulk submissions of borderline content.
Layer 1: API-Level Content Moderation (Pre-submission)#
Trigger: Prompt flagged by our moderation system (categories: sexual, violence, harassment, hate, self-harm, illicit)
Penalty: Full credit refund (no charge)
Purpose: Block clearly prohibited content before consuming upstream resources
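If you want to reduce Layer-1 rejections, you can run a rough pre-check on your own side before submitting. The sketch below uses simple banned-word matching, one of the techniques Layer 1 employs (the other being the OpenAI Moderation API, per the tips section); the word list and function name here are illustrative placeholders, not the service's actual filter.

```python
# Illustrative local pre-screen, loosely mirroring Layer 1's banned-word
# matching. The word list is a placeholder, NOT the service's real list.
BANNED_WORDS = {"gore", "beheading"}  # hypothetical example terms

def prescreen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the local banned-word check."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return words.isdisjoint(BANNED_WORDS)
```

A prompt that fails a check like this would almost certainly be blocked (and refunded) by Layer 1; catching it locally just saves a round trip.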
Layer 2: Upstream Content Review (Post-submission)#
Trigger: Task rejected by ByteDance's content review system after submission to Seedance
Penalty: Tiered credit surcharge based on daily violation count per account:
| Daily Violations | Credit Surcharge |
|---|---|
| < 20 | 1% of task cost |
| 20 – 100 | 5% of task cost |
| > 100 | 10% of task cost |
Daily violation counter resets at midnight UTC
The surcharge is deducted from the refunded credits (e.g., for a 1% surcharge, you receive 99% of the task cost back)
Notes#
The moderation check uses a fail-open strategy: if the moderation service is temporarily unavailable, your task will proceed normally.
Error code for content violations: 10003 (Invalid Request)
Violation details including triggered categories are included in the task logs.
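Because a content violation is deterministic for a given prompt, client code should not blindly retry on error 10003 — each resubmission accrues another Layer-2 surcharge. A minimal sketch, assuming an error payload with a numeric `code` field (the exact response shape is an assumption, not documented here):

```python
CONTENT_VIOLATION_CODE = 10003  # "Invalid Request" per this page

def is_content_violation(error: dict) -> bool:
    """Check whether an error payload (hypothetical shape) is a content violation."""
    return error.get("code") == CONTENT_VIOLATION_CODE

def should_retry(error: dict) -> bool:
    """Content violations fail deterministically; only other errors may be transient."""
    return not is_content_violation(error)
```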
Content Moderation Tips#
Understanding what each layer catches can help you avoid unnecessary rejections and surcharges.

Layer 1 (Our Filter)#
Layer 1 uses banned-word matching and the OpenAI Moderation API. It blocks common-sense unsafe content — explicit sexual material, graphic violence, and similar categories. If your prompt would be flagged by any standard moderation tool, it will be caught here (and fully refunded).

Layer 2 (ByteDance Content Review) — What to Watch For#
ByteDance's review system is significantly stricter than typical content moderation. Key areas that frequently cause rejections:

1. Real person depictions — Any content featuring real people is prohibited in the current product, even more restrictively than SORA. In some cases, using AI-generated faces as a workaround may pass review.
2. Copyrighted or branded content — References to brands, franchises, or trademarked properties (e.g., "Hollywood Movie", studio names, game titles) are likely to be rejected. ByteDance appears to run a brand-word filter on prompts, making this the single trickiest category to navigate. Rephrase with generic descriptions instead.
3. Political content — Any politically sensitive material will be rejected.
4. Visual content review — You may occasionally encounter a generation_failed error (RUN_ERROR with no error message in the response). Based on our observations, this likely means Seedance performed a visual review of the frames — either during or after video generation — and determined the content violates its guidelines, even if the prompt itself passed all checks. Because this cannot be confirmed with certainty (Seedance never discloses the specific reason), this error is not subject to any surcharge. However, if you see it repeatedly, it's a strong signal that your prompt or input imagery is producing visuals that trigger content review. Adjusting your prompt or source material may help improve your success rate.
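One practical way to act on the "repeated RUN_ERROR" signal from point 4 is to count these failures per prompt and flag a prompt for revision once they recur. A minimal sketch; the class name, threshold, and structure are all illustrative, not part of the API:

```python
from collections import Counter

class RunErrorTracker:
    """Track RUN_ERROR failures per prompt. Repeated failures for the same
    prompt suggest its visuals are tripping Seedance's visual review."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold      # failures before flagging (illustrative)
        self.failures = Counter()       # prompt -> RUN_ERROR count

    def record_failure(self, prompt: str) -> bool:
        """Record one RUN_ERROR; return True if the prompt should be revised."""
        self.failures[prompt] += 1
        return self.failures[prompt] >= self.threshold
```

Since these errors carry no surcharge, tracking them costs nothing extra; the payoff is knowing when to rework a prompt instead of resubmitting it unchanged.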
A Note on This Policy#
We recognize that this moderation schema adds friction. It is not our preference to impose it — but a restricted service is better than no service at all. We continue to work on minimizing false positives and making the experience as smooth as possible.

Modified at 2026-03-14 12:11:25