You're not hiring one AI,
you're hiring a fine-tuned AI team.
From story discovery and script creation to character design and video generation, from voice production and editing to TikTok publishing and analytics — every stage has a specialist. They're not working in silos; they collaborate like a real production crew: discussing in roundtables, executing in parallel, cross-validating quality.
AI Roundtable — Multi-Agent Discussion Mechanism
This isn't 11 independent AI tools stitched together. Your AI crew collaborates through the "roundtable" mechanism: when creative decisions need to be made, relevant Agents gather to discuss. The screenwriter pushes for emotional depth, the director pursues visual impact, the QA officer flags compliance risks, and the producer synthesizes opinions into consensus. You see the discussion and decision, you approve, and the team keeps moving forward.
Phase 1: Pre-Production
Agents: Producer, Scout, Writer
Producer
Orchestrator
The maestro of your entire AI crew. The Producer parses your natural language requirements and automatically builds a directed acyclic graph (DAG) to define task dependencies and execution order. It manages the entire production timeline, coordinates 10 Agents in parallel workflows, enforces approval gates and quality checkpoints, and maintains task state across sessions. The Producer also handles cross-Agent model routing decisions, allocating the optimal model combination to each Agent based on task type, cost budget, and quality requirements.
- ›Natural language parsing → Automatic DAG construction
- ›End-to-end timeline management and parallel scheduling
- ›Cross-Agent multi-model routing decisions (cost/quality/speed trade-offs)
- ›Approval gate execution (story selection, script, final cuts — 3 gates)
- ›Cross-session state persistence
- ›Automatic task retry with exponential backoff
Task graph, real-time progress dashboard, approval notifications
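The Producer's two core mechanics, topological scheduling over the task DAG and retry with exponential backoff, can be sketched in a few lines of Python. The task names and retry policy below are illustrative, not the actual implementation:

```python
import time
from collections import deque

def topo_order(deps):
    """Return a valid execution order for a dependency map
    {task: [prerequisites]} via Kahn's algorithm; raises on cycles."""
    indegree = {t: len(p) for t, p in deps.items()}
    dependents = {t: [] for t in deps}
    for t, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

def run_with_backoff(task, attempts=4, base=1.0):
    """Retry a flaky task with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base * 2 ** attempt)  # jitter could be added here

# Hypothetical pipeline: script depends on story, video on storyboard + assets.
pipeline = {
    "story": [],
    "script": ["story"],
    "storyboard": ["script"],
    "assets": ["script"],
    "video": ["storyboard", "assets"],
}
```

Independent branches of the DAG (here, storyboard and assets) are the ones the Producer can dispatch in parallel.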
Scout
Market Intelligence
Your market intelligence expert. The Scout connects to real-time drama charts, analyzes trending genres, and applies a proprietary 5-dimension scoring model (layering depth, 3-second hook strength, emotion density, cultural leverage, value clarity) to evaluate story potential. It delivers data-driven ROI predictions that weigh genre, region, season, and budget. Every production cycle begins with the Scout's insights driving strategy.
- ›Real-time drama chart data analysis
- ›Trending genre identification and tracking
- ›5-dimension story scoring model (depth/hook/emotion/culture/values)
- ›ROI prediction (genre × region × season × budget)
- ›Competitive analysis and differentiation recommendations
- ›Data flywheel learning loop participation
3 data-driven story proposals (with scores, ROI predictions, competitive analysis)
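A 5-dimension composite score of this kind is easy to picture as a weighted sum. The weights below are illustrative assumptions; the proprietary model's actual weights are not public:

```python
def story_score(depth, hook, emotion, culture, values, weights=None):
    """Weighted 0-100 composite over the five scoring dimensions.
    Inputs are 0-10 ratings; default weights are illustrative only."""
    w = weights or {"depth": 0.2, "hook": 0.3, "emotion": 0.2,
                    "culture": 0.15, "values": 0.15}
    dims = {"depth": depth, "hook": hook, "emotion": emotion,
            "culture": culture, "values": values}
    return round(sum(w[k] * dims[k] for k in w) * 10, 1)
```

Ranking candidate stories by this score (and attaching ROI context per genre/region/season) yields the kind of shortlist the Scout delivers.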
Writer
Screenwriter
Short-form drama narrative expert. The Writer masters TikTok vertical storytelling rhythm — 3-second visual hook → 10-second suspense build → 30-second twist reveal. Creates complete 30-32 episode × 90-second scripts with precise character arcs, emotion curves, and per-episode paywall points. Handles multiple genres (romance, CEO drama, thriller, revenge, isekai) and delivers dual-track English + Chinese dialogue options for global reach.
- ›Complete 30-32 episode script creation
- ›TikTok hook formula design (3s/10s/30s rhythm mastery)
- ›Character arc and emotion curve design
- ›Per-episode paywall and cliffhanger placement
- ›Multi-genre adaptation (romance/CEO/thriller/revenge/isekai)
- ›Bilingual output (English + Chinese dialogue tracks)
Complete 30-episode script, episode outlines, character profiles, emotion curves
Phase 2: Production
Agents: Storyboarder, Art Director, Director, Voice Artist, Editor, QA
Storyboarder
Visual Translator
Transforms script text into precise visual instructions. The Storyboarder decodes the script shot by shot and outputs strictly formatted Seedance prompts — including @ character/scene references, camera movements (OTS/dolly/crane/rack focus), lighting atmosphere, action descriptions, and timing markers. It manages @ reference accuracy (critical for video generation quality) and maintains the 4-15 second per shot × 90 second per episode timing budget.
- ›Script → shot-by-shot storyboard conversion
- ›Seedance prompt output with @ reference syntax
- ›Camera language design (OTS/dolly/crane/rack focus etc.)
- ›Lighting design (Rembrandt/silhouette/neon/natural)
- ›Duration budgeting (4-15s per shot, 90s±3s per episode)
- ›100K character budget management
Complete storyboard script (Shot N ⊙ Ns format) with all @ references
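The timing budget above lends itself to a simple validator. This sketch assumes per-shot durations in seconds and the Shot N ⊙ Ns output style; the real storyboard format carries far more detail:

```python
def validate_shot_plan(durations, shot_min=4, shot_max=15,
                       episode_target=90, tolerance=3):
    """Check per-shot durations against the timing budget:
    4-15s per shot, 90s±3s per episode. Returns a list of issues."""
    issues = []
    for i, d in enumerate(durations, start=1):
        if not shot_min <= d <= shot_max:
            issues.append(f"Shot {i}: {d}s outside {shot_min}-{shot_max}s")
    total = sum(durations)
    if abs(total - episode_target) > tolerance:
        issues.append(f"Episode total {total}s outside "
                      f"{episode_target}±{tolerance}s")
    return issues

def format_shots(durations):
    """Render shots in the 'Shot N ⊙ Ns' style of the storyboard output."""
    return [f"Shot {i} ⊙ {d}s" for i, d in enumerate(durations, start=1)]
```

A seven-shot plan of 10+12+15+13+12+14+14 seconds, for example, lands exactly on the 90-second target.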
Art Director
Visual Creator
The visual soul of your crew. The Art Director intelligently routes across multiple image generation models — Gemini 3 Pro for high-quality realistic portraits, Seedream 4.5 for stylized treatments — ensuring characters are photorealistic and still pass video generation content moderation. Creates character cards (2-16 outfit variants per character) and scene cards (5-layer visual structure). Employs an "anchor first, variant later" consistency pipeline for visual coherence across episodes.
- ›Character card generation (2-16 outfit variants per character)
- ›Scene card generation (5-layer visual structure)
- ›Base anchor → variant generation consistency pipeline
- ›AI image generation + stylization (3D render style to pass content moderation)
- ›Asset version management (C01-C99 characters / S01-S99 scenes / P01-P99 props)
- ›Cross-episode visual consistency assurance
Character cards (base + variant references), scene cards, prop cards
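The C01-C99 / S01-S99 / P01-P99 scheme can be enforced with a small validator. The variant-suffix convention shown here is an illustrative assumption, not DramaClaw's actual naming:

```python
import re

# C = character, S = scene, P = prop; two digits 01-99, per the scheme above.
ASSET_ID = re.compile(r"^(C|S|P)(0[1-9]|[1-9][0-9])$")

def valid_asset_id(asset_id):
    """True for well-formed asset IDs like C01, S42, P99."""
    return bool(ASSET_ID.match(asset_id))

def variant_name(anchor_id, variant):
    """'Anchor first, variant later': every variant names its base anchor,
    e.g. C01 -> C01_v02. The _vNN suffix is a hypothetical convention."""
    if not valid_asset_id(anchor_id):
        raise ValueError(f"bad anchor id: {anchor_id}")
    return f"{anchor_id}_v{variant:02d}"
```

Keeping every variant traceable to its anchor is what makes cross-episode consistency checkable later in the pipeline.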
Director
Production Engine
The core production engine, with the most advanced multi-model routing capabilities. After vertical fine-tuning, the Director masters each video model's strengths — Seedance for character consistency and shot control, MiniMax for dynamic scenes and large movements — and routes each shot to the optimal model automatically. Uses image-to-video (I2V) exclusively, with character reference anchors (T2V is strictly forbidden). Supports batch async submission, intelligent polling, and automatic voice injection.
- ›Multi-model intelligent routing (Seedance / MiniMax / custom)
- ›I2V image-to-video with character anchoring (T2V absolutely forbidden)
- ›Batch async submission (--poll 0 immediate return)
- ›Voice auto-injection + intelligent audio trimming (≤15.2s total)
- ›Parallel control (3-5 concurrent tasks)
- ›ffmpeg video compression (11MB→360KB)
All episode video clips (5/10/12/15s × 9:16 vertical), video_manifest.json
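Batch async submission inside a bounded concurrency window might look like the sketch below. The submit call and the routing rule are stand-ins for the real Seedance/MiniMax clients, not actual API calls:

```python
import asyncio

async def submit_shot(shot, model):
    """Stub for an I2V generation call; a real client would POST the
    storyboard prompt plus character anchor image and return a task id."""
    await asyncio.sleep(0)  # placeholder for network latency
    return f"{model}:{shot}"

def route_model(shot):
    """Toy routing rule: dynamic action shots to MiniMax, the rest to
    Seedance. Real routing weighs cost, quality, and speed."""
    return "minimax" if "action" in shot else "seedance"

async def submit_batch(shots, max_concurrent=4):
    """Submit all shots with at most `max_concurrent` in flight,
    matching the 3-5 parallel-task window described above."""
    sem = asyncio.Semaphore(max_concurrent)

    async def worker(shot):
        async with sem:
            return await submit_shot(shot, route_model(shot))

    return await asyncio.gather(*(worker(s) for s in shots))

task_ids = asyncio.run(
    submit_batch(["ep1_shot1", "ep1_shot2_action", "ep1_shot3"]))
```

`asyncio.gather` preserves submission order, so task ids can be zipped straight back onto the storyboard's shot list for polling.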
Voice Artist
Sound Designer
Gives each character a unique voice personality. After vertical fine-tuning, the Voice Artist intelligently routes across TTS engines — Volcengine TTS for rich timbres (ideal for protagonists), MiniMax TTS for low-latency fast-paced scenes, and a voice cloning API for authentic voice reproduction. It personalizes each character's voice (language/accent/age/timbre) and outputs dialogue + BGM + ambient SFX as a multi-track mix, auto-synced to the video timeline.
- ›Multi-character TTS synthesis and voice cloning
- ›Character voice personalization (language/accent/age/timbre)
- ›Dialogue + BGM + ambient SFX multi-track output
- ›ASMR techniques (breathing/fabric/whisper for tension scenes)
- ›Audio-video timeline auto-sync
- ›Cloud voice asset management
Full episode character dialogue audio, BGM, ambient SFX, synced to video timeline
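The ≤15.2s audio budget implies a trimming step. One simple approach, proportional scaling, is sketched below; the production pipeline may well trim more intelligently (cutting silence first, say):

```python
def trim_to_budget(durations, budget=15.2):
    """Scale dialogue clip durations proportionally so their total fits
    the per-shot audio budget; clips already under budget are untouched."""
    total = sum(durations)
    if total <= budget:
        return list(durations)
    scale = budget / total
    return [round(d * scale, 2) for d in durations]
```

This keeps every clip's relative length intact while guaranteeing the mix fits under the video clip it is injected into.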
Editor
Post-Production Master
Multi-track compositing expert. The Editor uses an FFmpeg pipeline to composite video clips, dialogue, BGM, SFX, and subtitles into final cuts, handling transitions, color correction, and volume balancing along the way. Outputs 1080×1920 (9:16 vertical) MP4 files and batch-processes all 30 episodes to a precise 90-second length.
- ›FFmpeg multi-track compositing (video + dialogue + BGM + SFX + subtitles)
- ›Transition design and application
- ›Color correction and image optimization
- ›Auto volume balancing
- ›Batch output (30 episodes × 90s)
- ›720p→1080p smart upscaling (when needed)
30 × MP4 final cuts (1080×1920, 9:16, h.264+AAC, ~90s per episode)
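A compositing pass of this kind reduces to assembling an ffmpeg invocation. This sketch builds (but does not run) one plausible command; the filter values and stream layout are illustrative, not DramaClaw's exact pipeline:

```python
def build_final_cut_cmd(video, dialogue, bgm, subs, out, bgm_db=-18):
    """Build an ffmpeg argv that mixes dialogue over attenuated BGM,
    burns in subtitles, and scales to 1080x1920 h.264+AAC."""
    fc = (
        f"[2:a]volume={bgm_db}dB[bg];"           # duck the BGM track
        "[1:a][bg]amix=inputs=2:duration=first[aout];"
        f"[0:v]scale=1080:1920,subtitles={subs}[vout]"
    )
    return [
        "ffmpeg", "-y",
        "-i", video, "-i", dialogue, "-i", bgm,
        "-filter_complex", fc,
        "-map", "[vout]", "-map", "[aout]",
        "-c:v", "libx264", "-c:a", "aac",
        out,
    ]

cmd = build_final_cut_cmd("ep01.mp4", "ep01_dialogue.wav",
                          "ep01_bgm.mp3", "ep01.srt", "ep01_final.mp4")
```

Running 30 such commands in a loop (one per episode) is the batch step; only the input paths change.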
QA
10-Layer Validation
Full-pipeline quality guardian. The QA Officer executes 10 layers of automated validation: @ reference integrity, Chinese name leakage detection (export dramas must be pure English), duration compliance (4-15s per shot), format validation, character budget (100K limit), platform compliance (TikTok/Reels/Shorts have different safety thresholds), content moderation, character consistency, narrative continuity, and metadata completeness. Flags non-conformances and requests rework.
- ›L1: @ reference integrity — regex validation
- ›L2: Chinese name leakage detection — export dramas must be pure English
- ›L3: Duration compliance — 4-15s per shot, 87-93s per episode
- ›L4-L5: Format and character budget validation
- ›L6: Platform compliance — TikTok/Reels/Shorts safety scores
- ›L7-L10: Content moderation, character consistency, narrative continuity, metadata
10-layer QA report, non-conformance flags and rework requests
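Three of the ten layers (@ reference integrity, Chinese-character leakage, and duration compliance) are mechanical enough to sketch directly. The @CXX reference syntax here is assumed from the asset scheme, not confirmed:

```python
import re

CJK = re.compile(r"[\u4e00-\u9fff]")       # CJK Unified Ideographs block
AT_REF = re.compile(r"@[CSP]\d{2}\b")      # assumed @-reference syntax

def qa_checks(prompt_text, shot_durations, known_assets):
    """Layers L1-L3 as a sketch: every @ reference must resolve to a
    known asset, export text must be CJK-free, and timing must comply."""
    issues = []
    for ref in AT_REF.findall(prompt_text):
        if ref[1:] not in known_assets:
            issues.append(f"L1: unknown reference {ref}")
    if CJK.search(prompt_text):
        issues.append("L2: Chinese characters found in export script")
    if any(not 4 <= d <= 15 for d in shot_durations):
        issues.append("L3: shot outside 4-15s")
    total = sum(shot_durations)
    if not 87 <= total <= 93:
        issues.append(f"L3: episode {total}s outside 87-93s")
    return issues
```

The remaining layers (platform safety scores, character consistency, narrative continuity) need model-based judgment rather than regexes, which is why they sit deeper in the stack.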
Phase 3: Distribution & Analytics
Agents: Launcher, Analyst
Launcher
Platform Launcher
Automated publishing engine. The Launcher integrates with the TikTok Content Posting API, managing auto-upload of all 30 episodes, metadata optimization (title/description/tags/category), smart publish-time scheduling, TikTok Minis paywall configuration, and multi-platform distribution. Once approved, one click publishes everything — no manual intervention needed.
- ›TikTok Content Posting API auto-publish
- ›Metadata optimization (title/description/tags/category)
- ›Smart publish time scheduling
- ›TikTok Minis paywall configuration
- ›Thumbnail and hook A/B testing
- ›Multi-platform distribution (YouTube Shorts / Reels / Douyin)
30-episode auto-publish, release calendar, A/B test plans
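At its simplest, a release calendar spreads episodes over fixed daily slots. This sketch assumes two slots per day, which is an illustrative cadence rather than a platform rule:

```python
from datetime import datetime, timedelta

def release_calendar(start, episodes=30, per_day=2,
                     hours=("12:00", "19:00")):
    """Assign each episode a publish slot, `per_day` slots per day,
    starting at `start` (YYYY-MM-DD, account-local time)."""
    day0 = datetime.strptime(start, "%Y-%m-%d")
    slots = []
    for ep in range(episodes):
        day = day0 + timedelta(days=ep // per_day)
        hh, mm = map(int, hours[ep % per_day].split(":"))
        slots.append((ep + 1, day.replace(hour=hh, minute=mm)))
    return slots

calendar = release_calendar("2025-06-01")
```

Smarter scheduling would shift these slots toward each region's engagement peaks, which is exactly the kind of parameter the Analyst's data can tune.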
Analyst
Analytics Officer
Data flywheel engine and recipe self-evolution driver. The Analyst connects to the TikTok Marketing API, tracking views, completion rate, conversion, and ROI, and builds attribution models that validate recipe parameters. Most critically, the data feeds back into every Agent's Skills for continuous self-evolution: the screenwriter learns better pacing, the director optimizes shot strategies, the launcher refines schedules — the team evolves with every cycle.
- ›TikTok Marketing API data collection (~11h delay)
- ›Views/completion/engagement/conversion tracking
- ›Attribution analysis (genre × structure × region × season → ROI)
- ›Recipe validation — verify recipes with real data
- ›Data feedback to Skills — drive Agent self-evolution
- ›Auto daily/weekly reports + Scout optimization suggestions
Daily/weekly data reports, ROI attribution analysis, pitch optimization suggestions
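The funnel metrics the Analyst tracks reduce to a few ratios. The field names and sample numbers below are illustrative, not real campaign data:

```python
def funnel_metrics(views, completes, unlocks, revenue, spend):
    """Completion rate, paywall conversion, and ROI from funnel counts
    (hypothetical field names, not the Marketing API's schema)."""
    completion = completes / views if views else 0.0
    conversion = unlocks / completes if completes else 0.0
    roi = (revenue - spend) / spend if spend else 0.0
    return {"completion": round(completion, 3),
            "conversion": round(conversion, 3),
            "roi": round(roi, 2)}

m = funnel_metrics(views=200_000, completes=58_000, unlocks=2_900,
                   revenue=8_700.0, spend=3_000.0)
```

Sliced by genre × structure × region × season, ratios like these are what attribution models regress against to validate recipe parameters.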
Recipe-Driven, Self-Evolving Flywheel
The "Recipe" is DramaClaw's core product philosophy — extract recipes from data insights, apply with one click across the entire pipeline, and continuously feed performance data back to Skills and Agent capabilities for self-evolution.
Ready to build your AI crew?
Sign up free. 11 AI specialists stand ready, waiting for your command.
Start Free