ByteDance's second-generation AI video model with revolutionary @Reference system — combine video, audio, and image references in one request to control camera work, rhythm, and style precisely. Native audio sync, up to 15 seconds at 1080p. Try it free in the Nano Banana 2 Pro AI video generator.
Seedance 2.0 is ByteDance's cutting-edge AI video generation model. Its revolutionary @Reference system lets you upload video, audio, and image references in a single request — the model extracts camera paths, motion patterns, rhythm, and style from each reference to apply precisely to your generated video. With native audio synchronization, V2V editing, and up to 15 seconds at 1080p, Seedance 2.0 sets a new standard for AI video creativity.
@Reference system, native audio sync, V2V video editing, up to 1080p / 15s — explore Seedance 2.0's next-generation capabilities.
Seedance 2.0's @Reference system lets you assign roles to uploaded files using @Image1, @Video1, @Audio1 tags in your prompt. The model extracts camera path from video, beat patterns from audio, and composition style from images — then applies each precisely to the generated video.
Using @Video1 camera path and @Image1 as the first frame, generate a cinematic drone shot of a coastal city at golden hour.
Demonstrates camera path extraction from reference video
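The prompt above can be assembled programmatically. The sketch below is illustrative only: the @Image1/@Video1/@Audio1 tags come from this page, but the `build_prompt` helper and its structure are assumptions, not an official SDK.

```python
# Hypothetical sketch: composing a Seedance 2.0 @Reference prompt.
# The tag names (@Image1, @Video1, @Audio1) are from the page above;
# build_prompt itself is an illustrative helper, not an official API.

def build_prompt(instruction: str, references: dict[str, str]) -> str:
    """Prefix the instruction with the role each uploaded file plays.

    `references` maps an @-tag (e.g. "@Video1") to the role the model
    should extract from that file (camera path, first frame, rhythm...).
    """
    roles = ", ".join(f"{tag} as {role}" for tag, role in references.items())
    return f"Using {roles}, {instruction}"

prompt = build_prompt(
    "generate a cinematic drone shot of a coastal city at golden hour.",
    {"@Video1": "camera path", "@Image1": "first frame"},
)
print(prompt)
```

The same pattern extends naturally to audio references, e.g. `{"@Audio1": "beat pattern"}` to drive cut rhythm.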
Seedance 2.0 automatically generates audio tailored to the video content. This includes multilingual lip-sync dialogue, action-matched sound effects, and emotion-aligned background music. Audio generation can be toggled on or off per request.
A street musician playing guitar in a rainy alley at night, neon lights reflecting off the wet pavement.
Native guitar music and rain sounds automatically generated
The V2V mode lets you modify specific parts of existing videos: change a character's clothing, alter their motion, or replace scene elements while everything else remains exactly as it was. Seedance 2.0 is currently the only top-tier model with V2V editing capability.
Change the character's clothing to a red jacket, keep everything else exactly the same.
Precise character editing with V2V mode
Choose from 480p, 720p (default), and 1080p resolutions at 24fps. Supported aspect ratios include 16:9, 9:16, 1:1, 4:3, 3:4, and 21:9. Generate videos from 4 to 15 seconds, the longest duration in its class.
A slow-motion waterfall in a lush forest at 1080p, 16:9 aspect ratio.
1080p cinematic-quality output
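The output options listed above can be captured in a request payload. This is a minimal sketch assuming hypothetical field names (`resolution`, `aspect_ratio`, `duration`, `audio`); only the allowed values come from this page, so consult the official Seedance API docs for the real schema.

```python
# Hypothetical request payload using the options listed above:
# three resolutions, six aspect ratios, 4-15 s duration at 24 fps,
# and a per-request audio toggle. Field names are assumptions.

VALID_RESOLUTIONS = {"480p", "720p", "1080p"}
VALID_ASPECTS = {"16:9", "9:16", "1:1", "4:3", "3:4", "21:9"}

def make_request(prompt, resolution="720p", aspect="16:9", seconds=5, audio=True):
    if resolution not in VALID_RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution}")
    if aspect not in VALID_ASPECTS:
        raise ValueError(f"unsupported aspect ratio: {aspect}")
    if not 4 <= seconds <= 15:
        raise ValueError("duration must be 4-15 seconds")
    return {
        "prompt": prompt,
        "resolution": resolution,
        "aspect_ratio": aspect,
        "duration": seconds,
        "fps": 24,          # fixed frame rate per the spec above
        "audio": audio,     # native audio generation on/off
    }

req = make_request(
    "A slow-motion waterfall in a lush forest",
    resolution="1080p", aspect="16:9", seconds=10,
)
```

Defaults mirror the page: 720p unless overridden, audio generation enabled.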
Seedance 2.0 AI Video Generator FAQ
The @Reference system lets you upload video, audio, and image files and assign them roles in your prompt using @Image1, @Video1, @Audio1 tags. The model extracts camera motion from video, beat patterns from audio, and composition style from images — then applies each to the generated video.
Seedance 2.0 supports T2V (Text-to-Video), I2V (Image-to-Video), and V2V (Video-to-Video editing). The V2V mode is currently unique to Seedance 2.0 among top-tier video models.
Seedance 2.0 supports 480p, 720p (default), and 1080p at 24fps. Videos can be 4 to 15 seconds long. Supported aspect ratios include 16:9, 9:16, 1:1, 4:3, 3:4, and 21:9.
When enabled, Seedance 2.0 automatically generates synchronized audio with the video — multilingual lip-sync dialogue, action-matched sound effects, and emotion-aligned background music. Audio generation can be toggled on or off per request.
Up to 9 image references (each ≤30MB), 3 video references (2-15 seconds each, ≤50MB), and 3 audio references (≤15 seconds each, ≤15MB) per request. Maximum 12 files across all modalities combined.
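The limits above can be checked client-side before uploading. A minimal sketch, encoding only the published numbers (9 images at 30MB, 3 videos at 50MB, 3 audio at 15MB, 12 files total); the function itself is not part of any official client.

```python
# Hypothetical pre-upload check against the reference limits stated
# above. Encodes only the published per-modality counts/sizes and the
# 12-file combined cap; not an official Seedance client.

LIMITS = {
    "image": {"max_count": 9, "max_mb": 30},
    "video": {"max_count": 3, "max_mb": 50},
    "audio": {"max_count": 3, "max_mb": 15},
}
MAX_TOTAL_FILES = 12

def validate_references(files):
    """files: list of (modality, size_in_mb) tuples. Raises ValueError
    on any violation, returns True when all limits are respected."""
    if len(files) > MAX_TOTAL_FILES:
        raise ValueError("more than 12 reference files in one request")
    counts = {"image": 0, "video": 0, "audio": 0}
    for modality, size_mb in files:
        rule = LIMITS[modality]
        counts[modality] += 1
        if counts[modality] > rule["max_count"]:
            raise ValueError(f"too many {modality} references")
        if size_mb > rule["max_mb"]:
            raise ValueError(f"{modality} file exceeds {rule['max_mb']}MB")
    return True

ok = validate_references([("image", 10), ("video", 40), ("audio", 5)])
```

Note the per-reference duration limits (2-15s for video, up to 15s for audio) would also need checking; they are omitted here since file duration is not knowable from size alone.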
No. Seedance 2.0 does not support uploading real people's face images or videos. Content safety policies automatically reject such uploads.
Seedance 2.0 adds video and audio reference inputs (absent in 1.5 Pro), raises the image-reference limit to 9, introduces the V2V editing mode, extends the maximum duration from 12s to 15s, and debuts the full @Reference system.
“The @Reference system is a game-changer. Extracting camera paths from reference clips and applying them directly — it's a completely new workflow.”
Free to try. No credit card required. Experience next-generation AI video generation with @Reference system and native audio.