Seedance 2.0 API: Unlock the Next Generation of AI Video
ByteDance's most advanced video generation model, launched April 9, 2026. Cinematic output with native audio, real-world physics, and director-level camera control. Accepts text, image, audio, and video inputs.
Start building with the Seedance 2.0 API
Multiple endpoints for text-to-video, image-to-video, and reference-to-video generation, including optimized fast variants.

ByteDance's most advanced image-to-video model. Animate still images into cinematic video with synchronized audio, start and end frame control, and motion prompts.

ByteDance's most advanced image-to-video model, fast tier. Lower latency and cost with synchronized audio, start and end frame control, and motion prompts.

ByteDance's most advanced text-to-video model, fast tier. Lower latency and cost with cinematic output, native audio, multi-shot editing, and director-level camera control.

ByteDance's most advanced reference-to-video model. Generate video from up to 9 images, 3 videos, and 3 audio clips with native audio and cinematic camera control.

ByteDance's most advanced text-to-video model. Cinematic output with native audio, multi-shot editing, real-world physics, and director-level camera control.

ByteDance's most advanced reference-to-video model, fast tier. Lower latency and cost with up to 9 images, 3 videos, and 3 audio clips as inputs.
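The reference-to-video limits above (up to 9 images, 3 videos, and 3 audio clips) are easy to check client-side before submitting a request. A minimal sketch; the `validateReferenceInputs` helper and its input shape are illustrative, not part of the fal SDK:

```javascript
// Illustrative pre-flight check for reference-to-video inputs.
// The limits come from the endpoint descriptions above: up to
// 9 images, 3 videos, and 3 audio clips per request.
const REFERENCE_LIMITS = { images: 9, videos: 3, audio: 3 };

function validateReferenceInputs({ images = [], videos = [], audio = [] }) {
  const errors = [];
  if (images.length > REFERENCE_LIMITS.images) {
    errors.push(`too many images: ${images.length} > ${REFERENCE_LIMITS.images}`);
  }
  if (videos.length > REFERENCE_LIMITS.videos) {
    errors.push(`too many videos: ${videos.length} > ${REFERENCE_LIMITS.videos}`);
  }
  if (audio.length > REFERENCE_LIMITS.audio) {
    errors.push(`too many audio clips: ${audio.length} > ${REFERENCE_LIMITS.audio}`);
  }
  return errors;
}
```

Running a check like this before calling the API saves a round trip on requests that would be rejected anyway.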
How to get access to Seedance 2.0 API
The client library handles the queue submit protocol for you: it tracks request status updates and returns the result once the request completes.
```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("bytedance/seedance-2.0/text-to-video", {
  input: {
    prompt: "An octopus throws a football in the ocean",
    duration: "5",
    resolution: "720p",
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});

console.log(result.data);
console.log(result.requestId);
```

What Makes The Seedance 2.0 API Different
Director-Level Camera Control API
The model handles complex camera work that other models struggle with. Dolly zooms, rack focuses, tracking shots, POV switches, and smooth handheld movement all work as expected. You describe the shot, and the camera executes it.
Realistic Motion & Action Generation
Fight scenes, vehicle chases, explosions, falling debris. Seedance 2.0 understands how objects interact under force. Collisions have weight, fabric tears realistically, and characters move with physical believability even in high-action sequences.
Built-in Cinematic Audio Generation
Seedance 2.0 generates audio natively alongside video. Music carries deep bass and cinematic warmth. Dialogue is clear with precise lip-sync. Sound effects land exactly on cue. No post-production audio layering needed.
Two API Tiers for Different Use Cases
fal is the official Seedance 2.0 API, supporting two tiers so you can pick the right balance of quality, speed, and cost for your workflow.
| Feature | Seedance 2.0 (Standard) | Seedance 2.0 Fast |
|---|---|---|
| Primary Use Case | Final production renders, cinematic output. | Rapid prototyping, high-volume workloads. |
| Best For | Enterprises, big-budget production teams, major film studios | Solopreneurs, budget-conscious developers, indie film studios, students |
| Default Resolution | Up to 720p (HD). | Up to 720p (Fast upscaling from 480p). |
| Wins on | Best output, sharp quality | Rapid iteration with low latency |
| Director Control | High director control: shots, camera, lighting, and mood in a single prompt. | Less responsive to director controls; may not nail slow motion, multi-shot, or dolly shots on the first try. |
| Cost | Higher cost | Cost-Optimized |
| Audio Generation | Synced audio generation at no extra price | Synced audio generation at no extra price |
| API Endpoints | bytedance/seedance-2.0/text-to-video, bytedance/seedance-2.0/image-to-video, bytedance/seedance-2.0/reference-to-video | bytedance/seedance-2.0/fast/text-to-video, bytedance/seedance-2.0/fast/image-to-video, bytedance/seedance-2.0/fast/reference-to-video |
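The six endpoint IDs in the table follow one pattern: the fast tier inserts a `fast` segment before the mode. A small helper (illustrative, not part of the fal SDK) can build the right path:

```javascript
// Build a Seedance 2.0 endpoint ID from a generation mode and tier.
// Endpoint IDs are taken from the tier comparison table; the helper
// itself is an illustrative convenience, not an official API.
const MODES = ["text-to-video", "image-to-video", "reference-to-video"];

function seedanceEndpoint(mode, { fast = false } = {}) {
  if (!MODES.includes(mode)) {
    throw new Error(`unknown mode: ${mode}`);
  }
  return fast
    ? `bytedance/seedance-2.0/fast/${mode}`
    : `bytedance/seedance-2.0/${mode}`;
}
```

The returned string can be passed straight to fal.subscribe(), for example seedanceEndpoint("text-to-video", { fast: true }) for rapid prototyping.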
Industries Using the Seedance 2.0 API
From film studios to fashion brands, teams across industries are building with Seedance 2.0 on fal.
High Quality Renders For Architecture & Design
World-class luxury look and feel, rendered in minutes. Seedance 2.0 brings photorealistic textures, lighting, and spatial depth to architectural visualization and interior design workflows.
Create Studio Worthy Output For Film & Entertainment
Studios and independent filmmakers can generate storyboard-quality pre-visualization content directly from a script or shot list. Camera moves, lighting moods, and action sequences can be previewed before a single frame is shot, cutting pre-production timelines significantly.
Expand AI-Assisted Ecommerce Campaign Ads
Brands can generate polished, on-brief video assets from a single prompt. Product showcases, lifestyle sequences, and cinematic brand ad spots, produced at the speed of a prompt, not a shoot.
Scale Video Game Development & Virtual Production
Game studios can use Seedance 2.0 to generate high-fidelity cinematic sequences, environmental previews, and in-engine concept footage, all without a dedicated animation pipeline.
Fashion Campaign Assets & Virtual Try-On Videos On Demand
Generate editorial-quality video content without booking a studio, a crew, or a location. Seedance 2.0 handles fabric movement, lighting, and texture with cinematic precision, ideal for lookbooks, campaign content, seasonal drops, and digital runway presentations.
Handheld UGC & AI Creator Content
Seedance 2.0 can replicate the handheld, lo-fi aesthetic of user-generated content while maintaining full creative control. Generate UGC-style video that feels organic and platform-native, ideal for TikTok, Instagram Reels, and YouTube Shorts.
Seedance 2.0 API Examples
Turn on audio to hear the native sound generation. Every example below was generated in a single pass with no post-production.
"Camera follows a man in black sprinting through a crowded street, a group chasing close behind. The shot cuts to a side tracking angle as he panics and crashes into a roadside fruit stall, scrambles to his feet, and keeps running. Sounds of a frantic crowd"
"A spear-wielding warrior clashes with a dual-blade fighter in a maple leaf forest. Autumn leaves scatter on each impact. Wide shot pulls into tight close-ups of parrying blades, then cuts to a slow-motion overhead as both leap into the air"
"Spy thriller style. Front-tracking shot of a female agent in a red trench coat walking forward through a busy street, pedestrians constantly crossing in front of her. She rounds a corner and disappears. A masked girl lurks at the corner, glaring after her. Camera pans forward as the agent walks into a mansion and vanishes. Single continuous take, no cuts"
"15s commercial. Shot 1: side angle, a donkey rides a motorcycle bursting through a barn fence, chickens scatter. Shot 2: close-up of spinning tires on sand, then aerial shot of the donkey doing donuts, dust clouds rising. Shot 3: snow mountain backdrop, the donkey launches off a hillside, text 'Inspire Creativity, Enrich Life' revealed behind it as dust settles"
Seedance 2.0 API FAQ
How does the Seedance 2.0 API work?
Send a POST request to any Seedance 2.0 endpoint with your prompt and parameters. The fal serverless infrastructure handles GPU allocation, inference, and scaling automatically. You get back a URL to the generated video. Use the Python or JavaScript SDK for the simplest integration, or call the REST API directly.
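For direct REST integration, the request is an authenticated POST to the queue endpoint. The sketch below makes two assumptions worth verifying against the dashboard docs: that the queue host is `queue.fal.run` and that auth uses a `Key`-prefixed Authorization header, as with other fal models. The `buildRestRequest` helper is illustrative:

```javascript
// Assemble the raw HTTP request for a direct REST call.
// Assumptions (verify in the fal dashboard docs): queue host is
// queue.fal.run and the auth scheme is "Key <FAL_KEY>".
function buildRestRequest(endpoint, input, apiKey) {
  return {
    url: `https://queue.fal.run/${endpoint}`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Key ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(input),
    },
  };
}

// Usage sketch (requires network access and a real FAL_KEY):
// const { url, options } = buildRestRequest(
//   "bytedance/seedance-2.0/text-to-video",
//   { prompt: "An octopus throws a football in the ocean" },
//   process.env.FAL_KEY
// );
// const response = await fetch(url, options);
```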
What input types & formats does Seedance 2.0 API support?
Input: text prompts, images (JPEG, PNG, WebP), video files (MP4, MOV), and audio files (WAV, MP3). Output: MP4 video with synchronized audio. Resolutions: 480p and 720p. Durations: 4 to 15 seconds. Aspect ratios: 21:9, 16:9, 4:3, 1:1, 3:4, and 9:16.
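The supported values above lend themselves to a simple client-side check before submission. A sketch; the value lists are taken from this FAQ answer, but the `checkParams` helper itself is illustrative, not part of the fal SDK:

```javascript
// Validate Seedance 2.0 request parameters against the documented
// ranges: 480p/720p, 4-15 second durations, and six aspect ratios.
// The helper is an illustrative sketch, not an official API.
const RESOLUTIONS = ["480p", "720p"];
const ASPECT_RATIOS = ["21:9", "16:9", "4:3", "1:1", "3:4", "9:16"];

function checkParams({ resolution, duration, aspectRatio }) {
  const errors = [];
  if (resolution !== undefined && !RESOLUTIONS.includes(resolution)) {
    errors.push(`unsupported resolution: ${resolution}`);
  }
  if (duration !== undefined) {
    const seconds = Number(duration);
    if (!(seconds >= 4 && seconds <= 15)) {
      errors.push(`duration must be 4-15 seconds, got: ${duration}`);
    }
  }
  if (aspectRatio !== undefined && !ASPECT_RATIOS.includes(aspectRatio)) {
    errors.push(`unsupported aspect ratio: ${aspectRatio}`);
  }
  return errors;
}
```

Durations are accepted as strings here because the client example passes duration: "5"; Number() handles both forms.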
How fast is the Seedance 2.0 API?
Generations complete in under 2 minutes. Fast-tier endpoints offer lower latency and cost for production workloads. Standard-tier endpoints prioritize maximum quality. Both tiers run on fal's serverless infrastructure with automatic scaling and no cold starts for sustained traffic.
How do I get access to the Seedance 2.0 API?
Seedance 2.0 is available now on fal as of April 9th, 2026. You can start generating videos immediately in the playground with no setup required. If you are a developer looking to integrate via API, sign up and grab an API key from your dashboard, then use the Python or JavaScript SDK, or call the REST API directly.
What countries can access the Seedance 2.0 API?
The Seedance 2.0 API is available globally through fal's infrastructure. Developers and enterprises in any country can access and integrate the API into their applications.
What is Seedance 2.0?
Seedance 2.0 is ByteDance's latest video generation model. It uses a unified multimodal audio-video architecture that accepts text, image, audio, and video inputs. It generates cinematic video with native audio, multi-shot cuts, and realistic physics in a single generation.
What input types does Seedance 2.0 support?
Seedance 2.0 accepts text prompts, reference images, audio clips, and video inputs. You can combine these to control the output. For example, provide a reference image for visual style, an audio clip for the soundtrack, and a text prompt for the scene description.
How long can generated videos be?
Seedance 2.0 generates videos up to 15 seconds in a single generation. Within that duration, the model can produce multiple shots with natural cuts and transitions, so a single output can feel like an edited sequence rather than a single continuous clip.
How good is the audio quality?
The audio quality is a standout feature. Music has deep bass and cinematic presence. Dialogue is clear with accurate lip-sync. Sound effects are contextually appropriate and well-timed. The model generates audio natively alongside the video, so everything stays in sync without post-production.
Seedance 2.0 API Integration Steps
Get up and running in minutes. No GPUs to manage, no infrastructure to set up.
1. Install the client
   Pick your package manager. For Python, use pip.
   npm install --save @fal-ai/client
2. Create an account on fal
   Sign up to get access to the dashboard and your API keys.
3. Get your API key
   Locate your API credentials in the developer dashboard. Set FAL_KEY as an environment variable in your runtime.
4. Submit a request
   Use fal.subscribe() to submit your request with a prompt and parameters. The client handles the async queue automatically, providing progress updates via onQueueUpdate, and returns the final video URL when generation is complete.
No setup required
Start generating Seedance 2.0 videos instantly in the playground. No API key needed, just describe your scene and hit generate.
Open Playground

Integrate via API
Grab an API key from your dashboard and integrate Seedance 2.0 into your app with a few lines of code. Python and JavaScript SDKs available, plus a REST API for any language.
Get API Key

Read the full guide
Prompting techniques, audio-video generation, image-to-video, reference-to-video, multi-shot workflows, and pricing explained.
Read the Guide

Get in Touch About Seedance 2.0
Want to learn more about integrating Seedance 2.0 into your workflow? Leave your details and our team will reach out.

