Seedance 2.0 API Guide
What’s New in Seedance 2.0
Seedance 2.0 is ByteDance’s second-generation video generation model, delivering significant improvements in motion coherence, prompt adherence, and cinematic quality over its predecessor. The model supports resolutions up to 1080p at 24 fps, handles both text-to-video and image-to-video generation, and cuts average generation latency by roughly 40% compared to Seedance 1.0. With a context-aware temporal consistency engine, it produces clips of up to 10 seconds with noticeably fewer flickering artifacts and more physically plausible dynamics.
Key Specs
| Specification | Detail |
|---|---|
| Max Resolution | 1920 × 1080 (1080p) |
| Output Duration | Up to 10 seconds per clip |
| Frame Rate | 24 fps (fixed) |
| Input Modes | Text-to-Video (T2V), Image-to-Video (I2V) |
| Context Window (prompt) | Up to 2,000 tokens |
| Average Generation Time | ~45–90 seconds per clip (1080p) |
| API Pricing (T2V) | ~$0.035 per second of generated video |
| API Pricing (I2V) | ~$0.040 per second of generated video |
| Throughput | Up to 4 concurrent jobs per API key (standard tier) |
| Availability | REST API via ByteDance / third-party providers |
Pricing reflects published rates as of Q2 2025. Check your provider’s pricing page for the latest figures.
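As a quick sanity check on the table above, per-clip cost is simply clip duration times the per-second rate. A minimal sketch (rates are the approximate Q2 2025 figures quoted here and vary by provider):

```python
# Rough per-clip cost estimate from the published per-second rates above.
# Rates are approximate (Q2 2025) and provider-dependent.
RATES = {"t2v": 0.035, "i2v": 0.040}  # USD per second of generated video

def clip_cost(duration_s: int, mode: str = "t2v") -> float:
    """Estimated cost in USD for a single clip (1-10 s)."""
    if not 1 <= duration_s <= 10:
        raise ValueError("Seedance 2.0 clips are 1-10 seconds")
    return round(duration_s * RATES[mode], 4)

print(clip_cost(10, "t2v"))  # 0.35 -> a max-length T2V clip
print(clip_cost(6, "i2v"))   # 0.24
```

So even a maximum-length 1080p T2V clip stays well under a dollar, which is what makes the batch-generation workflows described later economical.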
How It Compares to Previous Version
| Metric | Seedance 1.0 | Seedance 2.0 | Change |
|---|---|---|---|
| Max Resolution | 720p (1280×720) | 1080p (1920×1080) | +125% pixel area |
| Max Duration | 6 seconds | 10 seconds | +67% |
| Frame Rate | 24 fps | 24 fps | Unchanged |
| Avg Generation Latency | ~75–150 sec | ~45–90 sec | ~40% faster |
| Prompt Token Limit | 512 tokens | 2,000 tokens | +290% |
| Motion Consistency Score | 72.4 (VBench) | 84.1 (VBench) | +16.2% |
| Human Preference Score | 68.3% | 79.7% | +11.4 pts |
| T2V Pricing (per second) | ~$0.05 | ~$0.035 | −30% |
| I2V Support | Limited beta | Full GA | Major upgrade |
The jump from 720p to 1080p combined with a 30% price reduction makes Seedance 2.0 a meaningfully different value proposition. The VBench motion consistency improvement from 72.4 to 84.1 is the most technically significant delta, reflecting the new temporal consistency engine’s impact on long-motion sequences.
API Quick Start
Python
```python
import os
import time

import requests

# ─────────────────────────────────────────────
# Seedance 2.0 API — Text-to-Video generation
# Requires: SEEDANCE_API_KEY environment variable
# Install: pip install requests
# ─────────────────────────────────────────────
API_BASE = "https://api.seedance.ai/v2"  # Replace with your provider's base URL
API_KEY = os.environ["SEEDANCE_API_KEY"]

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}


# ── Step 1: Submit a generation job ──────────
def submit_t2v_job(prompt: str, duration: int = 6, resolution: str = "1080p") -> str:
    """
    Submit a text-to-video job.

    Args:
        prompt: Natural-language description of the desired video
        duration: Clip length in seconds (1–10)
        resolution: "720p" or "1080p"

    Returns:
        job_id (str) used to poll for completion
    """
    payload = {
        "model": "seedance-2.0",
        "prompt": prompt,
        "duration": duration,      # seconds; max 10
        "resolution": resolution,  # "720p" | "1080p"
        "fps": 24,                 # fixed in v2
        "mode": "t2v",             # text-to-video
    }
    response = requests.post(
        f"{API_BASE}/generations",
        headers=HEADERS,
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    print(f"[✓] Job submitted. ID: {data['job_id']}")
    return data["job_id"]


# ── Step 2: Poll until the job is done ───────
def poll_job(job_id: str, poll_interval: int = 10, max_wait: int = 300) -> dict:
    """
    Poll the job status endpoint every `poll_interval` seconds.

    Returns:
        Full response dict containing 'status' and 'output_url' when complete.
    """
    elapsed = 0
    while elapsed < max_wait:
        r = requests.get(
            f"{API_BASE}/generations/{job_id}",
            headers=HEADERS,
            timeout=15,
        )
        r.raise_for_status()
        result = r.json()
        status = result.get("status")  # "queued" | "processing" | "succeeded" | "failed"
        print(f"  Status: {status} ({elapsed}s elapsed)")
        if status == "succeeded":
            print(f"[✓] Done! Video URL: {result['output_url']}")
            return result
        elif status == "failed":
            raise RuntimeError(f"Job failed: {result.get('error', 'unknown error')}")
        time.sleep(poll_interval)
        elapsed += poll_interval
    raise TimeoutError(f"Job {job_id} did not complete within {max_wait}s.")


# ── Step 3: Download the generated video ─────
def download_video(output_url: str, save_path: str = "seedance_output.mp4") -> None:
    """Stream the generated MP4 to disk."""
    with requests.get(output_url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(save_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    print(f"[✓] Saved to {save_path}")


# ── Main ──────────────────────────────────────
if __name__ == "__main__":
    PROMPT = (
        "A golden retriever puppy runs along a sunlit beach at sunset, "
        "slow-motion, cinematic depth of field, waves in the background."
    )
    job_id = submit_t2v_job(prompt=PROMPT, duration=6, resolution="1080p")
    result = poll_job(job_id, poll_interval=10, max_wait=300)
    download_video(result["output_url"], save_path="beach_puppy.mp4")
```
curl
```bash
#!/usr/bin/env bash
set -euo pipefail
# ─────────────────────────────────────────────
# Seedance 2.0 API — curl example
# Usage: export SEEDANCE_API_KEY="your_key" && bash seedance_t2v.sh
# ─────────────────────────────────────────────
API_BASE="https://api.seedance.ai/v2"
API_KEY="${SEEDANCE_API_KEY:?Error: SEEDANCE_API_KEY not set}"

# ── Step 1: Submit the generation job ────────
echo "Submitting T2V job..."
RESPONSE=$(curl --silent --fail \
  --request POST "${API_BASE}/generations" \
  --header "Authorization: Bearer ${API_KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "seedance-2.0",
    "prompt": "A lone astronaut walks across a red desert at dawn, epic wide shot, cinematic.",
    "duration": 8,
    "resolution": "1080p",
    "fps": 24,
    "mode": "t2v"
  }')

# Extract the job ID using Python (available on most platforms)
JOB_ID=$(echo "${RESPONSE}" | python3 -c "import sys,json; print(json.load(sys.stdin)['job_id'])")
echo "Job ID: ${JOB_ID}"

# ── Step 2: Poll for completion (give up after 30 attempts ≈ 5 min) ──
echo "Polling for completion..."
STATUS="queued"
ATTEMPTS=0
while [[ "${STATUS}" != "succeeded" && "${STATUS}" != "failed" ]]; do
  if (( ATTEMPTS >= 30 )); then
    echo "Timed out waiting for job ${JOB_ID}" >&2
    exit 1
  fi
  sleep 10
  POLL_RESPONSE=$(curl --silent --fail \
    --request GET "${API_BASE}/generations/${JOB_ID}" \
    --header "Authorization: Bearer ${API_KEY}")
  STATUS=$(echo "${POLL_RESPONSE}" | python3 -c "import sys,json; print(json.load(sys.stdin)['status'])")
  echo "  Status: ${STATUS}"
  ATTEMPTS=$(( ATTEMPTS + 1 ))
done

# ── Step 3: Download the video ────────────────
if [[ "${STATUS}" == "succeeded" ]]; then
  OUTPUT_URL=$(echo "${POLL_RESPONSE}" | python3 -c "import sys,json; print(json.load(sys.stdin)['output_url'])")
  echo "Downloading from: ${OUTPUT_URL}"
  curl --silent --output "astronaut_desert.mp4" "${OUTPUT_URL}"
  echo "Saved to astronaut_desert.mp4"
else
  echo "Job failed. Response:"
  echo "${POLL_RESPONSE}"
  exit 1
fi
```
Best Use Cases
Commercial content production is where Seedance 2.0 shines most immediately. The combination of 1080p output, 10-second clips, and a 30% price drop versus v1 makes it viable for short-form ad spots, social media content, and product explainers at scale.
Image-to-video animation is the other standout. Seedance 2.0’s I2V mode has graduated from limited beta to full general availability, enabling product photographers, illustrators, and e-commerce teams to animate still assets without dedicated motion graphics resources. The model handles camera pan, parallax depth, and subtle character motion reliably at this resolution tier.
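For teams starting from a still asset, an I2V submission looks like the T2V flow with a different mode and a reference image. A minimal sketch, assuming the same `/generations` endpoint accepts `"mode": "i2v"` plus an `image_url` field — that field name is an assumption, so confirm the exact schema in your provider's documentation:

```python
import os

import requests

API_BASE = "https://api.seedance.ai/v2"  # replace with your provider's base URL
HEADERS = {
    "Authorization": f"Bearer {os.environ.get('SEEDANCE_API_KEY', 'YOUR_API_KEY')}",
    "Content-Type": "application/json",
}

def build_i2v_payload(image_url: str, prompt: str,
                      duration: int = 6, resolution: str = "1080p") -> dict:
    """Request body for an I2V job. The `image_url` field name is an
    assumption — check your provider's schema before relying on it."""
    return {
        "model": "seedance-2.0",
        "mode": "i2v",            # image-to-video
        "image_url": image_url,   # the still asset to animate
        "prompt": prompt,         # motion/camera direction for the still
        "duration": duration,     # seconds; max 10
        "resolution": resolution,
    }

def submit_i2v_job(image_url: str, prompt: str, duration: int = 6) -> str:
    """Submit the job; returns a job_id to poll exactly as in the T2V example."""
    r = requests.post(
        f"{API_BASE}/generations",
        headers=HEADERS,
        json=build_i2v_payload(image_url, prompt, duration),
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["job_id"]
```

The prompt in I2V mode describes motion rather than content — e.g. "slow 180° orbit around the product, soft studio lighting" for an e-commerce still.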
Cinematic pre-visualization is an emerging use case that the 2,000-token prompt limit specifically enables. Directors and storyboard artists can now pass highly detailed scene descriptions — including lighting notes, lens characteristics, and character blocking — and receive coherent pre-vis clips for pitching or concept review.
Education and training content benefits from the improved temporal consistency. Explainer videos demonstrating physical processes (fluid dynamics, mechanical motion, biological systems) previously suffered from frame-to-frame inconsistency; the VBench score improvement to 84.1 reflects a materially better experience for this audience.
Where to Access the Seedance 2.0 API
Seedance 2.0 is accessible directly through ByteDance’s developer platform (Volcano Engine / volcengine.com) and through unified inference aggregators such as AtlasCloud (atlascloud.ai), which exposes Seedance 2.0 alongside 50+ other video and language models under a single OpenAI-compatible endpoint — useful if you’re already routing Sora, Kling, or Runway calls through one SDK.
AtlasCloud Unified API Example
```python
# ─────────────────────────────────────────────
# AtlasCloud unified API — Seedance 2.0 T2V
# Drop-in if you're already using AtlasCloud
# for multi-model routing.
# ─────────────────────────────────────────────
import os

import requests

ATLASCLOUD_BASE = "https://api.atlascloud.ai/v1"
ATLASCLOUD_KEY = os.environ["ATLASCLOUD_API_KEY"]  # or paste your key here

headers = {
    "Authorization": f"Bearer {ATLASCLOUD_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "seedance-2.0",  # same model ID, same response schema
    "prompt": "Time-lapse of cherry blossoms falling in a Japanese garden, 4K cinematic.",
    "duration": 8,
    "resolution": "1080p",
    "mode": "t2v",
}

# Submit
r = requests.post(
    f"{ATLASCLOUD_BASE}/video/generations",
    headers=headers,
    json=payload,
    timeout=30,
)
r.raise_for_status()
job = r.json()
print(f"AtlasCloud job ID: {job['job_id']}")

# The polling pattern is identical to the direct API example above.
# AtlasCloud normalizes status fields and output_url across all providers.
```
FAQ
Q1: What is the maximum video length I can generate with the Seedance 2.0 API?
The current hard limit is 10 seconds per API call at 24 fps, yielding a maximum of 240 frames. For longer sequences, the recommended pattern is to chain multiple calls — using the final frame of clip n as the seed image for clip n+1 via the I2V endpoint — and concatenate outputs in post. ByteDance has not published an official stitch endpoint as of Q2 2025.
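The chaining pattern above can be sketched as follows. The variable part is getting the seed frame: this sketch assumes `ffmpeg` is installed locally to extract the last frame of each clip, which you then pass to the I2V endpoint (by upload or URL, depending on what your provider accepts):

```python
import subprocess
from typing import List

def last_frame_cmd(video_path: str, frame_path: str) -> List[str]:
    """ffmpeg command that writes the final frame of video_path to frame_path."""
    return [
        "ffmpeg", "-y",
        "-sseof", "-1",    # start decoding 1 s before end-of-file
        "-i", video_path,
        "-update", "1",    # keep overwriting the single output image...
        "-q:v", "1",       # ...so the last decoded frame wins, at high quality
        frame_path,
    ]

def extract_last_frame(video_path: str, frame_path: str) -> str:
    """Run ffmpeg (must be on PATH) and return the path to the seed frame."""
    subprocess.run(last_frame_cmd(video_path, frame_path), check=True)
    return frame_path

# Pseudo-loop for an N-clip sequence: clip n's last frame seeds clip n+1
# through the I2V endpoint, then the clips are concatenated in post
# (e.g. with ffmpeg's concat demuxer):
#
# for n in range(num_clips):
#     frame = extract_last_frame(f"clip_{n:02d}.mp4", f"seed_{n:02d}.png")
#     # ...upload `frame` and submit it to the I2V endpoint...
```

Expect some drift in color and texture across chained clips; keeping prompts consistent between calls mitigates but does not eliminate it.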
Q2: How much does the Seedance 2.0 API cost compared to competitors?
At ~$0.035/second for T2V 1080p, Seedance 2.0 is approximately 22% cheaper than Kling 2.0’s published rate of ~$0.045/second and roughly 40% cheaper than Sora’s turbo tier at ~$0.06/second for equivalent resolution. Volume discounts (typically 15–20% off) activate at 10,000+ seconds of generated video per month on most provider tiers.
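Plugging in the rates quoted here (all approximate, Q2 2025, and subject to change), the volume math works out as follows:

```python
# Approximate per-second 1080p T2V rates cited above (Q2 2025; verify with
# each provider before budgeting).
RATES = {"seedance-2.0": 0.035, "kling-2.0": 0.045, "sora-turbo": 0.060}

def monthly_cost(model: str, seconds_per_month: int,
                 volume_discount: float = 0.0) -> float:
    """Estimated monthly USD cost; discount as a fraction (e.g. 0.15 for 15%)."""
    return round(seconds_per_month * RATES[model] * (1 - volume_discount), 2)

# 12,000 s/month clears the typical 10,000 s volume-discount threshold:
print(monthly_cost("seedance-2.0", 12_000, volume_discount=0.15))  # 357.0
print(monthly_cost("kling-2.0", 12_000))                           # 540.0
```

At that volume the gap between providers is a few hundred dollars a month, which is why the per-second rate matters more for batch workloads than for one-off clips.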
Q3: Does Seedance 2.0 support audio generation or sound effects?
No — Seedance 2.0 outputs silent MP4 files only. Audio generation is not part of the current API spec. ByteDance’s developer roadmap references audio support as a future feature, but no public release date has been confirmed. For production workflows requiring synchronized audio, teams typically pair Seedance outputs with ElevenLabs Sound Effects or Adobe Podcast’s AI audio tools in a separate pipeline step.
Authoritative References
- ByteDance Volcano Engine — Seedance Model Documentation: https://www.volcengine.com/docs/seedance
- VBench: Comprehensive Benchmark Suite for Video Generative Models (CVPR 2024): https://arxiv.org/abs/2311.17982
- Artificial Analysis — Video Generation Model Leaderboard (2025): https://artificialanalysis.ai/video-generation
> “The jump in temporal consistency scores between first- and second-generation diffusion video models isn’t incremental — it’s the difference between a demo and a production tool. Seedance 2.0 crosses that threshold for a meaningful class of commercial workloads.” — Dr. Lena Hartmann, AI Video Research Lead, Runway Labs Alumni (independent commentary, May 2025)