Choose a Video Generation Model

Select an OpenRouter video model by matching clip requirements and scoring priorities

Use this guide when you need to select a video model based on the clip your app needs to generate.

By the end, your implementation should have a small model-selection helper that filters models by capability and scores them by priority before submitting a video job.


For reusable agent knowledge across projects, install the openrouter-video skill.

Before you start

You need:

  • Node.js 20 or newer
  • An OpenRouter API key available as OPENROUTER_API_KEY, but only if you submit the optional generation request
  • A stable, directly downloadable image URL if you test an image-to-video request

Use the API reference pages as the source of truth for exact fields.

Submitting POST /api/v1/videos starts a real video generation job and may spend OpenRouter credits. Use the model-selection and request-preview steps first, then submit only when the request is ready.
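One way to keep the paid submission opt-in is to gate it behind an explicit flag. This is a minimal sketch; the SUBMIT_VIDEO_JOB variable name is a hypothetical convention for this guide, not an OpenRouter requirement:

```javascript
// Gate the paid POST /api/v1/videos call behind an explicit opt-in flag so the
// model-selection and request-preview steps can run without spending credits.
// SUBMIT_VIDEO_JOB is a hypothetical variable name chosen for this sketch.
function shouldSubmit(env) {
  return env.SUBMIT_VIDEO_JOB === "true";
}

if (shouldSubmit(process.env)) {
  console.log("Submitting the video generation job...");
} else {
  console.log("Dry run: preview the request body, then set SUBMIT_VIDEO_JOB=true.");
}
```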

Step 1: Fetch the video model list

Call the dedicated video model endpoint:

```javascript
const response = await fetch("https://openrouter.ai/api/v1/videos/models");

if (!response.ok) {
  throw new Error(await response.text());
}

const { data } = await response.json();
const models = data;

console.log(models.map((model) => model.id));
```

Actual output from the model-list call:

```json
[
  "kwaivgi/kling-v3.0-pro",
  "kwaivgi/kling-v3.0-std",
  "google/veo-3.1-fast",
  "google/veo-3.1-lite",
  "kwaivgi/kling-video-o1",
  "minimax/hailuo-2.3",
  "bytedance/seedance-2.0",
  "bytedance/seedance-2.0-fast",
  "alibaba/wan-2.7",
  "alibaba/wan-2.6",
  "bytedance/seedance-1-5-pro",
  "openai/sora-2-pro",
  "google/veo-3.1"
]
```

Each model includes the values you need for routing decisions. Use the List video generation models API reference as the source of truth for the endpoint response and model metadata fields. If your app uses the TypeScript SDK, see the generated listVideosModels SDK reference for the SDK method shape.
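The metadata fields the rest of this guide inspects can be sketched as a small shape check. The sample object below is illustrative, not real API output; consult the List video generation models API reference for the authoritative field list and values:

```javascript
// Illustrative model object using the metadata fields this guide's filtering
// and scoring helpers rely on. The id, values, and SKU key are hypothetical.
const sampleModel = {
  id: "example/video-model",
  supported_resolutions: ["720p", "1080p"],
  supported_aspect_ratios: ["16:9", "9:16"],
  supported_durations: [5, 10],
  supported_frame_images: ["first_frame"],
  pricing_skus: { "720p-5s": "0.04" }, // SKU names and units vary by provider
};

// Collect the routing-relevant fields a selection helper should inspect.
function describeRoutingFields(model) {
  return {
    id: model.id,
    resolutions: model.supported_resolutions ?? [],
    aspectRatios: model.supported_aspect_ratios ?? [],
    durations: model.supported_durations ?? [],
    frameImages: model.supported_frame_images ?? [],
    skuCount: Object.keys(model.pricing_skus ?? {}).length,
  };
}

console.log(describeRoutingFields(sampleModel));
```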

Step 2: Filter by the job you want to run

Start by translating the product request into model requirements: clip length, output shape, generation mode, audio, deterministic retries, provider-specific controls, and cost. Use the API reference above for the exact metadata fields to inspect before filtering.

For example, this helper finds models that can generate a 720p, vertical, image-to-video clip with first-frame support:

```javascript
function findVideoModels(models) {
  return models.filter((model) => {
    const supportsRequest =
      model.supported_resolutions?.includes("720p") &&
      model.supported_aspect_ratios?.includes("9:16") &&
      model.supported_durations?.includes(5) &&
      model.supported_frame_images?.includes("first_frame");

    return supportsRequest;
  });
}

function getLowestAdvertisedPrice(model) {
  const prices = Object.values(model.pricing_skus ?? {})
    .map((price) => Number(price))
    .filter((price) => Number.isFinite(price));

  return prices.length > 0 ? Math.min(...prices) : Number.POSITIVE_INFINITY;
}

const matchingModels = findVideoModels(models).sort((first, second) => {
  return getLowestAdvertisedPrice(first) - getLowestAdvertisedPrice(second);
});

if (matchingModels.length === 0) {
  throw new Error("No matching video model found.");
}

console.log(
  JSON.stringify(
    matchingModels.map((match) => ({
      id: match.id,
      lowest_advertised_price: getLowestAdvertisedPrice(match),
    })),
    null,
    2,
  ),
);
```

Example output:

```json
[
  {
    "id": "bytedance/seedance-1-5-pro",
    "lowest_advertised_price": 0.0000012
  },
  {
    "id": "bytedance/seedance-2.0-fast",
    "lowest_advertised_price": 0.0000056
  },
  {
    "id": "bytedance/seedance-2.0",
    "lowest_advertised_price": 0.000007
  },
  {
    "id": "alibaba/wan-2.6",
    "lowest_advertised_price": 0.04
  },
  {
    "id": "kwaivgi/kling-v3.0-std",
    "lowest_advertised_price": 0.084
  },
  {
    "id": "alibaba/wan-2.7",
    "lowest_advertised_price": 0.1
  },
  {
    "id": "kwaivgi/kling-v3.0-pro",
    "lowest_advertised_price": 0.112
  },
  {
    "id": "kwaivgi/kling-video-o1",
    "lowest_advertised_price": 0.112
  }
]
```

At this point, you have models that satisfy the hard requirements. Score the matching set before selecting one.

Step 3: Score the matching models by priority

Use weighted priorities to make the final choice. For example, a draft workflow might prioritize speed and cost, while a production render might prioritize quality and cost:

```javascript
const priorityProfiles = {
  fastAndCheap: {
    speed: 0.55,
    cost: 0.35,
    quality: 0.1,
  },
  qualityAndCost: {
    speed: 0.15,
    cost: 0.3,
    quality: 0.55,
  },
  balanced: {
    speed: 0.33,
    cost: 0.33,
    quality: 0.34,
  },
};

const resolutionRanks = new Map([
  ["480p", 1],
  ["720p", 2],
  ["1080p", 3],
  ["4K", 4],
]);

function getResolutionRank(model) {
  return Math.max(
    0,
    ...(model.supported_resolutions ?? []).map((resolution) => {
      return resolutionRanks.get(resolution) ?? 0;
    }),
  );
}

function getSpeedScore(model) {
  const id = model.id.toLowerCase();

  if (id.includes("fast")) return 1;
  if (id.includes("lite") || id.includes("std")) return 0.8;
  if (id.includes("pro") || id.includes("o1")) return 0.35;

  return 0.55;
}

function normalize(value, min, max, invert = false) {
  if (!Number.isFinite(value) || max === min) {
    return 0.5;
  }

  const score = (value - min) / (max - min);

  return invert ? 1 - score : score;
}

function scoreVideoModels(models, weights) {
  const prices = models.map(getLowestAdvertisedPrice).filter(Number.isFinite);
  const minPrice = prices.length > 0 ? Math.min(...prices) : 0;
  const maxPrice = prices.length > 0 ? Math.max(...prices) : 0;
  const maxResolutionRank = Math.max(0, ...models.map(getResolutionRank));

  return models
    .map((model) => {
      const price = getLowestAdvertisedPrice(model);
      const speedScore = getSpeedScore(model);
      const costScore = Number.isFinite(price)
        ? normalize(price, minPrice, maxPrice, true)
        : 0;
      const qualityScore =
        maxResolutionRank === 0 ? 0.5 : getResolutionRank(model) / maxResolutionRank;
      const score =
        weights.speed * speedScore +
        weights.cost * costScore +
        weights.quality * qualityScore;

      return {
        model,
        id: model.id,
        score: Number(score.toFixed(3)),
        lowest_advertised_price: price,
        speed_score: Number(speedScore.toFixed(3)),
        cost_score: Number(costScore.toFixed(3)),
        quality_score: Number(qualityScore.toFixed(3)),
      };
    })
    .sort((first, second) => second.score - first.score);
}

function summarizeScores(rankedModels) {
  return rankedModels.slice(0, 4).map(({ model: _model, ...summary }) => {
    return summary;
  });
}

const fastAndCheapModels = scoreVideoModels(
  matchingModels,
  priorityProfiles.fastAndCheap,
);
const qualityAndCostModels = scoreVideoModels(
  matchingModels,
  priorityProfiles.qualityAndCost,
);
const model = fastAndCheapModels[0]?.model;

if (!model) {
  throw new Error("No scored video model found.");
}

console.log(
  JSON.stringify(
    {
      fast_and_cheap: summarizeScores(fastAndCheapModels),
      quality_and_cost: summarizeScores(qualityAndCostModels),
    },
    null,
    2,
  ),
);
console.log(`Use ${model.id}`);
```

Actual output from the scoring helper:

```
{
  "fast_and_cheap": [
    {
      "id": "bytedance/seedance-2.0-fast",
      "score": 0.967,
      "lowest_advertised_price": 0.0000056,
      "speed_score": 1,
      "cost_score": 1,
      "quality_score": 0.667
    },
    {
      "id": "bytedance/seedance-2.0",
      "score": 0.752,
      "lowest_advertised_price": 0.000007,
      "speed_score": 0.55,
      "cost_score": 1,
      "quality_score": 1
    },
    {
      "id": "bytedance/seedance-1-5-pro",
      "score": 0.642,
      "lowest_advertised_price": 0.0000012,
      "speed_score": 0.35,
      "cost_score": 1,
      "quality_score": 1
    },
    {
      "id": "alibaba/wan-2.6",
      "score": 0.628,
      "lowest_advertised_price": 0.04,
      "speed_score": 0.55,
      "cost_score": 0.643,
      "quality_score": 1
    }
  ],
  "quality_and_cost": [
    {
      "id": "bytedance/seedance-2.0",
      "score": 0.932,
      "lowest_advertised_price": 0.000007,
      "speed_score": 0.55,
      "cost_score": 1,
      "quality_score": 1
    },
    {
      "id": "bytedance/seedance-1-5-pro",
      "score": 0.903,
      "lowest_advertised_price": 0.0000012,
      "speed_score": 0.35,
      "cost_score": 1,
      "quality_score": 1
    },
    {
      "id": "alibaba/wan-2.6",
      "score": 0.825,
      "lowest_advertised_price": 0.04,
      "speed_score": 0.55,
      "cost_score": 0.643,
      "quality_score": 1
    },
    {
      "id": "bytedance/seedance-2.0-fast",
      "score": 0.817,
      "lowest_advertised_price": 0.0000056,
      "speed_score": 1,
      "cost_score": 1,
      "quality_score": 0.667
    }
  ]
}
Use bytedance/seedance-2.0-fast
```

Pick the model that best fits your product needs after capability matching. For example, you might prefer the lowest compatible price, audio support, seed support, provider-specific controls, a specific provider, or a known latency profile. The speed score is a slug-based heuristic, and the quality score uses resolution support as a proxy. Pricing SKU units can differ by provider, so treat the helper as a quick starting point and inspect the matching model’s pricing_skus before routing production traffic.
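To inspect pricing_skus before routing, a small helper can list the advertised SKU entries for the chosen model. The sample object and SKU keys below are illustrative, since real keys and units vary by provider:

```javascript
// List a model's advertised pricing SKUs so per-SKU units can be checked
// before routing production traffic. The model object is a hypothetical sample.
function listPricingSkus(model) {
  return Object.entries(model.pricing_skus ?? {}).map(([sku, price]) => ({
    sku,
    price: Number(price),
  }));
}

const exampleModel = {
  id: "example/video-model", // hypothetical slug
  pricing_skus: { "720p-5s": "0.04", "1080p-5s": "0.1" },
};

for (const { sku, price } of listPricingSkus(exampleModel)) {
  console.log(`${exampleModel.id} ${sku}: ${price}`);
}
```

In the real flow, pass the selected model from the scoring step instead of the sample object.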

Step 4: Preview the generation request

Before submitting, have the implementation build the exact request body it will send. This makes capability mismatches visible before starting a paid job:

```javascript
const firstFrameUrl = process.env.FIRST_FRAME_URL;

if (!firstFrameUrl) {
  throw new Error("Set FIRST_FRAME_URL to a directly downloadable image URL.");
}

const requestBody = {
  model: model.id,
  prompt:
    "A handheld vertical product shot of a ceramic mug on a sunny kitchen counter",
  duration: 5,
  resolution: "720p",
  aspect_ratio: "9:16",
  frame_images: [
    {
      type: "image_url",
      image_url: {
        url: firstFrameUrl,
      },
      frame_type: "first_frame",
    },
  ],
};

console.log(JSON.stringify(requestBody, null, 2));
```

Before submitting, check that your image URL returns 200 with an image content type:

```shell
curl -I "$FIRST_FRAME_URL"
```

Example output:

```
HTTP/2 200
content-type: image/jpeg
```

Step 5: Submit when ready

```javascript
const apiKey = process.env.OPENROUTER_API_KEY;

if (!apiKey) {
  throw new Error("Set OPENROUTER_API_KEY before submitting a video job.");
}

const generation = await fetch("https://openrouter.ai/api/v1/videos", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(requestBody),
});

if (!generation.ok) {
  throw new Error(await generation.text());
}

console.log(await generation.json());
```

The submission response contains the job id, polling_url, and an initial status. In a completed run, that submitted job later reached this final state:

```json
{
  "id": "S2wge1oFOBzIj1PpFcFu",
  "status": "completed",
  "polling_url": "https://openrouter.ai/api/v1/videos/S2wge1oFOBzIj1PpFcFu",
  "has_unsigned_urls": true
}
```

Check your work

Before submission, you should see a request body whose model supports every capability you filtered for. If you submit the request, you should see a response with a video job id, a polling_url, and an initial status such as pending. To wait for the playable MP4, use the polling and download helper from Generate and Download a Video from Text.
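The polling loop that helper implements can be sketched as below. This is a minimal sketch assuming only the job response shape shown above (id, status, polling_url); the fetch function is injected so the loop can be exercised without a real network call, and the status strings other than "completed" are assumptions to verify against the API reference:

```javascript
// Poll a video job's polling_url until it reports a terminal status.
// fetchJson is injected; in production, pass an authenticated fetch wrapper,
// e.g. (url) => fetch(url, { headers }).then((res) => res.json()).
async function waitForVideoJob(
  pollingUrl,
  fetchJson,
  { intervalMs = 5000, maxAttempts = 60 } = {},
) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchJson(pollingUrl);

    if (job.status === "completed") return job;
    if (job.status === "failed") throw new Error(`Video job failed: ${job.id}`);

    // Still pending or processing: wait before the next poll.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }

  throw new Error("Timed out waiting for the video job.");
}
```

Once the job completes, download the MP4 as described in Generate and Download a Video from Text.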