Stable Video Diffusion

An image-to-video model that adds motion to a provided image, generating a 14- or 25-frame video.

Deploy Stable Video Diffusion behind an API endpoint in seconds.

Deploy model

Example usage

The model accepts 4 different inputs:

  1. image (required): The input image, provided as a base64 string. The entire animation is based on this image.

  2. num_frames (optional): The total number of frames in the animated output video. This value must be either 14 or 25, as the model only supports these two values.

  3. fps (optional): The number of frames per second in the output video.

  4. decoding_t (optional): The number of frames decoded by the model at a time.

The output is a JSON object with a single key, output, whose value is the animated video as a base64 string.
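Taken together, the inputs above can be assembled into a request body like the following sketch. The image_to_base64 helper and build_request function are illustrative, not part of the API; the image path is a placeholder:

```python
import base64

# Illustrative helper: read an image file and return its contents as
# the base64 string the "image" input expects.
def image_to_base64(file_path: str) -> str:
    with open(file_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Minimal request body; only "image" is required.
def build_request(image_path: str) -> dict:
    return {
        "image": image_to_base64(image_path),  # base64-encoded input image
        "num_frames": 14,   # total frames; must be 14 or 25
        "fps": 6,           # playback speed of the output video
        "decoding_t": 5,    # frames decoded by the model at a time
    }
```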

Input
import requests
import os
import base64

# Replace the empty string with your model id below
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]

# Decode a base64 string and write it out as an mp4 file
def base64_to_mp4(base64_string, output_file_path):
    binary_data = base64.b64decode(base64_string)
    with open(output_file_path, "wb") as output_file:
        output_file.write(binary_data)

# Read an image file and encode it as a base64 string
def image_to_base64(file_path: str):
    with open(file_path, "rb") as image_file:
        binary_data = image_file.read()
        base64_data = base64.b64encode(binary_data)
        base64_string = base64_data.decode("utf-8")

    return base64_string

data = {
  "image": image_to_base64("./pirate_ship.jpeg"),
  "num_frames": 14,
  "fps": 6,
  "decoding_t": 5
}

# Call model endpoint
res = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json=data
)

# Get the output of the model
res = res.json()
base64_output = res.get("output")

# Convert the base64 output to an mp4 video
base64_to_mp4(base64_output, "stable-video-diffusion-output.mp4")
JSON output
{
    "output": "/9j/AKDDF0980AFBRKGl098257..."
}

Here is another example using a different image as input.

Input
import requests
import os
import base64

# Replace the empty string with your model id below
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]

# Decode a base64 string and write it out as an mp4 file
def base64_to_mp4(base64_string, output_file_path):
    binary_data = base64.b64decode(base64_string)
    with open(output_file_path, "wb") as output_file:
        output_file.write(binary_data)

# Read an image file and encode it as a base64 string
def image_to_base64(file_path: str):
    with open(file_path, "rb") as image_file:
        binary_data = image_file.read()
        base64_data = base64.b64encode(binary_data)
        base64_string = base64_data.decode("utf-8")

    return base64_string

data = {
  "image": image_to_base64("./toucans.jpeg"),
  "num_frames": 14,
  "fps": 6,
  "decoding_t": 5
}

# Call model endpoint
res = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json=data
)

# Get the output of the model
res = res.json()
base64_output = res.get("output")

# Convert the base64 output to an mp4 video
base64_to_mp4(base64_output, "stable-video-diffusion-output.mp4")
JSON output
{
    "output": "/9j/AKDDF0980AFBRKGl098257..."
}

Deploy any model in just a few commands

Avoid getting tangled in complex deployment processes. Deploy best-in-class open-source models and take advantage of optimized serving for your own models.

$ truss init --example stable-diffusion-2-1-base ./my-sd-truss
$ cd ./my-sd-truss
$ export BASETEN_API_KEY=MdNmOCXc.YBtEZD0WFOYKso2A6NEQkRqTe
$ truss push
INFO: Serializing Stable Diffusion 2.1 truss.
INFO: Making contact with Baseten 👋 👽
INFO: 🚀 Uploading model to Baseten 🚀
Upload progress: 0% | | 0.00G/2.39G