SDXL ControlNet Depth

An image generation pipeline built on Stable Diffusion XL that uses depth estimation to apply a provided control image during text-to-image inference.

Deploy SDXL ControlNet Depth behind an API endpoint in seconds.


Example usage

The model accepts two main inputs:

  1. prompt: Text describing the image you want to generate. The output images tend to get better as you add more descriptive words to the prompt.

  2. image: An image provided by the user as a base64 string (a minimal encoding sketch follows this list). The ControlNet uses this input image to control the output from Stable Diffusion XL.
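If your control image is already a file on disk, one way to produce the base64 string is with the standard library alone (a minimal sketch; the full examples below instead use a PIL helper that re-encodes the image as PNG first):

import base64

# Read the raw image bytes and encode them as a base64 string
with open("/path/to/image/input_image_1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")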

The output JSON object contains a key called result, which holds the generated image as a base64 string.

Input
import requests
import os
import base64
from PIL import Image
from io import BytesIO

# Replace the empty string with your model id below
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]
BASE64_PREAMBLE = "data:image/png;base64,"

# Convert a base64 string to a PIL image
def b64_to_pil(b64_str):
    return Image.open(BytesIO(base64.b64decode(b64_str.replace(BASE64_PREAMBLE, ""))))

# Convert a PIL image to a base64 string
def pil_to_b64(pil_img):
    buffered = BytesIO()
    pil_img.save(buffered, format="PNG")
    img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
    return img_str

data = {
    "prompt": "a picture of a raccoon",
    "image": pil_to_b64(Image.open("/path/to/image/input_image_1.jpg")),
}

# Call model endpoint
res = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json=data,
)

# Get output image
res = res.json()
output = res.get("result")

# Convert the base64 model output to an image and save it
img = b64_to_pil(output)
img.save("output_image.png")
os.system("open output_image.png")  # Opens the saved image (macOS)
JSON output
{
    "result": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBg..."
}
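If you don't need a PIL Image object on the client, a minimal alternative (continuing from the script above, and stripping the data-URL preamble in case it is present) is to write the decoded bytes straight to disk:

# Decode the base64 result and write the raw PNG bytes to a file
png_bytes = base64.b64decode(output.replace(BASE64_PREAMBLE, ""))
with open("output_image.png", "wb") as f:
    f.write(png_bytes)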

Here is another example using a different prompt and input image. The code is identical to the example above; only the request payload changes.

Input
data = {
    "prompt": "large bed, abstract painting on the wall, fluffy rug on the floor, ambient lighting, extremely detailed",
    "image": pil_to_b64(Image.open("/path/to/image/input_image_2.jpg")),
}
JSON output
{
    "result": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBg..."
}

Deploy any model in just a few commands

Avoid getting tangled in complex deployment processes. Deploy best-in-class open-source models and take advantage of optimized serving for your own models.

$ truss init -- example stable-diffusion-2-1-base ./my-sd-truss
$ cd ./my-sd-truss
$ export BASETEN_API_KEY=MdNmOCXc.YBtEZD0WFOYKso2A6NEQkRqTe
$ truss push
INFO Serializing Stable Diffusion 2.1 truss.
INFO Making contact with Baseten 👋 👽
INFO 🚀 Uploading model to Baseten 🚀
Upload progress: 0% | | 0.00G/2.39G
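Once the upload completes and the deployment goes live, you can invoke the model with the Python examples above, or directly from the CLI (a hedged sketch; in recent versions of the truss CLI, the -d flag passes a JSON payload, but exact flags may vary by version):

$ truss predict -d '{"prompt": "a picture of a raccoon", "image": "<base64 control image>"}'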