Deploying and using Stable Diffusion XL 1.0
Stable Diffusion XL 1.0 is a highly capable text-to-image model from Stability AI, released on July 26, 2023 under the CreativeML Open RAIL++-M license.
Deploy Stable Diffusion XL 1.0
You can deploy Stable Diffusion XL 1.0 in two clicks from Baseten’s model library. It’s also available packaged as a Truss on GitHub.
Hardware requirements
Stable Diffusion XL requires an A100 GPU to run. In our testing, generating an image takes 8-12 seconds.
Manual deployment
Sign up or sign in to your Baseten account and create an API key. Then run:
git clone https://github.com/basetenlabs/truss-examples
pip install --upgrade baseten
baseten login
Paste your API key when prompted.
Once authenticated, run the following script in an IPython notebook to deploy SDXL to your Baseten account:
import baseten
import truss

# Load the SDXL Truss from the cloned examples repo
sdxl = truss.load("truss-examples/sdxl-1.0/")

# Deploy the packaged model to your Baseten account
baseten.deploy(
    sdxl,
    model_name="Stable Diffusion XL 1.0"
)
Use Stable Diffusion XL 1.0
This model is capable of generating stunningly detailed and accurate images from simple prompts.
To invoke the model, run:
import baseten

# You can retrieve your deployed model version ID from the UI
model = baseten.deployed_model_version_id('MODEL_VERSION_ID')

request = {
    "prompt": "A tree in a field under the night sky",
    "use_refiner": True
}

response = model.predict(request)
The output is a dictionary whose data key maps to a base64-encoded image. You can save the image with the following snippet:
import base64

# Decode the base64 string back into raw image bytes
img = base64.b64decode(response["data"])

# Write the bytes to disk as a JPEG file
with open("image.jpeg", "wb") as img_file:
    img_file.write(img)
The Stable Diffusion Refiner model
The Stable Diffusion Refiner model adds accuracy to difficult-to-generate details like facial features and hands. You can choose whether or not to use the refiner model in an invocation with the use_refiner parameter.
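As a minimal sketch, here is the same request payload from the earlier example with the refiner disabled; the prompt is illustrative, and prompt and use_refiner are the only parameters assumed here:

```python
# Request payload with the refiner pass disabled.
# Skipping the refiner trades some fine detail (e.g. faces, hands)
# for a faster generation.
request = {
    "prompt": "A tree in a field under the night sky",
    "use_refiner": False,  # run the base model only, no refinement pass
}
```

Pass this dictionary to model.predict() exactly as in the invocation example above.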
Example outputs
Reach out to us at support@baseten.co with any questions!