Deploy StableLM with Baseten and Truss

Stability AI launches StableLM
Stability AI recently announced the ongoing development of the StableLM series of language models and simultaneously released the first set of checkpoints. Trained on a dataset of 1.5 trillion tokens with relatively small parameter counts (3-billion- and 7-billion-parameter models are included in the release), these models are well suited to conversational and coding tasks.
However, running these models for inference can be challenging given their hardware requirements. With Baseten and Truss, it's straightforward: Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently, while Truss provides a seamless bridge from model development to model delivery.
You can see the full code repository for this project here.
Deploying StableLM
There are four models that were released:
- stabilityai/stablelm-base-alpha-7b
- stabilityai/stablelm-tuned-alpha-7b
- stabilityai/stablelm-base-alpha-3b
- stabilityai/stablelm-tuned-alpha-3b
You can modify the load method in model.py to select the version you'd like to deploy.
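As a minimal sketch of what that looks like, a Truss model.py for StableLM might resemble the following. The CHECKPOINT constant is an assumption for illustration; swap in any of the four released models. The actual model.py in the repository may differ in details.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; replace with any of the four released StableLM models.
CHECKPOINT = "stabilityai/stablelm-tuned-alpha-7b"


class Model:
    def __init__(self, **kwargs):
        self._model = None
        self._tokenizer = None

    def load(self):
        # Download the tokenizer and weights from the Hugging Face Hub.
        self._tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
        self._model = AutoModelForCausalLM.from_pretrained(
            CHECKPOINT, torch_dtype=torch.float16
        ).to("cuda")

    def predict(self, model_input):
        prompt = model_input["prompt"]
        inputs = self._tokenizer(prompt, return_tensors="pt").to("cuda")
        tokens = self._model.generate(
            **inputs,
            max_new_tokens=model_input.get("max_new_tokens", 64),
            temperature=model_input.get("temperature", 0.5),
            top_p=model_input.get("top_p", 0.9),
            top_k=model_input.get("top_k", 0),
            num_beams=model_input.get("num_beams", 4),
            do_sample=True,
        )
        return {"output": self._tokenizer.decode(tokens[0], skip_special_tokens=True)}
```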
Configuring GPU resources for StableLM
We found this model runs reasonably fast on A10Gs; you can configure the hardware you'd like in the config.yaml.
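For example, the resources section of config.yaml might look like the sketch below. The CPU and memory values are assumptions for an A10G deployment; adjust them to your workload.

```yaml
resources:
  accelerator: A10G
  use_gpu: true
  cpu: "3"
  memory: 14Gi
```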
StableLM parameters
The usual GPT-style parameters pass straight through to the inference endpoint (see the example request after this list):
- max_new_tokens (default: 64)
- temperature (default: 0.5)
- top_p (default: 0.9)
- top_k (default: 0)
- num_beams (default: 4)
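For example, a request body might look like the following. The prompt key and exact schema are assumptions that depend on how the predict method in model.py reads its input.

```python
payload = {
    "prompt": "What is the fermi paradox?",
    "max_new_tokens": 128,  # generate a longer completion than the default 64
    "temperature": 0.7,     # slightly more creative sampling
    "top_p": 0.9,
    "top_k": 40,
    "num_beams": 1,
}
```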
Adding system prompts for use in chatbots
If you're using the tuned versions in a chatbot, prepend the input message with the system prompt as described in the StableLM README:
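A sketch of what that looks like in practice is below. The system prompt text and the special <|SYSTEM|>, <|USER|>, and <|ASSISTANT|> tokens follow the format shown in the StableLM README for the tuned models; check the README for the exact wording.

```python
# System prompt for the tuned StableLM models, per the StableLM README.
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""

user_message = "Write a haiku about GPUs."
# Prepend the system prompt and wrap the user message in the tuned models' chat tokens.
prompt = f"{system_prompt}<|USER|>{user_message}<|ASSISTANT|>"
```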
Deploy StableLM to Baseten with Truss
Deploying the truss is easy; simply load it and push.
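In code, that looks roughly like the sketch below, assuming the Truss for this model lives in a local ./stablelm-truss directory (a placeholder path) and that you've generated a Baseten API key.

```python
import baseten
import truss

# Authenticate with your Baseten API key (placeholder value).
baseten.login("YOUR_API_KEY")

# Load the packaged model from its directory and push it to Baseten.
stablelm_truss = truss.load("./stablelm-truss")
baseten.deploy(stablelm_truss, model_name="StableLM")
```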
Once deployed to Baseten, StableLM is available behind a REST API for immediate use in production. From there you can take advantage of our auto-scaling resources to ensure efficient, low-latency performance even in high-traffic scenarios. Get started today with $30 of free credits.
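Once the model is live, you can call it like any other REST endpoint. The snippet below is only a sketch: the URL pattern and model ID are assumptions (copy the exact endpoint from your model page on Baseten), and the payload mirrors the parameters listed above.

```python
import requests

# Placeholder model ID and API key; use the real values from your Baseten dashboard.
MODEL_ID = "YOUR_MODEL_ID"
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    f"https://app.baseten.co/models/{MODEL_ID}/predict",  # assumed endpoint pattern
    headers={"Authorization": f"Api-Key {API_KEY}"},
    json={"prompt": "What is the fermi paradox?", "max_new_tokens": 128},
)
print(resp.json())
```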