Stability AI launches StableLM
Stability AI recently announced the ongoing development of the StableLM series of language models and simultaneously released the first set of checkpoints. Trained on over 1.5 trillion tokens of content with relatively small parameter counts (the release includes three-billion and seven-billion parameter models), these models are well suited to conversational and coding tasks.
However, running inference on these models can be challenging given their hardware requirements. With Baseten and Truss, it can be dead simple: Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently, while Truss provides a seamless bridge from model development to model delivery.
You can see the full code repository for this project here.
Four models were released as part of the StableLM-Alpha launch:
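- stablelm-base-alpha-3b: base model, three billion parameters
- stablelm-base-alpha-7b: base model, seven billion parameters
- stablelm-tuned-alpha-3b: instruction-tuned model, three billion parameters
- stablelm-tuned-alpha-7b: instruction-tuned model, seven billion parameters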
You can modify the load method in model.py to select the version you'd like to deploy.
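As a minimal sketch of what that load method might look like, assuming the Hugging Face transformers checkpoints (the checkpoint name and half-precision loading here are illustrative, not the repo's exact code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class Model:
    def load(self):
        # Swap this checkpoint name for any of the four StableLM variants
        checkpoint = "stabilityai/stablelm-tuned-alpha-7b"
        self._tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        self._model = AutoModelForCausalLM.from_pretrained(
            checkpoint,
            torch_dtype=torch.float16,  # half precision to fit on a single GPU
        ).to("cuda")
```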
Configuring GPU resources for StableLM
We found this model runs reasonably fast on A10Gs; you can configure the hardware you'd like in the config.yaml.
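For example, the resources section of config.yaml might look like the following; the CPU and memory values are illustrative and should be sized for the 3B or 7B variant you deploy:

```yaml
resources:
  accelerator: A10G
  use_gpu: true
  cpu: "3"
  memory: 14Gi
```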
The usual GPT-style generation parameters pass straight through to the inference endpoint, as shown in the example after this list:
- max_new_tokens (default: 64)
- temperature (default: 0.5)
- top_p (default: 0.9)
- top_k (default: 0)
- num_beams (default: 4)
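As a sketch of what an invocation might look like once the model is deployed: the URL pattern, API key header, and input keys below are assumptions, so check the project repo for the exact request schema.

```python
import requests

# MODEL_VERSION_ID and YOUR_API_KEY are placeholders
resp = requests.post(
    "https://app.baseten.co/model_versions/MODEL_VERSION_ID/predict",
    headers={"Authorization": "Api-Key YOUR_API_KEY"},
    json={
        "prompt": "Write a haiku about language models.",
        "max_new_tokens": 128,  # default: 64
        "temperature": 0.7,     # default: 0.5
        "top_p": 0.9,           # default: 0.9
        "top_k": 40,            # default: 0
        "num_beams": 1,         # default: 4
    },
)
print(resp.json())
```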
Adding system prompts for use in chatbots
If you're using the tuned versions in a chatbot, prepend the input message with the system prompt as described in the StableLM README:
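```python
# System prompt for the tuned StableLM models, copied from the StableLM README
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""

# Prepend the system prompt to the user's message before sending it to the model
prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
```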
Deploy StableLM to Baseten with Truss
Deploying the Truss is easy: simply load it and push.
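With the Baseten and Truss Python clients, that looks roughly like this (the API key and directory path are placeholders):

```python
import baseten
import truss

# Authenticate with your Baseten API key (placeholder value)
baseten.login("YOUR_API_KEY")

# Load the packaged model from its directory and push it to Baseten
stablelm_truss = truss.load("./stablelm-truss")
baseten.deploy(stablelm_truss, model_name="StableLM")
```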
Once deployed to Baseten, StableLM is available behind a REST API for immediate use in production. From there you can take advantage of our auto-scaling resources to ensure efficient, low-latency performance even in high-traffic scenarios. Get started today with $30 of free credits.
New in May
Open-source models continue to close the quality gap with their closed-source counterparts. Discover new models for text generation and text-to-speech, learn more about the GPUs they run on, and plug in to the community forming around open-source models in this newsletter.
Understanding NVIDIA’s Datacenter GPU line
NVIDIA has dozens of GPUs that can serve ML models of different sizes. But understanding the performance and cost of these different cards, not to mention just keeping the names straight, is a challenge. This guide helps you navigate NVIDIA’s datacenter GPU lineup.