New in May 2023

Baseten's model library offers two-click deploys for popular foundation models

The latest foundation models in two clicks

Our refreshed and restocked model library features familiar favorites like Stable Diffusion 2.1, Whisper, and Alpaca, all available to deploy directly to your Baseten account. Deployed library models run on their own instance, giving you control over resource configuration and autoscaling, plus visibility into logs and metrics.
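
Once a library model is deployed, you can call it like any other Baseten model over HTTPS. Here's a rough sketch in Python; the model ID, endpoint path, and input schema below are placeholders, so check the deployed model's page in your Baseten workspace for the exact values.

```python
import os
import requests

# Rough sketch: invoking a deployed library model over HTTPS.
# The model ID and request body are hypothetical placeholders; the model's
# page in your Baseten workspace shows the exact URL and input schema.
BASETEN_API_KEY = os.environ["BASETEN_API_KEY"]
MODEL_ID = "YOUR_MODEL_ID"  # placeholder

resp = requests.post(
    f"https://app.baseten.co/models/{MODEL_ID}/predict",
    headers={"Authorization": f"Api-Key {BASETEN_API_KEY}"},
    json={"prompt": "A watercolor painting of a lighthouse at dawn"},
)
resp.raise_for_status()
print(resp.json())
```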

Exciting new open-source ML models are released every week. Here are three that you should know about:

WizardLM, Bark, and StableLM are three new models with great performance and results

WizardLM: an open-source ChatGPT

WizardLM is an LLM fine-tuned to behave like ChatGPT. It’s a great option for projects that want ChatGPT-like behavior, quality, and results from an open-source model.

Deploy WizardLM

Bark: generative AI for audio

Bark is a speech-generation model that creates audio based on text prompts. It’s a remarkably accurate and natural-sounding text-to-speech model.

Deploy Bark
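
If you’d like to experiment with Bark locally before deploying, the open-source suno-ai/bark package exposes a small Python API. Here’s a minimal sketch assuming that package, plus SciPy for writing the generated audio to a WAV file:

```python
# Minimal sketch using the open-source bark package (suno-ai/bark)
# and SciPy to write the generated audio to disk.
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # downloads and caches the model weights on first run

audio_array = generate_audio("Hello! This sentence was never recorded by a human.")
write_wav("bark_sample.wav", SAMPLE_RATE, audio_array)
```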

StableLM: an LLM by the creators of Stable Diffusion

StableLM is Stability AI’s project to train and fine-tune a family of open large language models for a range of tasks and behaviors. The models are trained on some of the largest and most advanced open-source datasets available.

Deploy StableLM
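
For local experimentation, Stability AI has also published StableLM checkpoints on Hugging Face. The sketch below assumes the stabilityai/stablelm-tuned-alpha-7b checkpoint and the transformers library; checkpoint names and recommended prompt formats may differ, so check Stability AI’s Hugging Face page for the current models.

```python
# Minimal sketch: text generation with a StableLM checkpoint via transformers.
# The checkpoint name is an assumption; see Stability AI's Hugging Face page
# for the current list of StableLM models and their prompt formats.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "Write a haiku about open-source language models.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```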

Everything you need to know about GPUs

NVIDIA’s A10 GPU is a workhorse for model serving.

But, pop quiz: what does “A10” even mean?

Well, the “A” refers to the GPU’s microarchitecture, Ampere, while the “10” identifies the card’s tier within that generation. The name tells us how the card stacks up against other GPUs: it’s smaller than the A100 but built on the same architecture, and it’s newer than the Turing-based T4, another popular and capable graphics card for model inference.

NVIDIA GPU microarchitecture generations of the last twelve years

If you want to learn more about the fascinating hardware powering the generative AI boom, we’ve been writing about GPUs on our blog.

ML momentum continues from coast to coast

We had another great meetup in our San Francisco office, this time featuring demos from talented builders across AI. Please join us in NYC for a meetup on June 15; details will be shared soon on Twitter!

Builders showed their projects at the demo station at our SF meetup

The Baseten team got together this month in person for a hackathon. Shoutout to Samiksha for her winning project: “I stream for ice cream,” which prototyped streaming responses for language models with Truss! Many of the hackathon projects are—after some refactoring—making their way into the Baseten platform as exciting new features that we look forward to sharing soon.

Ice cream was a common theme between the winning project and a favorite outing!

See you next month!

— The team at Baseten