Changelog

Our latest product additions and improvements.

Dec 13, 2022

Slow dev loops break flow state and make for a frustrating experience. And for data scientists, slow dev loops make all but the most essential deployment workflows too expensive and time-consuming to even consider.

To speed up dev loops in model deployment, Baseten is introducing draft models. For more, read our blog post on using this feature to accelerate your workflows.

By default, the baseten.deploy() command deploys your model as a draft. Here’s a simple example:

import baseten

# By default, deploy() creates a draft deployment for fast iteration.
baseten.deploy(
    packaged_model,  # your packaged model object
    model_name="Penguin Predictor"
)

When you're ready to publish your model, just pass publish=True to the same deploy command.
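
When publishing, the call is otherwise identical. Here's a sketch, reusing packaged_model from the snippet above:

import baseten

baseten.deploy(
    packaged_model,
    model_name="Penguin Predictor",
    publish=True  # promote from draft to a published deployment
)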

To get started with draft models, read the docs or try our demo notebook!

Nov 29, 2022

Flan-T5 XL is an open-source large language model developed by Google. Flan-T5 is an instruction-tuned model, meaning that it exhibits zero-shot-like behavior when given instructions as part of the prompt. You can learn more about instruction tuning on Google's blog.
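
To get a feel for instruction-following behavior, here is a minimal sketch that runs the open-source checkpoint locally (outside Baseten) with Hugging Face transformers; the google/flan-t5-xl weights are several gigabytes, so a GPU helps:

from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the instruction-tuned checkpoint.
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")

# The instruction is given as part of the prompt itself.
prompt = "Answer the following question: what is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))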

The model also comes with a starter app so that you can experiment with instruction tuning. You can give it a try here!

A screenshot of the Flan-T5 XL starter app

If you want to fine-tune and build with state-of-the-art models like Flan-T5, check out what we are working on with Blueprint and join the waitlist for early access.

Nov 15, 2022

The latest release of Truss, version 0.1.5, introduces a live reload mechanism to improve developer velocity when working with Docker.

Docker is great because it makes your development environment nearly identical to your production environment. But that comes at the expense of rebuilding your environment when you make changes to your Truss. With live reload, you can now make changes to your model code and keep the same Docker container running, which can save several minutes every time you change your code.

To enable this feature, install the latest version of Truss and set live_reload = True in your Truss config file.
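
Concretely, that's an upgrade of the truss package plus one line in the Truss's config.yaml. A sketch, assuming the YAML key matches the setting named above:

# config.yaml (other settings unchanged)
live_reload: true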

Nov 8, 2022

Until today, applications on your Baseten account shared a single Python environment. Now, you can install Python packages from PyPI or system packages like ffmpeg on an app-by-app basis. What’s more, draft and production versions of the same application also run in different environments.

This means that you can:

  • Install or upgrade a Python package without affecting applications in production

  • Run different versions of the same package in different applications

  • Publish and manage your code and dependencies in sync

Baseten’s application builder is designed for building apps that handle real production use cases, and this change gives you an even more flexible, robust developer experience.

🎃 The pumpkin patch

This week’s small-but-mighty changes to bring more magic to your models!

Use more keyboard shortcuts: Accelerate your workflows with a dozen new view builder keyboard shortcuts, listed here. My favorite: nudge components around the view with arrow keys.

Copy-and-paste improvements: Multiselect and copy-and-paste between views now work together, and pasting multiple components preserves their relative layout.

Nov 3, 2022

Baseten now supports MLflow models via Truss. MLflow is a popular library for model experimentation and model management with over ten million monthly downloads on PyPI. With MLflow, you can train a model in any framework (PyTorch, TensorFlow, XGBoost, etc.) and access features for tracking, packaging, and registering your model. And now, deploying to Baseten is a natural extension of MLflow-based workflows.

Deploying an MLflow model looks a bit like this:

import mlflow
import baseten

# MODEL_URI identifies a logged or registered MLflow model (see the sketch below)
model = mlflow.pyfunc.load_model(MODEL_URI)
baseten.deploy(model, "MLflow model")

For a complete runnable example, check out this demo on Google Colab.
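
In the snippet above, MODEL_URI is a standard MLflow model URI. Here's a minimal sketch of producing one, assuming a scikit-learn model logged with MLflow's tracking API:

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small example model.
X, y = load_iris(return_X_y=True)
sk_model = LogisticRegression(max_iter=200).fit(X, y)

# Log it to an MLflow run; the run ID yields a model URI.
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(sk_model, artifact_path="model")

MODEL_URI = f"runs:/{run.info.run_id}/model"

From there, the deploy call above works unchanged.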

Baseten uses MLflow's pyfunc module to load the model and packages it via Truss. To learn more about packaging MLflow models for deployment, consult the Truss documentation on MLflow.

Nov 2, 2022

What if instead of painstakingly configuring Stable Diffusion to run locally or paying for expensive cloud GPUs, you could deploy it in a couple of clicks? And better still, what if it were instantly available as an authenticated API?

Baseten has added Stable Diffusion to our model library so you can do exactly that. Simply deploy the pre-trained model on your Baseten account, then use the starter app or built-in API to invoke the model.

Stable Diffusion model page

Deploy Stable Diffusion today and build awesome tools for generating everything from avatars to Zoom backgrounds.

Oct 26, 2022

Explore models with guidance

Often, the hardest part of a project is getting started. And when you’re getting started with an unfamiliar model, there are a few things you want to do: try it on a variety of inputs, parse its output to a usable form, and tweak its configuration to meet your needs.

A screenshot of the Whisper model's new README

Baseten’s library of models now features comprehensive, updated READMEs for many of our most popular models, with more coming soon.

Load Baseten up to ten times faster

Baseten power users are filling their workspaces with powerful models and dynamic apps. And we found that as the number and size of deployed systems grew on an account, load times shot way up. So we refactored the user interface to load much faster.

But saying “the website is way faster” is hardly useful information. Here’s a table showing how much loading time is saved:

Workspace size     Avg. load time (before)    Avg. load time (after)
5 applications     2.4 sec                    0.40 sec
15 applications    6.3 sec                    0.47 sec
25 applications    11.1 sec                   0.59 sec

Saving time on your MLOps isn’t just about removing clunky hours-long deploy processes. We also care about saving you seconds at the margin.

Oct 18, 2022

We added Whisper, a best-in-class speech-to-text model, to our library of pre-trained models. That means you can deploy Whisper instantly on your Baseten account and build applications powered by the most sophisticated transcription model available.

A screenshot of the Whisper starter app

You can deploy Whisper from its model page in the Baseten app. Just sign in or create an account and click “Deploy.” The model and associated starter app will be added to your workspace instantly. Or, try the model first with our public demo.

Review improved model logs

In a comprehensive overhaul, we made model logs ten times shorter but way more useful. Here’s what we changed:

  • Build logs are now separated into steps for easier skimming

  • Model deployment logs are surfaced just like build logs

  • Model OOMs are now reported

  • Many extraneous log statements have been deleted

OOM logging is a particularly important improvement. An OOM, or out-of-memory error, is a special lifecycle event that we monitor for on Kubernetes. This error means that the model is too big for the infrastructure provisioned for it. Existing logging solutions don’t capture these errors, resulting in frustrating debugging sessions, so we built a special listener to let you know about OOMs right away.

A screenshot showing an OOM error in entries 4 and 5

Oct 12, 2022

In the view builder, you can now select multiple components at the same time and move them as a single block. You can also bulk duplicate and bulk delete multiple selected components.

Selecting multiple components, duplicating them, then deleting them

To select multiple components, either use Command-click on each component you wish to select, or drag your cursor over an area of the screen to select everything within its path.

🎃 The pumpkin patch

This week’s small-but-mighty changes to bring more magic to your models!

Set image empty state: You can now specify custom text to appear in an image component when no image is present.

The new empty state placeholder field

Remove canvas frame: You can hide the canvas frame in your application to give the published views a consistent all-white background.

The same app with and without a canvas border

Oct 5, 2022

Baseten supports deploying multiple versions of the same model, so you can iterate, test, and experiment to your heart’s content. Now, you can either deactivate or delete model versions when they are no longer useful.

A screenshot showing the options available on a deployed model version

A deactivated model version cannot be invoked or used in applications, but can be re-activated. A deleted model version is permanently gone. Either way, neither deactivated nor deleted model versions count against your deployed model limit.

Record audio with new microphone component

When building apps around audio-processing models like Whisper and wav2vec, you’ve been able to let users upload audio clips with the file upload component. With the new microphone component, you can instead let users capture audio directly in the app.

A screenshot showing the new microphone component

🎃 The pumpkin patch

This week’s small-but-mighty changes to bring more magic to your models!

Share state between views: If you’re building a complex application on Baseten with multiple views, you might want to share state between those views. This is useful for building interactions like clicking on a row in a table and going to a detail page pre-populated with that row’s information.

New account profile and API key pages: Go to Settings in the main sidebar and you’ll find Account settings broken out from Workspace settings for easier access.

Set object fit in image components: Select from five options to set the object fit that works best in the context of your application.

A gif of the five different image fit options