Until today, applications on your Baseten account shared a single Python environment. Now, you can install Python packages from PyPI or system packages like ffmpeg on an app-by-app basis. What’s more, draft and production versions of the same application also run in different environments.
This means that you can:
Install or upgrade a Python package without affecting applications in production
Run different versions of the same package in different applications
Publish and manage your code and dependencies in sync
Baseten’s application builder is designed for building apps that handle real production use cases, and this change gives you an even more flexible, robust developer experience.
🎃 The pumpkin patch
This week’s small-but-mighty changes to bring more magic to your models!
Use more keyboard shortcuts: Accelerate your workflows with a dozen new view builder keyboard shortcuts, listed here. My favorite: nudge components around the view with arrow keys.
Copy-and-paste improvements: Multiselect and copy-and-paste between views now work together, and pasting multiple components preserves their relative layout.
Baseten now supports MLflow models via Truss. MLflow is a popular library for model experimentation and model management with over ten million monthly downloads on PyPI. With MLflow, you can train a model in any framework (PyTorch, TensorFlow, XGBoost, etc.) and access features for tracking, packaging, and registering your model. And now, deploying to Baseten is a natural extension of MLflow-based workflows.
Deploying an MLflow model looks a bit like this:
import mlflow
import baseten

# Load any model logged with MLflow as a generic pyfunc model
model = mlflow.pyfunc.load_model(MODEL_URI)
# Deploy it to Baseten under a human-readable name
baseten.deploy(model, "MLflow model")
Baseten uses MLflow's pyfunc module to load the model and packages it via Truss. To learn more about packaging MLflow models for deployment, consult the Truss documentation on MLflow.
What if instead of painstakingly configuring Stable Diffusion to run locally or paying for expensive cloud GPUs, you could deploy it in a couple of clicks? And better still, it would be instantly available as an authenticated API?
Often, the hardest part of a project is getting started. And when you’re getting started with an unfamiliar model, there are a few things you want to do: try it on a variety of inputs, parse its output to a usable form, and tweak its configuration to meet your needs.
A screenshot of the Whisper model's new README
Baseten’s library of models now features comprehensive updated READMEs for many of our most popular models, with more coming soon.
Load Baseten up to ten times faster
Baseten power users are filling their workspaces with powerful models and dynamic apps. And we found that as the number and size of deployed systems grew on an account, load times shot way up. So we refactored the user interface to load much faster.
But saying “the website is way faster” is hardly useful information. Here’s a table showing how much loading time is saved:
| Workspace size | Avg. load time (before) | Avg. load time (after) |
| --- | --- | --- |
| 5 applications | 2.4 sec | 0.40 sec |
| 15 applications | 6.3 sec | 0.47 sec |
| 25 applications | 11.1 sec | 0.59 sec |
Saving time on your MLOps isn’t just about removing clunky hours-long deploy processes. We also care about saving you seconds at the margin.
We added Whisper, a best-in-class speech-to-text model, to our library of pre-trained models. That means you can deploy Whisper instantly on your Baseten account and build applications powered by the most sophisticated transcription model available.
A screenshot of the Whisper starter app
You can deploy Whisper from its model page in the Baseten app. Just sign in or create an account and click “Deploy.” The model and associated starter app will be added to your workspace instantly. Or, try the model first with our public demo.
Review improved model logs
In a comprehensive overhaul, we made model logs ten times shorter but way more useful. Here’s what we changed:
Build logs are now separated into steps for easier skimming
Model deployment logs are surfaced just like build logs
Model OOMs are now reported
Many extraneous log statements have been deleted
OOM logging is a particularly important improvement. An OOM, or out-of-memory error, is a special lifecycle event that we monitor for on Kubernetes. This error means that the model is too big for the infrastructure provisioned for it. Existing logging solutions don’t capture these errors, resulting in frustrating debugging sessions, so we built a special listener to let you know about OOMs right away.
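As a rough illustration of what detecting an OOMKill involves, here is a sketch that checks a Kubernetes pod status payload for containers terminated with the `OOMKilled` reason. The payload shape follows the standard pod status API; the helper function itself is hypothetical and not Baseten's actual listener.

```python
# Sketch: spot OOMKills in a Kubernetes pod status payload.
# The "containerStatuses" / "lastState" / "terminated" structure is the
# standard pod status shape; find_oom_kills is an illustrative helper.
from typing import Any, Dict, List

def find_oom_kills(pod_status: Dict[str, Any]) -> List[str]:
    """Return names of containers whose last termination was an OOMKill."""
    oom_containers = []
    for status in pod_status.get("containerStatuses", []):
        terminated = status.get("lastState", {}).get("terminated")
        if terminated and terminated.get("reason") == "OOMKilled":
            oom_containers.append(status.get("name", "<unknown>"))
    return oom_containers

# Example payload, trimmed to the relevant fields
status = {
    "containerStatuses": [
        {
            "name": "model",
            "lastState": {"terminated": {"reason": "OOMKilled", "exitCode": 137}},
        }
    ]
}
print(find_oom_kills(status))  # ['model']
```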
A screenshot showing an OOM error in entries 4 and 5
In the view builder, you can now select multiple components at the same time and move them as a single block. You can also bulk duplicate and bulk delete multiple selected components.
Selecting multiple components, duplicating them, then deleting them
To select multiple components, either use Command-click on each component you wish to select, or drag your cursor over an area of the screen to select everything within its path.
🎃 The pumpkin patch
This week’s small-but-mighty changes to bring more magic to your models!
Set image empty state: You can now specify custom text to appear in an image component when no image is present.
The new empty state placeholder field
Remove canvas frame: You can hide the canvas frame in your application to give the published views a consistent all-white background.
Baseten supports deploying multiple versions of the same model, so you can iterate, test, and experiment to your heart’s content. Now, you can either deactivate or delete model versions when they are no longer useful.
A screenshot showing the options available on a deployed model version
A deactivated model version cannot be invoked or used in applications, but can be re-activated. A deleted model version is permanently gone. Either way, neither deactivated nor deleted model versions count against your deployed model limit.
Record audio with new microphone component
When building UIs for audio models like Whisper and wav2vec, you’ve been able to let users upload audio clips with the file upload component. With the new microphone component, you can instead let users capture audio directly in the app.
A screenshot showing the new microphone component
🎃 The pumpkin patch
This week’s small-but-mighty changes to bring more magic to your models!
Share state between views: If you’re building a complex application on Baseten with multiple views, you might want to share state between those views. This is useful for building interactions like clicking on a row in a table and going to a detail page pre-populated with that row’s information.
New account profile and API key pages: Go to Settings in the main sidebar and you’ll find Account settings broken out from Workspace settings for easier access
Set object fit in image components: Select from five options to set the object fit that works best in the context of your application.
As a data scientist, you can do a lot with Python: train models, build servers, write scripts. Now, in the Baseten app builder, you can do something special with Python: build dynamic front-end web interactions that run in the browser.
The Python code in this screenshot runs directly on the end user's browser
JavaScript’s claim to fame is that it is the language of the web browser. But building a dynamic front-end experience shouldn’t require learning a new programming language. So we now support Python as the default way to write front-end functions in the view builder. JavaScript is still supported as well.
nums = [1, 2, 3, 4, 5]
return [n for n in nums if n % 2 == 0]
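Outside the view builder, that snippet behaves like the body of an ordinary Python function. A sketch of the equivalent standalone code (the wrapper function name here is hypothetical; the view builder wraps the body for you):

```python
# Equivalent standalone form of the view-builder snippet above
def even_numbers():
    nums = [1, 2, 3, 4, 5]
    # Keep only the even numbers
    return [n for n in nums if n % 2 == 0]

print(even_numbers())  # [2, 4]
```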
Python in the view builder isn’t just for basic operations. You can import many packages, including all Python built-in packages and popular data science libraries like numpy. Unfortunately, due to limitations in the technology powering Python in the web browser, you can’t install your own packages like you can in the back end.
A screenshot of an app built with Python in the view builder
To help you get started with this exciting new capability, we prepared a tutorial on building a stateful web app with Python in the view builder. Follow along to build an auto loan payoff calculator in fifteen minutes using drag and drop components, state, and a Python function. Read the tutorial here.
By default, when you deploy a model to Baseten, like in this XGBoost classifier example, the deployed model expects to receive a dictionary with the key "inputs" mapped to a list of input values, and returns a dictionary with the key "predictions" mapped to a list of model results.
Until now, that default behavior has not been changeable. All models deployed to Baseten had to follow that spec. However, this interface was too inflexible, so as of the most recent version of the Baseten Python package (0.2.7), you can set your own interface for your models.
Setting your model interface
Baseten uses Truss under the hood for model deployment. You can customize your model’s interface by editing the predict function in models/model.py in Truss. The auto-generated predict function for the aforementioned XGBoost example looks like this:
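The generated code itself was not captured here, but based on the default 1.0 spec described above, a sketch of what that auto-generated `predict` might look like (the exact generated code may differ):

```python
# Sketch of an auto-generated models/model.py predict (Truss, 1.0 spec).
# Reconstructed for illustration; the real generated code may differ.
from typing import Any, Dict, List

class Model:
    def __init__(self, model: Any = None):
        # In a real Truss, load() deserializes the packaged model
        # into self._model before predict is ever called.
        self._model = model

    def predict(self, request: Dict) -> Dict[str, List]:
        # 1.0 spec: read inputs from the "inputs" key...
        inputs = request["inputs"]
        predictions = self._model.predict(inputs)
        # ...and return results under the "predictions" key
        return {"predictions": list(predictions)}
```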
Modify this function to parse whatever request and response you want, and remember that Truss also supports pre- and post-processing functions that can further modify input and output when more complicated parsing is needed.
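For reference, the pre- and post-processing hooks live alongside `predict` in `models/model.py`. A sketch with illustrative bodies (the hook names follow Truss; the transformations shown here are hypothetical examples):

```python
# Sketch of Truss's pre-/post-processing hooks in models/model.py.
# Hook names follow Truss; the example transformations are illustrative.
from typing import Any, Dict, List

class Model:
    def __init__(self, model: Any = None):
        self._model = model

    def preprocess(self, request: Dict) -> Dict:
        # e.g. coerce raw string values into the numeric rows the model expects
        request["inputs"] = [[float(x) for x in row] for row in request["inputs"]]
        return request

    def predict(self, request: Dict) -> Dict[str, List]:
        return {"predictions": list(self._model.predict(request["inputs"]))}

    def postprocess(self, response: Dict) -> Dict:
        # e.g. attach human-readable labels alongside raw class indices
        response["labels"] = [
            "positive" if p == 1 else "negative" for p in response["predictions"]
        ]
        return response
```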
This change does not modify the behavior of existing deployed models, nor the default behavior of future models. However, it does change how deployed models are invoked through the Baseten Python client.
Previously, the predict() function wrapped its argument in a dictionary under the "inputs" key. Now that the key is no longer required, predict() passes its argument as-is, which means you provide the entire model input yourself.
Before (1.0 spec):
baseten.predict([[0,0,0,0,0,0]])
Now (2.0 spec):
baseten.predict({"inputs": [[0,0,0,0,0,0]]})
The syntax for invoking a model via an API call has not changed.
However, to prevent breaking existing scripts using baseten.predict, a new spec_version flag is now included in Truss. This parameter is set to 2.0 by default for all new models, so they will use the new input spec, but all existing models will continue to function exactly as they have been on the 1.0 spec. You can upgrade your model to the latest interface spec by changing the flag in the config.yaml file in Truss.
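In config.yaml, that flag might look like the following (the key name follows the text above; exact formatting in the generated file may vary):

```yaml
# Truss config.yaml -- opt this model into the new input spec
spec_version: "2.0"
```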
Enjoy this unrestricted interface by installing the newest versions of Truss and the Baseten client.
pip install --upgrade baseten truss
Pass a Truss to baseten.deploy()
We also cleaned up the deployment experience in the Baseten Python client. You no longer have to use different functions to deploy an in-memory model versus a model packaged as a Truss. Whatever you have, toss it into baseten.deploy() and we’ll take care of it.
Plus, when you deploy an in-memory model, the deploy function now gives you insight into how that model is packaged, including the path to the auto-generated Truss folder. This is useful if you want to change something about your deployed model’s behavior (like updating the interface) after deployment: just edit the Truss and pass it into baseten.deploy() to ship a new version!
INFO Autogenerating Truss for your model, find more about Truss at https://truss.baseten.co/
INFO You can find your auto generated Truss at /root/.truss/models/lightgbm-MMVJD0
🎃 The pumpkin patch
This week’s small-but-mighty changes to bring more magic to your models!
Tab-complete bindings: In the view builder, you create a binding tag by typing “{{“. You can now close the binding by pressing “Tab” after selecting the data you want to use in the binding.
Model building banners: When you deploy a new version of a model to your Baseten workspace, you’ll see a banner on other versions of the model letting you know that the new version is building.
A screenshot of the model deployment banner showing a new version of the bert-base-uncased model building
Where did I leave that cutting-edge machine learning model? Never lose track of your application resources again with full workspace search. Just type Command-k (Ctrl-k on Windows) or click the search bar and you’ll be able to find the resource you're looking for: view, data table, code file, anything!
A screenshot of the resource search UI overlaying the view builder
Search results are segmented by scope: the application you're in and your Baseten workspace as a whole. Click on any resource and you’ll be taken directly to it!
🎃 The pumpkin patch
This week’s small-but-mighty changes to bring more magic to your models!
Copy data explorer values: Use values from the data explorer with a convenient copy button, just like worklet outputs. The copy button appears per value on hover and copies the information as JSON.
A screenshot showing the copy button on data explorer values