Changelog | Page 4

See our latest feature releases, product improvements and bug fixes

May 26, 2023

Deploy foundation models in two clicks with restocked model library

Deploy the latest open-source models like WizardLM, Bark, Whisper, Stable Diffusion, and more from the refreshed and restocked model library. Previously, model library models deployed to your...

May 2, 2023

Model usage dashboard and invoice history

The billing page in your workspace settings has two new capabilities: a model usage dashboard and invoice history panel. Your model usage dashboard breaks down the billable time and total cost of...

Apr 28, 2023

Usage-based pricing with free credits and no platform fee for Startup plan workspaces

Baseten has transitioned to purely usage-based pricing for all workspaces not on our Enterprise plan. There is no monthly or annual platform fee for workspaces on the default Startup plan. Plus,...

Mar 15, 2023

Remove default inputs dictionary in Truss and Baseten client

We paid down some technical debt and, in doing so, removed a papercut from the Baseten and Truss developer experience. It used to be that all model invocations had to be formatted as: {...
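The summary above is cut off mid-example, so here is a minimal sketch of the before-and-after request shape. The endpoint URL, API key, and input values are illustrative placeholders rather than details taken from the entry, and the exact payload depends on what your model's predict function expects.

```python
import requests

# Illustrative placeholders: substitute your own model ID and API key.
url = "https://app.baseten.co/models/MODEL_ID/predict"
headers = {"Authorization": "Api-Key YOUR_API_KEY"}

# Before this change, every invocation had to wrap its payload in a default
# "inputs" dictionary, regardless of what the model actually expected:
old_payload = {"inputs": [[5.1, 3.5, 1.4, 0.2]]}

# After the change, you send the input in whatever shape your model's
# predict function expects, with no mandatory wrapper:
new_payload = [[5.1, 3.5, 1.4, 0.2]]

response = requests.post(url, headers=headers, json=new_payload)
print(response.json())
```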

Feb 3, 2023

Deploy models with live reload workflow

Real-world model deployment workflows can be a bit messy. The latest version of the Baseten Python client, version 0.3.0, gives you a cleaner path to production by introducing a live reload workflow...

Jan 24, 2023

Fix issues faster with improved logs UI

The only thing more frustrating than your code not working is not knowing why it isn't working. To make it easier to root-cause issues during model deployment and invocation, we separated build logs...

Jan 13, 2023

Truss 0.2.0: Improved developer experience

Truss is an open-source library for packaging and serving machine learning models. In the latest minor version release, 0.2.0, we simplified the names of several core functions in Truss to create a...
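As a rough sketch of what the simplified naming looks like in practice, assuming the renamed entry points are truss.create and truss.load (the entry itself doesn't list the names) and using a toy scikit-learn model purely for illustration:

```python
import truss
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a toy model to package (purely illustrative).
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

# Package the in-memory model as a Truss using the simplified entry point
# (assumed here to be truss.create, replacing an older, longer name).
tr = truss.create(model, target_directory="./iris_truss")

# Load an existing Truss from disk (assumed here to be truss.load).
tr = truss.load("./iris_truss")
```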

Jan 5, 2023

Manage your models with updated UI

The new user interface for models centers on model versions. Previously in their own tab, model versions now have a dedicated sidebar to help you navigate different deployments of your model and review...

Dec 23, 2022

Configure your model resources

By default, models deployed on Baseten run on a single instance with 1 vCPU and 2 GiB of RAM. This instance size is sufficient for some models and workloads, but demanding models and high-traffic...

Dec 13, 2022

Enable live reload with draft models

Slow dev loops break flow state and make for a frustrating experience. And for data scientists, slow dev loops make all but the most essential deployment workflows too expensive and time-consuming to...