Changelog

See our latest feature releases, product improvements and bug fixes

Jul 9, 2024

Pin frequently-used models for easy access

By popular demand: you can now pin models and chains to the top of your workspace! To pin an item, click the ... menu on any model or chain and select Pin. You can pin up to 6 items for quick...

Jul 2, 2024

Export billing usage data

You can now export your model and billing usage data for in-depth analysis in your preferred tool. The exported CSV includes all the model usage data from the selected billing period. This includes:...
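Once the CSV is downloaded, a quick aggregation in a notebook might look like the sketch below. The file name and column names here are illustrative assumptions, since the exact export schema isn't shown in this excerpt.

```python
import pandas as pd

# Sketch only: the file name and column names below are assumptions about
# the exported usage CSV, not its exact schema.
usage = pd.read_csv("baseten_usage_export.csv")

# Example analysis: total cost per model for the selected billing period.
per_model_cost = usage.groupby("model_name")["cost"].sum()
print(per_model_cost.sort_values(ascending=False))
```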

Jun 11, 2024

Run model inference asynchronously on Baseten

We’re thrilled to announce that you can now run async inference on Baseten models! This unlocks some powerful inference use cases: Scalable processing: Schedule tens of thousands of inference...
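As a rough sketch of what kicking off an async request could look like: the async_predict route, payload fields, and model ID below are illustrative assumptions, not confirmed API details.

```python
import os
import requests

# Sketch only: the async_predict route, payload fields, and model ID are
# illustrative assumptions, not confirmed Baseten API details.
MODEL_ID = "abcd1234"  # hypothetical model ID
API_KEY = os.environ["BASETEN_API_KEY"]

resp = requests.post(
    f"https://model-{MODEL_ID}.api.baseten.co/production/async_predict",
    headers={"Authorization": f"Api-Key {API_KEY}"},
    json={
        # Same input shape as a synchronous request.
        "model_input": {"prompt": "Summarize this support ticket."},
        # Where results should be POSTed once inference finishes.
        "webhook_endpoint": "https://example.com/webhooks/baseten",
    },
    timeout=30,
)
resp.raise_for_status()

# The call returns immediately with an identifier for tracking the request,
# rather than blocking until inference completes.
print(resp.json())
```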

Apr 22, 2024

Refreshed model overview page with deployment statuses

We’ve revised the model overview page to give you more information about model deployments at a glance. Each model card now contains: The total number of deployments. Counts for deployment statuses:...

Apr 3, 2024

Improved log filtering

You can now filter logs through the main text input. Just start typing the filter you’re looking for, like level, and autocomplete options will appear. Currently, logs filter by: Log level: zoom in...

Mar 26, 2024

Permit inference on unhealthy models

A model enters an “unhealthy” state when the deployment is active but there are runtime errors such as downtime on an external dependency. We now permit inference requests to proceed even when a...

Mar 21, 2024

Improve performance and reduce cost with fractional H100 GPUs

Baseten now offers model inference on NVIDIA H100 MIG GPUs, available for all customers starting at $0.08250/minute. The H100 MIG family of instances runs on a fractional share of an H100 GPU using...

Mar 20, 2024

Manage models with the Baseten REST API

We’re excited to share that we’ve created a REST API for managing Baseten models! Unlock powerful use cases outside of the (albeit amazing) Baseten UI: interact with your models programmatically,...
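For example, listing your models programmatically might look roughly like this; the /v1/models route and response shape are assumptions for illustration rather than confirmed endpoints.

```python
import os
import requests

# Sketch only: the /v1/models route and the response fields are assumptions
# about the management REST API, not confirmed details.
API_KEY = os.environ["BASETEN_API_KEY"]

resp = requests.get(
    "https://api.baseten.co/v1/models",
    headers={"Authorization": f"Api-Key {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Print a quick inventory of model IDs and names.
for model in resp.json().get("models", []):
    print(model.get("id"), model.get("name"))
```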

Mar 7, 2024

Configure model hardware with new resource selector

Every deployment of an ML model requires certain hardware resources — usually a GPU plus CPU cores and RAM — to run inference. We’ve made it easier to navigate the wide variety of hardware options...

Feb 23, 2024

View detailed billing and usage metrics

You can now view a daily breakdown of your model usage and billing information to get more insight into usage and costs. Here are the key changes: A new graph displays daily costs, requests, and...
