Changelog
See our latest feature releases, product improvements, and bug fixes.
Introducing the Activity Feed
Get more visibility into activity across your workspace, models, and Chains with the new Activity Feed! Click the Activity tab to view a detailed list of changes, including who made them and when.
Dec 6, 2024: Deprecation Notice: --trusted
[No action needed]
Dec 5, 2024: Introducing Custom Servers: Deploy production-ready model servers from Docker images
Our new Custom Servers feature lets you deploy production-ready model servers directly from Docker images using just a YAML file.
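As a rough illustration, a config for such a deployment might look like the sketch below. The field names and values here are assumptions for the sake of the example (a vLLM-based OpenAI-compatible image), not a verbatim Baseten schema; see the Custom Servers docs for the exact keys.

```yaml
# Illustrative config for serving a custom Docker image.
# All keys and values are assumptions for this sketch.
base_image:
  image: vllm/vllm-openai:latest      # any Docker image exposing an HTTP server
docker_server:
  start_command: vllm serve facebook/opt-125m
  server_port: 8000                   # port the server listens on
  predict_endpoint: /v1/completions   # route that handles inference requests
  readiness_endpoint: /health         # used for readiness checks
  liveness_endpoint: /health          # used for liveness checks
resources:
  accelerator: A10G
```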
Nov 22, 2024: View surrounding events in logs
Debugging just got a little easier! Now, when filtering logs, you can view surrounding events by clicking an event's timestamp. The logs expand to show the events immediately before and after the one you selected.
Oct 31, 2024: Changes to instance type management
As part of ongoing improvements to Baseten’s infrastructure platform, we’re working on giving you more flexibility in how resources are provisioned for each model deployment.
Oct 15, 2024: Introducing canary deployments for seamless promotions
We're excited to introduce canary deployments on Baseten, designed to phase in new deployments with minimal impact on production latency and uptime.
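Conceptually, a canary rollout splits live traffic between the stable deployment and the new one, and the new deployment's share is increased only while its metrics stay healthy. A minimal sketch of that idea (the deployment names, fractions, and routing function are illustrative, not Baseten API calls):

```python
import random

def route_request(canary_fraction: float) -> str:
    """Route a request to the canary deployment with probability
    `canary_fraction`, otherwise to the stable production deployment.
    Names are illustrative, not real deployment identifiers."""
    return "canary" if random.random() < canary_fraction else "production"

# Phase the canary in gradually (e.g. 5% -> 25% -> 100%), checking
# latency and error metrics at each step before increasing the share.
counts = {"canary": 0, "production": 0}
for _ in range(10_000):
    counts[route_request(0.05)] += 1
```

With a 5% split, only a small slice of production traffic is exposed to the new deployment, so a regression affects few requests and the rollout can be halted before full promotion.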
Oct 11, 2024: Create custom environments for model release management
Today we’re excited to introduce custom environments to help manage your model’s release cycles. Environments provide a way to ensure quality, stability, and scalability before your model reaches end users.
Oct 9, 2024: New request metrics
We've introduced three new request metrics to enhance model monitoring. You can now view percentiles and averages for each of these metrics.
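As a refresher on reading these numbers: a pXX latency is the value below which XX% of requests fall. A small sketch with made-up latency samples, using a simple nearest-rank percentile (a simplification, not necessarily how the platform computes them):

```python
def percentile(values, p):
    """Nearest-rank percentile for p in [0, 100]."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

# Made-up latency samples (ms); note the two slow outliers.
latencies_ms = [12, 15, 14, 80, 13, 16, 250, 14, 15, 13]

p50 = percentile(latencies_ms, 50)       # typical request
p99 = percentile(latencies_ms, 99)       # tail latency
avg = sum(latencies_ms) / len(latencies_ms)
```

This also shows why percentiles matter alongside averages: a couple of slow outliers pull the average well above the median, while p99 surfaces the tail directly.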
Oct 2, 2024: Export model inference metrics to your favorite observability tools
You can now easily export model inference metrics to your favorite observability platforms, including Prometheus, Datadog, Grafana Cloud, and New Relic!
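For context, these observability tools commonly ingest metrics in the Prometheus text exposition format. The sketch below parses a few sample lines of that format; the metric names and values are invented for illustration, not actual Baseten metric names.

```python
# Sample metrics in Prometheus text exposition format.
# Metric names and values are made up for this sketch.
sample = """\
# HELP model_inference_latency_seconds Inference latency
# TYPE model_inference_latency_seconds summary
model_inference_latency_seconds{quantile="0.5"} 0.042
model_inference_latency_seconds{quantile="0.99"} 0.310
model_inference_requests_total 1027
"""

def parse_metrics(text: str) -> dict[str, float]:
    """Parse simple Prometheus-format lines into {metric: value},
    skipping comments (# HELP / # TYPE) and blank lines."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")  # value is the last field
        out[name] = float(value)
    return out

metrics = parse_metrics(sample)
```

Once metrics are exposed in this format, Prometheus can scrape them directly, and Datadog, Grafana Cloud, and New Relic agents can ingest them via their Prometheus/OpenMetrics integrations.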
Sep 26, 2024: Introducing Baseten Hybrid
Today we introduced early access to Baseten Hybrid, a multi-cloud solution that enables you to self-host inference with seamless flex capacity on Baseten Cloud.