Test and iterate on your model servers with scaffolds
Today, we’re excited to share more about scaffolds, the technology that underpins the model serving experience on BaseTen. Put simply, a BaseTen scaffold is a context for building a container that serves predictions from a model. Scaffolds are powered by familiar technologies: Docker, KServe, and Python. We’ve added some light opinions and stitched them together.
Scaffolds enable functionality like complex pre-processing of model inputs and local deployment from the client, so you can quickly test and iterate on your model servers before deploying on BaseTen. You can also use scaffolds independently of BaseTen if you want to build your own container and deploy it to your own server. And this is just a start: we’re building toward an increasingly robust scaffolds ecosystem that will let you do things like build observability pipelines and dynamically define and auto-document model server interfaces.
Here, we’ll walk through a simple example to show how to set up a scaffold for a project. Then, we’ll use a second example to show the power of scaffolds for local iteration on a custom model.
Example 1: Setting up scaffolds
To show how scaffolds work, we’ll start with a simple model. We love scikit-learn here, so let’s use a random forest classifier and the classic iris dataset.
First, we train the model:
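Using nothing beyond scikit-learn itself, that step looks something like this:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load the classic iris dataset and fit a random forest classifier
data = load_iris()
rfc = RandomForestClassifier()
rfc.fit(data["data"], data["target"])
```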
Once the model is trained, it is trivial to create a scikit-learn scaffold around the model using the BaseTen client package:
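A sketch of that call is below; the function name and arguments are our assumptions about the BaseTen client’s interface, so treat this as illustrative and check the client docs for the current API:

```python
import baseten

# Hypothetical client call: wrap the trained model in a scikit-learn
# scaffold and write the scaffold directory to local disk
scaffold = baseten.build_scaffold(rfc, target_directory="scaffold_rfc")
```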
Et voilà, we now have a scaffold object in Python memory and on our local disk in the scaffold_rfc/ directory.
Next, we could choose to deploy this scaffold onto the BaseTen infrastructure and start quickly building a user-facing application powered by our model:
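A deploy could then be a one-liner along these lines (again, the method name is a hypothetical stand-in for the real client call):

```python
# Hypothetical: push the scaffold to BaseTen and get a hosted model endpoint
scaffold.deploy(model_name="iris-rfc")
```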
However, we don’t have to deploy the scaffold onto the BaseTen infrastructure to interact with our model’s inference capabilities. We can also call inference on the model directly in the scaffold like this:
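For example, assuming the scaffold object exposes a predict method (a hypothetical name), local inference might look like:

```python
# Hypothetical: run inference directly against the in-memory scaffold,
# passing a single iris feature vector
result = scaffold.predict([[5.1, 3.5, 1.4, 0.2]])
print(result)
```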
From the INFO statements printed at instantiation, we can build the model server’s container locally and call it over HTTP with JSON. Here’s how we build the container image:
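With Docker installed, building the image can look like this; the image tag is our choice, and the build context is the scaffold directory created above:

```bash
# Build the model server image from the generated scaffold directory
docker build -t scaffold-rfc ./scaffold_rfc
```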
Next, we can run the container:
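For example:

```bash
# Run the model server locally; the port mapping is an assumption,
# so use whatever the scaffold's INFO output tells you to map
docker run -p 8080:8080 scaffold-rfc
```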
We can now interact with the model through the containerized web server, rather than just calling inference on it directly. This lets us test our request and response structures:
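Since scaffolds build on KServe, a request against its v1 inference protocol might look like the following; the model name and port are assumptions based on the INFO output:

```bash
# POST a JSON payload to the containerized model server
curl -X POST http://localhost:8080/v1/models/model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[5.1, 3.5, 1.4, 0.2]]}'
```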
Example 2: Using scaffolds for local iteration on a custom model
A big reason to use scaffolds is that they make it easier to find and debug problems in your model server before it gets deployed onto BaseTen’s infrastructure. For high-touch work like integrations, it’s especially helpful to be able to fix issues in the same environment where you trained your model. Let’s walk through how this works with a custom model: while scaffolds for simple scikit-learn, Keras, and PyTorch models “just work”, the custom scaffold provides more implementation flexibility.
Here we define a custom model that we’ll use in a scaffold:
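Here’s a sketch of that model, reconstructed from the walkthrough below; the class shape and method names a custom scaffold expects are assumptions:

```python
import json


class CustomModel:
    """A toy identity model: returns whatever input it can parse."""

    def load(self):
        # Nothing to load for this toy model
        pass

    def predict(self, inputs):
        predictions = []
        for input in inputs:
            try:
                # Parse each raw input before echoing it back
                predictions.append(json.loads(input))
            except (TypeError, json.JSONDecodeError):
                return {
                    "error": "Could not parse input correctly. "
                    "Please ensure that input is formatted correctly."
                }
        return predictions
```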
The model here is very simple: it’s just an identity function. The code is designed to pass through whatever input the model receives, as long as it can parse it. Nothing looks outwardly wrong with this code, so let’s test it.
First, we create a custom BaseTen scaffold:
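As before, the exact client call is an assumption; a sketch might look like:

```python
import baseten

# Hypothetical client call: build a custom scaffold around our model class
scaffold = baseten.build_custom_scaffold(
    CustomModel(), target_directory="scaffold_custom"
)
```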
Next, we build and run the scaffold locally. Running the scaffold should look something like this:
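With the same caveats about names and ports as in the first example:

```bash
# Build and run the custom scaffold's container locally
docker build -t scaffold-custom ./scaffold_custom
docker run -p 8080:8080 scaffold-custom
```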
If we call the endpoint provided by this model, it should return the same JSON we call it with. Let’s give it a try!
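For example, we can try two different payload shapes (the endpoint path follows KServe’s v1 protocol; the model name and port are assumptions):

```bash
# Case 1: a list of numbers
curl -X POST http://localhost:8080/v1/models/model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1, 2, 3]]}'

# Case 2: a JSON object
curl -X POST http://localhost:8080/v1/models/model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [{"a": 1, "b": 2}]}'
```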
In both cases, we see this response: {"error": "Could not parse input correctly. Please ensure that input is formatted correctly."}
Do you see the bug? It turns out we are calling json.loads on an object that the model server has already parsed from JSON into Python types. The JSON parsing has already been done for us.
If we change the custom server code from predictions.append(json.loads(input)) to predictions.append(input), the code path should work. Thanks to the scaffold, we can quickly make this change, run the new container, and test it locally:
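The fixed predict method, followed by the rebuild and retest (same illustrative names as above):

```python
# Inside CustomModel: the corrected predict method
def predict(self, inputs):
    predictions = []
    for input in inputs:
        # The server has already parsed the request JSON for us,
        # so we append the input directly
        predictions.append(input)
    return predictions
```

```bash
# Rebuild the image, run the updated container, and retest
docker build -t scaffold-custom ./scaffold_custom
docker run -p 8080:8080 scaffold-custom

curl -X POST http://localhost:8080/v1/models/model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1, 2, 3]]}'
```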
After making a change to our model and rebuilding the container, the request is successfully processed! We were able to fix a bug in the data formats our model server expects without wasting time waiting for deployment on BaseTen. Now, we can deploy our custom model on BaseTen with greater confidence.
This just scratches the surface of scaffold functionality. Scaffolds don’t have to live on our infrastructure; you can deploy them on your own server. The scaffold directory contains Dockerfiles you can edit, and more. Read our scaffolds documentation for additional information, and please reach out with any questions or ideas. We’d love to hear from you.