Pinning ML model revisions for compatibility and security

TL;DR

When you depend on an open source package, like transformers from PyPI, the best practice is to pin the version you use to ensure that breaking changes or security vulnerabilities aren’t introduced into your codebase. You can do the same for model weights and associated code by pinning a model revision.

Pinning versions for package management

Working with open source models is much like working with open source packages on PyPI, npm, and other repositories. You get massive benefits from building on the collective work of the entire industry, but must responsibly address potential security threats and compatibility issues.

Pinning versions is common practice when using a package manager to prevent two failure modes:

  • Backwards-incompatible changes, where the package you rely on makes an update which breaks something in your application.

  • New security vulnerabilities, either due to bugs or malicious code introduced into the package.

Using pinned versions protects against these issues by giving you the chance to review any changes to your dependencies before updating.

When packaging a model with Truss, we strongly recommend pinning versions of required packages in your config.yaml to prevent unexpected breaking changes. Here’s an example from jina-embeddings-v2, a text embedding model:

requirements:
- accelerate==0.22.0
- sentencepiece==0.1.99
- torch==2.0.1
- transformers==4.32.1

Pinning revisions for ML models

Similarly, you can pin a revision for an ML model when using the transformers library to load an open source model.

Let’s say you’re loading a public model from Hugging Face. You can use the revision parameter when loading the model:

from transformers import AutoModel

def load(self):
    # Pinning the revision to an exact commit hash prevents malicious code
    # from being pulled in through trust_remote_code if the repository changes
    self._model = AutoModel.from_pretrained(
        "jinaai/jina-embeddings-v2-base-en",
        revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8",
        trust_remote_code=True,
    )
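
If the model ships other assets, such as a tokenizer, you can pin those to the same commit. Here’s a minimal sketch; the revision parameter is accepted by any from_pretrained call in transformers:

from transformers import AutoTokenizer

# Pin the tokenizer to the same commit hash as the model weights
tokenizer = AutoTokenizer.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en",
    revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8",
)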

As the model gets updated by its maintainers, you can review the changes and then update the pinned revision. Watching for updates is extra work, but it protects you against unexpected changes and keeps you actively monitoring a potential attack vector.
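
To find or verify a commit hash, you can check the model’s commit history on Hugging Face, or list commits programmatically. Here’s a small sketch using the huggingface_hub client (assuming a recent version that provides list_repo_commits):

from huggingface_hub import HfApi

# List the commit history of the model repository to pick a revision to pin
for commit in HfApi().list_repo_commits("jinaai/jina-embeddings-v2-base-en"):
    print(commit.commit_id, commit.title)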

Maintaining a private copy of a model

If you want to take things a step further when using an open source model, you can copy the model into a private repository on Hugging Face and load it from there, or download the necessary model files and host them on a file store like AWS S3.
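
As a rough sketch of the second option, you could use snapshot_download from huggingface_hub to fetch the files at a pinned revision and then upload them to your own bucket with boto3 (the bucket name below is a placeholder):

import os

import boto3
from huggingface_hub import snapshot_download

# Download the model files for a pinned revision to a local cache directory
local_dir = snapshot_download(
    "jinaai/jina-embeddings-v2-base-en",
    revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8",
)

# Upload each downloaded file to a private S3 bucket (placeholder bucket name)
s3 = boto3.client("s3")
for root, _, files in os.walk(local_dir):
    for name in files:
        path = os.path.join(root, name)
        s3.upload_file(path, "my-private-model-bucket", os.path.relpath(path, local_dir))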

Maintaining a private copy of a model has the same benefits as forking an open source repository:

  • Get the same protections as pinning a model revision.

  • Ensure your application isn’t affected if the model you’re depending on is moved or deleted.

  • Apply your own updates to the model.

If you make your own private copy of a model on Hugging Face, follow this example to deploy it to Baseten.
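
As a rough sketch of what the model code can look like in that case, you can store a Hugging Face access token as a Truss secret and pass it when loading the model. The repository name and secret name below are placeholders, and the token parameter assumes a recent transformers version; see the linked example for the exact setup:

from transformers import AutoModel


class Model:
    def __init__(self, **kwargs):
        # Truss passes secrets declared in config.yaml into the model class
        self._secrets = kwargs["secrets"]
        self._model = None

    def load(self):
        self._model = AutoModel.from_pretrained(
            "your-org/jina-embeddings-v2-base-en",  # placeholder private repo
            revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8",
            token=self._secrets["hf_access_token"],  # placeholder secret name
            trust_remote_code=True,
        )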

Choosing whether to pin a model revision

While we always recommend pinning package versions for your Python requirements, it’s not always necessary to pin model revisions. Like many things in software development, it depends on your use case.

There are pros and cons to pinning a model revision: you get the chance to review every update before adopting it and protect yourself from unexpected or malicious changes, but you take on the work of watching the upstream model and updating the pinned revision yourself.

Based on these factors, we recommend pinning a model revision or copying the model to a private repository when:

  • You’re using the trust_remote_code parameter in transformers.

  • You’re evaluating a model’s performance and want to ensure the integrity of your data.

  • Your application’s security requirements mandate pinning model revisions.

For more on pinning model revisions, see the Transformers documentation on the revision parameter and the Truss example for working with private models.