Create an API endpoint for a model

When you deploy a model on Baseten, you can invoke it via an API right away. This saves you the effort of building and deploying a model server in Flask or Django, making it faster to interact with models. But in production use cases, you often need to wrap model invocation in parsing and business logic for the model to be useful. In general, ML models enjoy big blobs of JSON; humans enjoy meaningful sentences. With worklets, you can build API endpoints with custom code around model invocations.
In this tutorial, we’ll build an API endpoint for the English to French translation model from our pre-trained model library. All you’ll need is an API key. You can call the endpoints from the command line, your microservice of choice, or Postman.
Model-based endpoint
On the models page, click “Deploy a model,” then select “Use a pre-trained model.” Scroll to “English to French translation,” select it, click “Next,” then click “Deploy model.” From the deployed model page, you can copy a curl command for invoking the model.

Here is a model input:
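The exact request format is shown on your deployed model page; the sketch below is only illustrative, and the "inputs" key is a placeholder rather than the documented schema. The idea is to send a batch of English sentences:

```json
{
  "inputs": [
    "It is a beautiful day outside.",
    "I would like to order a coffee."
  ]
}
```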
Which returns:
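The exact response shape also depends on the model; as an illustration, a translation model typically returns one translated string per input, in order:

```json
[
  { "translation_text": "C'est une belle journée dehors." },
  { "translation_text": "Je voudrais commander un café." }
]
```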
Worklet-based endpoint
The previous API endpoint returns valid translations, but you have to match them up with the inputs yourself. With a worklet, we can implement cleaner inputs and outputs.
In the starter application that we deployed earlier, visit the “Translate sentences” worklet. If you're curious, the code behind this worklet lives in main.py. Click “API Endpoint” to see what to copy to call the worklet endpoint.

Here is a worklet input:
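Treat this as a sketch as well: the top-level "worklet_input" key and the payload shape are assumptions, so copy the exact request from the worklet’s “API Endpoint” dialog. The caller sends a list of sentences:

```json
{
  "worklet_input": {
    "sentences": [
      "It is a beautiful day outside.",
      "I would like to order a coffee."
    ]
  }
}
```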
Which returns:
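Again illustrative (the "success" and "worklet_output" keys are assumptions), but it shows the payoff: the worklet can return each translation keyed by its original sentence, so the caller no longer has to line them up:

```json
{
  "success": true,
  "worklet_output": {
    "It is a beautiful day outside.": "C'est une belle journée dehors.",
    "I would like to order a coffee.": "Je voudrais commander un café."
  }
}
```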