You can now fine-tune FLAN-T5, an instruction-tuned text-to-text transfer transformer (T5) model developed by Google, on Blueprint!
Why fine-tune FLAN-T5?
- Have a top-notch LLM customized with your data — train it on your customer support tickets, user-generated content, or documentation
- Build a special-purpose generative app like a recipe generator or a chatbot that’s an expert on healthcare acronyms
- Save money versus using generic LLM APIs
We currently have the FLAN-T5 base checkpoint available on Blueprint, with large and XXL versions of the model coming very soon — get early access here.
Once your fine-tuning run is complete, your model is deployed on our serverless GPU platform—no custom infrastructure needed. You can also use Blueprint to build your own APIs using serverless functions.
Every new Blueprint user receives 4 hours of GPU credits, ensuring that you have plenty of time to explore FLAN-T5 before incurring any costs.
To help you get started with FLAN-T5 and Blueprint, we’ve compiled some of our favorite resources:
- Creating a dataset for FLAN-T5
- Fine-tuning config for FLAN-T5
- Google research overview of FLAN-T5
- FLAN-T5 HuggingFace docs
- Scaling Instruction-Finetuned Language Models
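The dataset resource above covers FLAN-T5's text-to-text format in depth. As a quick illustration of the idea, here is a minimal sketch in plain Python that turns raw support tickets into prompt/target pairs and writes them out as JSONL. The field names (`input`/`target`), the prompt template, and the ticket structure are all illustrative assumptions, not a required Blueprint schema:

```python
import json

# Hypothetical raw support tickets; substitute your own data source.
tickets = [
    {"question": "How do I reset my password?",
     "answer": "Go to Settings > Account and click 'Reset password'."},
    {"question": "Where can I download my invoice?",
     "answer": "Invoices are under Billing > History."},
]

def to_text_pair(ticket):
    """Format one ticket as an instruction-style text-to-text example."""
    return {
        "input": f"Answer the customer's question: {ticket['question']}",
        "target": ticket["answer"],
    }

pairs = [to_text_pair(t) for t in tickets]

# One JSON object per line (JSONL) is a common fine-tuning dataset format.
with open("finetune.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

Because FLAN-T5 is a text-to-text model, every task (Q&A, summarization, classification) reduces to this same shape: an instruction-bearing input string mapped to a target string.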
We can’t wait to see what you build. Sign up for Blueprint today, share what you make with us on Twitter, and join our Discord community to chat directly with the developers.
Fine-tune FLAN-T5 on Blueprint today!