Google has recently released several open-source LLMs, including Flan-T5, a variant of T5 that generalises better and outperforms T5 on many NLP tasks. Flan-T5 is multilingual and uses instruction fine-tuning to improve the performance and usability of pretrained language models. This post shows how to fine-tune a Flan-T5-Base model on the SAMSum dataset (summaries of conversations in English) using Vertex AI.
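Before getting into the Vertex AI setup, here is a minimal, illustrative sketch of the two ingredients the fine-tuning job works with, assuming the Hugging Face `transformers` and `datasets` libraries and the public Hub IDs `google/flan-t5-base` and `samsum`. It only loads the model and dataset and runs a single instruction-style summarization prompt; it is not the post's actual training code.

```python
# Minimal sketch: load Flan-T5-Base and the SAMSum dataset and run one
# zero-shot summarization prompt. Assumes `transformers` and `datasets`
# are installed; IDs are the public Hugging Face Hub names.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# SAMSum: messenger-style conversations in English with human-written summaries.
dataset = load_dataset("samsum")

# Instruction-style prompt, as Flan-T5 expects for summarization tasks.
sample = dataset["train"][0]
prompt = "summarize: " + sample["dialogue"]
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```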