In this video, Isaiah Bjorklund demonstrates how to fine-tune the Tiny LLaMA model on a custom dataset so that it can call functions and respond in JSON format. He begins by introducing the tools used, Unsloth AI and Google Colab, and outlines how the fine-tuning process will be set up. Isaiah then reviews the custom dataset, which contains function calls and their arguments in JSON format, and explains how to format the data correctly for training, emphasizing the importance of high-quality data and of avoiding overfitting the model.

From there, the video gives a step-by-step guide to setting up the Google Colab notebook, loading the Tiny LLaMA model, and downloading the custom dataset. Isaiah walks through configuring the trainer, setting the number of epochs, and adjusting the learning rate, then shows how to monitor training and test the fine-tuned model. The video concludes with a demonstration of the model's performance, highlighting areas for improvement and encouraging viewers to join the Discord community for further discussion.
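
The exact schema of the custom dataset is not reproduced here, but a record for function-calling fine-tuning typically pairs a user request with the JSON function call the model should learn to emit. Below is a minimal sketch in Python; the `get_weather` function, field names, and prompt template are illustrative assumptions, not taken from the video.

```python
import json

# Hypothetical training record: a user request paired with the JSON
# function call (name + arguments) the model should learn to produce.
record = {
    "instruction": "What's the weather like in Oslo right now?",
    "output": {
        "function": "get_weather",                      # illustrative function name
        "arguments": {"city": "Oslo", "unit": "celsius"},
    },
}

def format_example(rec: dict) -> str:
    """Render one record as a single training string.

    The instruction/response template below is an assumption; use whatever
    template the dataset actually defines.
    """
    return (
        "### Instruction:\n"
        f"{rec['instruction']}\n\n"
        "### Response:\n"
        f"{json.dumps(rec['output'])}"
    )

print(format_example(record))
```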
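
For the training step, the video configures a trainer, sets the number of epochs, and adjusts the learning rate. The sketch below shows roughly how that looks with Unsloth and TRL's `SFTTrainer`, assuming a 4-bit TinyLlama checkpoint, a dataset file with a pre-formatted `text` column, and placeholder hyperparameters rather than the exact values used in the video.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit TinyLlama base model through Unsloth (checkpoint name is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="function_calls.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,          # placeholder; tune to avoid overfitting
        learning_rate=2e-4,          # placeholder; the video adjusts this value
        logging_steps=1,             # log every step to monitor the training loss
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)

trainer.train()
```

After training, the fine-tuned model can be tested by generating from a held-out prompt and checking whether the response parses as valid JSON with the expected function name and arguments, which is roughly what the closing demonstration in the video covers.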

Isaiah Bjorklund
July 7, 2024
Tiny LLaMA Finetuning Guide
10:37