Welcome to the PEFT (Parameter-Efficient Fine-Tuning) project repository! This project focuses on efficiently fine-tuning large language models using LoRA and Hugging Face's transformers library.
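For orientation, here is a minimal sketch of what LoRA fine-tuning with Hugging Face `transformers` and `peft` looks like. The model name, dataset, and hyperparameters are illustrative assumptions, not the exact settings used in any of the notebooks below.

```python
# Minimal LoRA fine-tuning sketch with transformers + peft.
# Model, dataset, and hyperparameters are placeholders for illustration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "bigscience/bloom-560m"   # small model chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters; only the adapter weights are trained.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["query_key_value"],   # module names depend on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # prints the small fraction of trainable weights

# Placeholder dataset; the notebooks use task-specific data instead.
dataset = load_dataset("imdb", split="train[:1%]")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)
dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")   # saves only the LoRA adapter weights
```

Because only the LoRA adapter weights are updated, the saved checkpoint is a few megabytes rather than a full copy of the base model.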
Fine Tuning Notebook Table 📑
| Notebook Title | Description | Colab Badge |
| --- | --- | --- |
| 1. Efficiently Train Large Language Models with LoRA and Hugging Face | Details and code for efficient training of large language models using LoRA and Hugging Face. | |
| 2. Fine-Tune Your Own Llama 2 Model in a Colab Notebook | Guide to fine-tuning your own Llama 2 model using Colab. | |
| 3. Guanaco Chatbot Demo with LLaMA-7B Model | Showcase of a chatbot demo powered by the LLaMA-7B model. | |
| 4. PEFT Finetune-Bloom-560m-tagger | Project details for PEFT Finetune-Bloom-560m-tagger. | |
| 5. Finetune_Meta_OPT-6-1b_Model_bnb_peft | Details and guide for fine-tuning the Meta OPT-6-1b model using bitsandbytes and PEFT. | |
| 6. Finetune Falcon-7b with BNB Self Supervised Training | Guide for fine-tuning Falcon-7b using bitsandbytes self-supervised training. | |
| 7. FineTune LLaMa2 with QLoRa | Guide to fine-tuning the Llama 2 7B pre-trained model with the PEFT library and the QLoRA method (a setup sketch follows the table). | |
| 8. Stable_Vicuna13B_8bit_in_Colab | Guide to fine-tuning Stable Vicuna 13B in 8-bit. | |
| 9. GPT-Neo-X-20B-bnb2bit_training | Guide on training the GPT-NeoX-20B model using bfloat16 precision. | |
| 10. MPT-Instruct-30B Model Training | MPT-Instruct-30B is a large language model from MosaicML, trained on a dataset of short-form instructions. It can follow instructions, answer questions, and generate text. | |
| 11. RLHF_Training_for_CustomDataset_for_AnyModel | How to train any LLM on a custom dataset using RLHF. | |
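As referenced in notebook 7, QLoRA combines a 4-bit quantized base model with LoRA adapters so that a 7B model can be fine-tuned on a single Colab GPU. The sketch below shows the general setup, assuming the `meta-llama/Llama-2-7b-hf` checkpoint and illustrative LoRA hyperparameters; the actual notebook may differ.

```python
# Hedged QLoRA setup sketch: 4-bit quantized base model + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"   # assumed checkpoint; requires license acceptance

# Quantize the frozen base model to 4-bit NF4 so it fits on a single Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare the quantized model for training and attach LoRA adapters;
# only the adapters (a few million parameters) are updated during fine-tuning.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # Llama attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here, training proceeds exactly as in the LoRA sketch above: pass the wrapped model to a `Trainer` (or a similar training loop) with a tokenized dataset.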