Generative AI with LLMs (Week 2)
Course Notes and Slides from DeepLearning.AI’s Generative AI with LLMs course.
Fine Tuning
In-context learning may not work well for smaller models, and the examples can take up the entire context window. LLM fine-tuning can help here! It involves updating the model weights on task-specific examples.
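To make "updating model weights" concrete, here is a deliberately tiny, illustrative sketch (not the course's code): fine-tuning is just further gradient descent on new task data, shown here with a one-parameter model `y = w * x` and a single made-up task example.

```python
# Toy illustration of fine-tuning as gradient updates on task data.
# All values here are assumptions for the sketch, not from the course.
w = 0.5            # "pretrained" weight
x, y = 2.0, 3.0    # one task-specific training example
lr = 0.1           # learning rate

for _ in range(20):
    pred = w * x
    grad = 2 * (pred - y) * x   # gradient of squared error w.r.t. w
    w -= lr * grad              # weight update

print(round(w, 3))  # converges toward y / x = 1.5
```

A real LLM applies the same update rule across billions of weights, which is exactly why full fine-tuning is expensive and why PEFT methods (below) matter.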
Catastrophic Forgetting
Catastrophic Forgetting: fine-tuning can increase model performance on the specific task, but reduce the model's ability to perform other tasks.
How to avoid:
- Fine tune on multiple tasks
- Parameter Efficient Fine Tuning
Model Evaluation
Common metrics:
ROUGE - used for text summarization
BLEU - used for text translation
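A minimal sketch of the idea behind BLEU (full BLEU also uses higher-order n-grams and a brevity penalty): clipped unigram precision, i.e. what fraction of candidate tokens also appear in the reference. The sentences are made-up examples.

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Simplified BLEU component: clipped unigram precision.

    Each candidate token counts at most as many times as it
    occurs in the reference ("clipping").
    """
    cand_counts = Counter(candidate.split())
    ref_counts = Counter(reference.split())
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return clipped / sum(cand_counts.values())

score = unigram_precision("it is cold outside today", "it is very cold outside")
print(score)  # 4 of 5 candidate tokens match -> 0.8
```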
LCS = longest common subsequence (e.g. "cold outside"); ROUGE-L scores a candidate summary by the LCS it shares with the reference.
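A short sketch of ROUGE-L using the LCS idea, with the classic "cold outside" style example (sentences are illustrative, not from a dataset): LCS length is computed with dynamic programming, then turned into recall, precision, and F1.

```python
def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists (DP table)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

reference = "it is cold outside".split()
candidate = "it is very cold outside".split()

lcs = lcs_length(reference, candidate)   # 4: "it is cold outside"
recall = lcs / len(reference)            # 1.0
precision = lcs / len(candidate)         # 0.8
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.889
```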
Parameter Efficient Fine Tuning (PEFT)
3 Main PEFT Methods
- Selective
- Reparameterization
- Additive
Low Rank Adaptation (LoRA)
Freeze the original model weights and insert a small number of new low-rank weight matrices; only these new weights are trained.
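A minimal NumPy sketch of the LoRA forward pass (dimensions and values are illustrative): the frozen weight `W` is augmented with a low-rank update `B @ A`, where `B` starts at zero so the adapted model initially matches the pretrained one. Only `A` and `B` would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512   # hidden dimension (assumed for the sketch)
r = 8     # LoRA rank, r << d

W = rng.standard_normal((d, d))           # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01    # trainable, shape (r, d)
B = np.zeros((d, r))                      # trainable, zero-initialised

x = rng.standard_normal(d)
h = W @ x + B @ (A @ x)   # LoRA forward: original output + low-rank update

full_params = d * d
lora_params = d * r + r * d
print(lora_params / full_params)  # 0.03125: ~3% of the weights are trained
```

Because `B` is zero-initialised, `h` equals `W @ x` before any training, which is the standard LoRA initialisation trick.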
Prompt Tuning
Prompt tuning ≠ prompt engineering
Prepends trainable vectors ("soft prompts") to the input embeddings; the model weights stay frozen and only these vectors are trained.
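A toy NumPy sketch of the prepending step (embedding table, sizes, and token IDs are all made up for illustration): learnable virtual-token vectors are concatenated in front of the ordinary token embeddings before the sequence enters the frozen model.

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim = 16
vocab_embed = rng.standard_normal((100, embed_dim))  # toy frozen embedding table

token_ids = [5, 42, 7]                    # tokenised input prompt (assumed)
token_embeds = vocab_embed[token_ids]     # shape (3, 16)

n_virtual = 4
soft_prompt = np.zeros((n_virtual, embed_dim))  # trainable virtual tokens

# The model consumes the soft prompt followed by the real token embeddings.
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)
print(model_input.shape)  # (7, 16)
```

During training, gradients flow only into `soft_prompt`; the embedding table and the rest of the model stay frozen, which is what distinguishes prompt tuning from prompt engineering (hand-written text prompts).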