Generative AI with LLMs (Week 2)


Tags: machine-learning, python
Published: October 6, 2023
Author: Chris Chan


Course Notes and Slides from DeepLearning.AI’s Generative AI with LLMs course.

Fine Tuning

In-context learning may not work for smaller models, and the examples used can take up the entire context window. LLM fine-tuning can help here: it updates the model's weights using labeled examples for the target task.
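
As a rough sketch of what full fine-tuning looks like in code (not part of the original notes), the snippet below uses Hugging Face Transformers; the flan-t5-base checkpoint, the samsum dialogue-summarization dataset, and the hyperparameters are illustrative assumptions.

```python
# A minimal full fine-tuning sketch with Hugging Face Transformers.
# The checkpoint, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "google/flan-t5-base"     # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("samsum")       # assumed dialogue-summarization dataset

def tokenize(batch):
    # Build instruction-style prompts and tokenize prompt/summary pairs.
    prompts = ["Summarize the following conversation.\n\n" + d + "\n\nSummary: "
               for d in batch["dialogue"]]
    inputs = tokenizer(prompts, max_length=512, truncation=True, padding="max_length")
    labels = tokenizer(batch["summary"], max_length=128, truncation=True, padding="max_length")
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./full-ft", learning_rate=1e-5,
                           num_train_epochs=1, per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()   # full fine-tuning: every model weight is updated
```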
 
 

Catastrophic Forgetting

Catastrophic forgetting: fine-tuning can improve model performance on the specific task but reduce the model's ability to perform other tasks.
 
How to avoid:
  • Fine-tune on multiple tasks at the same time
  • Use Parameter-Efficient Fine-Tuning (PEFT)
 

Model Evaluation

Common metrics:
ROUGE - used for text summarization
BLEU - used for text translation
 
LCS = longest common subsequence, used by ROUGE-L (e.g. “cold outside”)
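
To make the metrics concrete, here is a small self-contained sketch (not from the notes) that computes ROUGE-1 and LCS-based ROUGE-L F1 scores on the “cold outside” example; real evaluations would use a library such as rouge_score or Hugging Face evaluate.

```python
# A minimal, self-contained sketch of ROUGE-1 and ROUGE-L (LCS-based).
def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def f1(overlap, ref_len, cand_len):
    recall, precision = overlap / ref_len, overlap / cand_len
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

reference = "it is cold outside".split()        # human-written reference
candidate = "it is very cold outside".split()   # model-generated output

# ROUGE-1: clipped unigram overlap between candidate and reference.
unigram_overlap = sum(min(reference.count(w), candidate.count(w)) for w in set(reference))
print("ROUGE-1 F1:", f1(unigram_overlap, len(reference), len(candidate)))

# ROUGE-L: overlap measured by the longest common subsequence.
print("ROUGE-L F1:", f1(lcs_length(reference, candidate), len(reference), len(candidate)))
```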
 
 

Parameter Efficient Fine Tuning (PEFT)

 
3 Main PEFT Methods
  1. Selective (train only a subset of the original model parameters)
  2. Reparameterization (reparameterize weight updates with low-rank representations, e.g. LoRA)
  3. Additive (add new trainable components, e.g. adapters or soft prompts)

Low Rank Adaptation (LoRA)

LoRA inserts a small number of new low-rank weight matrices into the model, and only these are trained; the original weights stay frozen.
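
A minimal sketch of the LoRA idea in PyTorch is shown below (not the peft library's API); the layer sizes, rank r, and scaling factor alpha are assumptions for illustration, and production code would typically use a library such as Hugging Face peft.

```python
# A minimal sketch of LoRA: the base weight W is frozen and a low-rank
# update B @ A is trained instead. Sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        self.base.bias.requires_grad_(False)
        # Low-rank factors A (r x in) and B (out x r); only these are trained.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Output = frozen base projection + scaled low-rank update.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(512, 512, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # far fewer than the full layer
```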
 
 
 

Prompt Tuning

Prompt tuning ≠ prompt engineering.
Prompt tuning prepends trainable vectors (a soft prompt) to the input embeddings; only these vectors are updated during training while the model weights stay frozen.
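
Below is a minimal sketch of the soft-prompt idea in PyTorch (not from the notes); the number of virtual tokens and the embedding dimension are illustrative assumptions.

```python
# A minimal sketch of prompt tuning: a small matrix of trainable "soft prompt"
# vectors is prepended to the frozen model's input embeddings.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_virtual_tokens, embed_dim):
        super().__init__()
        # Only these vectors are trained; the LLM's weights stay frozen.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.01)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) from the frozen embedding layer.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)  # prepend along sequence axis

soft_prompt = SoftPrompt(num_virtual_tokens=20, embed_dim=768)
dummy_embeds = torch.randn(2, 10, 768)       # stand-in for token embeddings
print(soft_prompt(dummy_embeds).shape)       # torch.Size([2, 30, 768])
```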
 