9.1 Implementing Meta’s Llama
9.1.1 Tokenization and configuration
9.1.2 Dataset, data loading, evaluation, and generation
9.1.3 Network architecture
9.2 Simple Llama
9.3 Making it better
9.3.1 Quantization
9.3.2 LoRA
9.3.3 Fully sharded data parallel–quantized LoRA
9.4 Deploy to a Hugging Face Hub Space
Summary