Community Support & Resources
Community Support
If you have feature requests, suggestions, or bugs to report, we encourage you to file an issue in the corresponding GitHub repository.
Resources
GitHub
- Getting to know Meta Llama 3 - Jupyter Notebook
- Meta Llama 3 Repository: Main Meta Llama 3 repository
- Meta Llama Recipes: Examples and fine-tuning
- Meta Code Llama Repository: Main Code Llama repository
- Meta Code Llama Recipes: Examples
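As a companion to the repositories above, here is a minimal sketch of generating text with Llama 3 via Hugging Face transformers. It assumes `transformers`, `torch`, and `accelerate` are installed and that you have been granted access to the gated `meta-llama/Meta-Llama-3-8B` checkpoint; the prompt is purely illustrative.

```python
# A minimal text-generation sketch, assuming `transformers`, `torch`, and
# `accelerate` are installed and you have access to the gated
# "meta-llama/Meta-Llama-3-8B" checkpoint on Hugging Face.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B",
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32
    device_map="auto",           # spread layers across available devices
)

print(generator("The llama is a domesticated", max_new_tokens=32)[0]["generated_text"])
```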
Developer Education
- Introducing Multimodal Llama 3.2 with Amit Sangani
- Knowledge distillation with 405B
- Prompt Engineering with Llama 2
- Open Approach to Trust & Safety: Llama Guard 3, Prompt Guard & More
- Understanding Llama Tokenizer
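To accompany the tokenizer resource above, this is a small sketch of inspecting the Llama 3 tokenizer through transformers' `AutoTokenizer`; the checkpoint name is the same gated model ID assumed earlier, and the sample string is arbitrary.

```python
# A minimal tokenizer sketch, assuming `transformers` is installed and you
# have access to the gated "meta-llama/Meta-Llama-3-8B" checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "Hello, Llama!"
ids = tokenizer.encode(text)  # integer ids from the ~128K-entry vocabulary
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))  # the underlying subword pieces
print(tokenizer.decode(ids, skip_special_tokens=True))  # round-trips to the input
```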
Performance & Latency
- Hamel’s Blog - Optimizing and testing latency for LLMs
- vLLM - How continuous batching enables 23x throughput in LLM inference while reducing p50 latency (a minimal vLLM sketch follows this list)
- Paper - Improving performance of compressed LLMs with prompt engineering
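For a hands-on feel for the continuous-batching article above, here is a minimal vLLM sketch. vLLM batches in-flight requests continuously on its own, so submitting several prompts to one `generate()` call is enough to benefit; the gated Instruct checkpoint name and the prompts are assumptions for illustration.

```python
# A minimal vLLM sketch, assuming `vllm` is installed and you have access to
# the gated "meta-llama/Meta-Llama-3-8B-Instruct" checkpoint. vLLM schedules
# these prompts with continuous batching internally.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
params = SamplingParams(temperature=0.8, max_tokens=64)

prompts = [
    "Explain continuous batching in one sentence.",
    "What does p50 latency measure?",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```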
Fine-Tuning
- Hugging Face PEFT (a minimal LoRA sketch follows this list)
- Meta Llama Recipes Fine-Tuning
- Fine-Tuning Data Sets
- Efficient Fine-Tuning with LoRA
- Weights & Biases Training and Fine-tuning Large Language Models
- End to end fine-tuning with torchtune
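Tying together the PEFT and LoRA resources above, the following sketch attaches LoRA adapters to a Llama 3 model. The rank, alpha, and target-module choices are illustrative defaults, not recommendations drawn from these resources, and the checkpoint is the gated model ID assumed earlier.

```python
# A minimal LoRA setup with Hugging Face PEFT, assuming `transformers` and
# `peft` are installed and you have access to the gated Llama 3 checkpoint.
# This only attaches the adapters; training still needs your usual loop or
# a Trainer with a fine-tuning dataset.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    r=8,                                  # adapter rank (illustrative)
    lora_alpha=16,                        # scaling for the adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices are trainable, LoRA fine-tuning needs far less memory than full-parameter updates, which is why it features so prominently in the resources listed above.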
Code Llama
- Fine-Tuning Improves the Performance of Meta’s Code Llama on SQL Code Generation
- Beating GPT-4 on HumanEval with a Fine-Tuned CodeLlama-34B
- Introducing Code Llama, a state-of-the-art large language model for coding
Others
Note: Some of these resources refer to earlier versions of Meta Llama. However, the concepts and ideas described are still relevant to the most recent version.