Learn more about Llama 3 and how to get started by checking out our
Getting to know Llama notebook that you can find in our
llama-recipes GitHub repo. Here you will find a guided tour of Llama 3, including a comparison to Llama 2, descriptions of the different Llama 3 models, how and where to access them, Generative AI and chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), fine-tuning, and more. All of this is implemented with starter code that you can take and adapt for use in your own Meta Llama 3 projects.
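As one example of what the notebook walks through, the RAG pattern comes down to retrieving relevant context and placing it in the prompt before generation. The sketch below illustrates that idea only; the documents, word-overlap scoring, and helper names are hypothetical stand-ins, not code from the notebook, and a real system would use a vector store and embedding model for retrieval.

```python
# Minimal RAG sketch: naive retrieval + prompt assembly for a Llama 3 chat model.
# The documents, scoring heuristic, and names here are illustrative only.

docs = [
    "Llama 3 comes in 8B and 70B parameter variants.",
    "RAG augments a prompt with retrieved context before generation.",
    "Fine-tuning adapts a pretrained model to a specific task or domain.",
]

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for real vector search)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

query = "What is RAG?"
context = "\n".join(retrieve(query, docs))

# Chat-style messages; these would be passed to a Llama 3 Instruct model for the answer.
messages = [
    {"role": "system", "content": f"Answer using only this context:\n{context}"},
    {"role": "user", "content": query},
]
print(messages)
```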
To learn more about our Llama 3 models, check out
our announcement blog where you can find details about how the models work, data on performance and benchmarks, information about trust and safety, and various other resources to get you started.
Get the model source from our
Llama 3 GitHub repo, where you can learn how the models work, along with a minimalist example of how to load Llama 3 models and run inference. Here, you will also find steps to download and set up the models, and examples for running the text completion and chat models.
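The repo's own example scripts use the reference implementation; as one common alternative path, here is a minimal sketch of loading a Llama 3 Instruct checkpoint and running a chat completion with Hugging Face Transformers, assuming `transformers` and `torch` are installed and you have accepted the license for the meta-llama/Meta-Llama-3-8B-Instruct checkpoint on the Hugging Face Hub.

```python
# Sketch: load Llama 3 8B Instruct and run one chat completion via Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a context window is in one sentence."},
]

# apply_chat_template renders the Llama 3 special-token prompt format for us.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop on either end-of-sequence or the end-of-turn token.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(input_ids, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```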
Dive deeper and learn more about the model in the
model card, which goes over the model architecture, intended use, hardware and software requirements, training data, results, and licenses.
Check out our new
Meta AI, built with Llama 3 technology, which is now one of the world’s leading AI assistants that can boost your intelligence and lighten your load, helping you learn, get things done, create content, and connect to make the most out of every moment.
Meta AI is available on Facebook, Instagram, WhatsApp, Messenger, and the web.
To learn more about the latest updates and releases of Llama models, check out
our website, where you can find details about the latest models, along with resources on how they work and how to use them in your own applications.
Check out our
Getting Started guide that provides information and resources to help you set up Llama, including how to access the models, prompt formats, hosting options, and how-to and integration guides to help you get started with your projects.
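For instance, the guide documents the Llama 3 Instruct prompt format, which is built from special tokens marking the start of text, each message header, and the end of each turn. Below is a small sketch of assembling a single-turn prompt by hand; in practice a chat template or the reference tokenizer builds this string for you, so treat the helper name and wording here as illustrative.

```python
# Sketch: hand-assembling the Llama 3 Instruct prompt format described in the guide.
# Normally a chat template or the reference tokenizer produces this string.

def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a concise assistant.", "What is Llama 3?"))
```

The prompt ends with the assistant header so that generation continues as the assistant's reply, terminated by the model emitting `<|eot_id|>`.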
Dive deeper into prompt engineering, learning best practices for prompting Meta Llama models and interacting with Meta Llama Chat, Code Llama, and Llama Guard models in our short course on
Prompt Engineering with Llama 2 on DeepLearning.ai, recently updated to showcase both Llama 2 and Llama 3 models.
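One basic prompt-engineering technique covered in material like this is few-shot prompting: supplying worked examples as prior chat turns so the model imitates the pattern. The sketch below is illustrative only; the sentiment-labeling task and example reviews are hypothetical.

```python
# Sketch: few-shot prompting expressed as prior chat turns.
# The task and examples are hypothetical placeholders.
messages = [
    {"role": "system", "content": "Classify the sentiment of each review as positive or negative."},
    {"role": "user", "content": "Review: The battery lasts all day."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: The screen cracked within a week."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Review: Setup was quick and painless."},
]
# `messages` would be passed to a Llama chat model (e.g. via apply_chat_template as above).
print(messages)
```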
Check out our
Community Stories that go over interesting use cases of Llama models in fields such as Business, Healthcare, Gaming, Pharmaceuticals, and more!
Learn more about the Llama ecosystem, building product experiences with Llama, and examples that showcase how industry pioneers have adopted Llama to build and grow innovative products for users across their platforms at
Connect 2023.
Also check out our
Responsible Use Guide that provides developers with recommended best practices and considerations for safely building products powered by LLMs.