Llama 2: open source, free for research and commercial use
Available as part of the Llama 2 release
With each model download you'll receive:
- Model code
- Model weights
- README (user guide)
- Responsible Use Guide
- License
- Acceptable use policy
- Model card
Technical specifications
Llama 2 was pretrained on publicly available online data sources.
The fine-tuned model, Llama Chat, leverages publicly available instruction datasets and over 1 million human annotations.
Inside the model
Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. Llama Chat models have additionally been trained on over 1 million new human annotations.
Benchmarks
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations.
Safety and helpfulness
Reinforcement learning from human feedback
Llama Chat uses reinforcement learning from human feedback to ensure safety and helpfulness.
Training Llama Chat: Llama 2 is pretrained using publicly available online data. An initial version of Llama Chat is then created through the use of supervised fine-tuning. Next, Llama Chat is iteratively refined using Reinforcement Learning from Human Feedback (RLHF), which includes rejection sampling and proximal policy optimization (PPO).
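The rejection-sampling step described above can be illustrated with a toy sketch. Note the reward function here is a deliberately simple stand-in, not Meta's actual reward model, which is trained on human preference data:

```python
def reward(response: str) -> float:
    # Stand-in for a learned reward model. In real RLHF this model is
    # trained on human preference pairs; here we just favor longer
    # answers that decline politely, to make the example runnable.
    score = float(len(response.split()))
    if "sorry" in response.lower():
        score += 5.0
    return score

def rejection_sample(candidates: list[str]) -> str:
    # Rejection sampling: generate several candidate responses from the
    # current policy, score each with the reward model, and keep the
    # highest-scoring one as a fine-tuning target.
    return max(candidates, key=reward)

candidates = [
    "No.",
    "Sorry, I can't help with that, but here is a safer alternative.",
    "Sure thing.",
]
best = rejection_sample(candidates)
```

In the full pipeline this selection step alternates with PPO updates, so the policy gradually shifts toward responses the reward model scores as safe and helpful.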
Getting started guide
Unlock the full potential of Llama 2 with our developer documentation. The Getting started guide provides instructions and resources to start building with Llama 2.
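As a small taste of building with the chat models, the sketch below constructs a single-turn prompt in the Llama 2 chat template (system message in `<<SYS>>` tags inside an `[INST]` block, as documented in the Llama repository); the helper name and example strings are ours:

```python
def build_llama2_prompt(system: str, user: str) -> str:
    # Llama 2 Chat expects the system message wrapped in <<SYS>> tags,
    # with the user turn following inside the same [INST] block.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "What is the capital of France?",
)
```

The model's reply is then generated as a continuation after the closing `[/INST]`; multi-turn conversations repeat the `[INST] ... [/INST]` structure for each user turn.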
The license
Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancement.
Responsible use
Like all LLMs, Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the Responsible Use Guide. More details can be found in the Research paper and Model card.
Download the model
Get Llama 2 now: complete the download form via the link below.
By submitting the form, you agree to Meta's privacy policy.
Partnerships
Our global partners and supporters
We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do.
Statement of support for Meta’s open approach to today’s AI
“We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.”
Deutsche Telekom
Liquid Intelligent Technologies Nigeria
Fynd
CETYS
UCU
MinoHealth AI Labs
Africa Digital Rights Hub
United Arab Emirates
Center for Futures Studies, University of Dubai
Andreessen Horowitz
Andreessen Horowitz
Palo Alto Networks
Université Paris Dauphine PSL-CNRS, fellow PRAIRIE
IIF / SADAF CONICET
Indian Institute of Technology Bombay
Indian Institute of Science, Bangalore
Andreessen Horowitz
Betaworks
Defined.AI
NVIDIA
Andreessen Horowitz
Surge AI
Hugging Face
SV Angel
SV Angel
Vault Hill
University of California – Berkeley
Factly
Hong Kong Information Technology Federation
Joseph Fourier University
MIT
Guzo Technologies
University of California – Berkeley
Mexican Artificial Intelligence Society
Y Combinator
MIT
Accenture
DoorDash
Zoom
IEEE
Lux Capital
IBM watsonx Platform Engineering and Chair, IBM Open Innovation Community, IBM
Greylock
Andreessen Horowitz
Dropbox
DataOcean AI
UNAM (National University of Mexico)
University of Kinshasa
Kempner Institute, Harvard University
Spark Capital and Former Head of Product, OpenAI
P@sha (Pakistan IT Industry Association)
Internet Society – Tanzania
Georgia Institute of Technology
KNUST
Laurendeau & Associates | CEO at Arifpay
Slow Ventures
DataOcean AI
Intel
AI Innovation, AI Singapore
Y Combinator
Shopify
Planning and Edge Solutions Businesses, Qualcomm Technologies, Inc.
University of São Paulo Law School; Lawgorithm Institute; Legal Grounds Institute; Brazilian Senate AI Experts Committee
University of Pretoria
Replit
University of Kinshasa
Andreessen Horowitz
AMD
Jamii Forums
Databricks
EssilorLuxottica
Iliad
NITDA IT Hub University of Lagos
Tech Hive Advisory
Zoom
Universidad de Montevideo
PRAIRIE and Professor, École normale supérieure – PSL
IIJ-UNAM (Legal Institute, National University of Mexico)
Andreessen Horowitz
Coatue Ventures
Inria, Fellow PRAIRIE
MBR School of Government
Zoom
Former Facebook VP of Comms and Public Policy, Open DP Project
Gupshup
Indian Institute of Technology Jodhpur
University of Texas at Austin
Y Combinator
Indian Institute of Technology Jodhpur
CIPESA
Scale AI
GitHub
Databricks
Lux Capital
Carnegie Mellon University; Mohamed bin Zayed University of AI
DoorDash
AI Institute of Seoul National University (AIIS)
Universidad del Externado, Mokzy
Clarifai
Responsibility
We’re committed to building responsibly
To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size.
Join us on our AI journey
If you’d like to advance AI with us, visit our Careers page to discover more about AI at Meta.
Llama 2
Frequently asked questions
Get answers to Llama 2 questions on our comprehensive FAQ page, covering how it works, how to use it, integrations, and more.
Does Llama 2 support other languages outside of English?
The model was primarily trained on English, with a small amount of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You'll find the full list of languages referenced in the research paper. You can also look at community-led projects that fine-tune Llama 2 models to support other languages. (e.g., link)
Can you run the Llama-7B model on Windows and/or macOS?
The vanilla model shipped in the repository does not run on Windows or macOS out of the box. There are community-led projects that support running Llama on Mac, Windows, iOS, Android, or anywhere (e.g., llama.cpp, MLC LLM, and Llama 2 Everywhere). You can also find a workaround in this issue based on Llama 2 fine-tuning.
How is the architecture of the v2 different from the one of the v1 model?
Some differences between the two models include:
- Llama 1 was released in 7, 13, 33, and 65 billion parameter sizes, while Llama 2 comes in 7, 13, and 70 billion parameter sizes
- Llama 2 was trained on 40% more data
- Llama 2 has double the context length
- Llama 2 was fine-tuned for helpfulness and safety
- Please review the research paper and model cards (Llama 2 model card, Llama 1 model card) for more differences.
Resources
Explore more on Llama 2
Discover more about Llama 2 here: visit our resources, ranging from our research paper to access instructions and more.