Llama

Troubleshooting & FAQ

Get answers to Llama's most frequently asked questions

General


What if I want to access Llama models but I’m not sure if my use is permitted under the Llama Community License or Acceptable Use Policy?

If you are a researcher, academic institution, government agency, government partner, or other entity with a Llama use case that is currently prohibited by the Llama Community License or Acceptable Use Policy, or requires additional clarification, please contact llamamodels@meta.com with a detailed request. We will consider requests on a case-by-case basis.

Can I use the output of the Llama models to improve other LLMs?

For Llama 2 and Llama 3, the license restricts using any part of the Llama models, including the response outputs, to train another AI model (LLM or otherwise). Starting with Llama 3.1, this is allowed, provided you include the correct attribution to Llama. See the license for more information.


Do Llama models support other languages outside of English?

Llama 2 and Llama 3 were primarily trained on English language data with some additional data from other languages. We do not expect the same level of performance in these other languages as in English. Llama 3.1, Llama 3.2, Llama 3.3, and Llama 4 support additional languages and are considered multilingual. See the Llama 3.1, Llama 3.2, Llama 3.3, and Llama 4 model cards for more information.


If I’m a developer/business, how can I access the Llama models?

Models are available through multiple sources, but the best place to start is https://www.llama.com.

You can also access the models through Llama API.

Details on how to access the models are available on the Llama website. Please note that the models are made available subject to the applicable Community License Agreement and Acceptable Use Policy. You should also take advantage of the best practices and considerations set forth in the applicable Developer Use Guide.

Models are also hosted and distributed by partners such as Amazon Web Services, Microsoft Azure, Google Cloud, IBM Watsonx, Oracle Cloud, Snowflake, Databricks, Dell, Hugging Face, Groq, Cerebras, SambaNova, and many others. See the Llama website for more information.

How do I download Llama 3.1 70B? It appears to no longer be available through the download flow.

Llama 3.3 70B is a high-performance replacement for Llama 3.1 70B.


Can anyone access Llama models? What are the terms?

Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website. The Llama models are licensed under the applicable Llama Community License Agreement and accompanying Acceptable Use Policy, which provides a permissive license to the models along with certain restrictions to help ensure that the models are used responsibly. Hosting providers may have additional terms applicable to their services.


What are the hardware requirements for deploying these models?

Hardware requirements vary based on the specific Llama model being used and on latency, throughput, and cost constraints. For the larger Llama models to achieve low latency, the model is typically split across multiple inference chips (usually GPUs) with tensor parallelism. Llama models run performantly on a wide variety of hardware, including GPUs, CPUs (both x86 and ARM based), TPUs, NPUs, and AI accelerators. The smaller Llama models typically run on system-on-chip (SoC) platforms found in PCs, mobile phones, and other edge devices.
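
As an illustration only (not an official deployment recipe), the sketch below shows one common way to shard a larger Llama model across several GPUs with tensor parallelism using the open-source vLLM engine; the model ID and GPU count are placeholders for your own setup.

```python
# Illustrative sketch only: serving a larger Llama model with tensor parallelism
# via the open-source vLLM engine. Assumes vLLM is installed, the weights are
# accessible, and 4 GPUs are available -- adjust the placeholders to your setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model ID
    tensor_parallel_size=4,                     # shard the weights across 4 GPUs
)

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)
outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

Smaller Llama models usually fit on a single device and do not need tensor parallelism.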


Do Llama models provide traditional autoregressive text completion?

Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text.
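
To make the loop concrete, here is a minimal sketch of autoregressive decoding using the Hugging Face transformers library; the model ID is a placeholder and greedy (argmax) decoding is used for brevity.

```python
# Minimal sketch of autoregressive decoding with the Hugging Face transformers
# library: predict the most likely next token, append it, and repeat.
# The model ID is a placeholder; real deployments add sampling, KV caching, etc.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                                       # generate 20 tokens
        logits = model(input_ids).logits                      # [batch, seq_len, vocab]
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # next-token prediction
        input_ids = torch.cat([input_ids, next_id], dim=-1)      # feed it back in
print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```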


Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?

The model itself supports these parameters, but whether they are exposed depends on the implementation serving the model.
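
For example, when running a model locally with the Hugging Face transformers library, the standard generate() call exposes these sampling parameters; hosted providers typically accept equivalent fields in their request bodies. The model ID below is a placeholder.

```python
# Sketch: setting sampling temperature and top-p with transformers' generate().
# Whether a hosted endpoint exposes the same knobs depends on that provider's API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Write a haiku about the ocean.", return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,      # enable sampling instead of greedy decoding
    temperature=0.6,     # lower values make outputs more deterministic
    top_p=0.9,           # nucleus sampling threshold
    max_new_tokens=64,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```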


What is the most effective RAG method paired with Llama models?

There are many ways to use RAG with Llama. See the developer documentation page for reference implementations.
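
As one illustrative pattern rather than a recommended stack, the sketch below embeds a small document set with the sentence-transformers library, retrieves the passages most similar to the question, and prepends them to the prompt sent to a Llama model. The embedding model, the documents, and the generate_with_llama() helper are placeholders.

```python
# Illustrative retrieve-then-generate (RAG) sketch. The embedding model, the
# documents, and generate_with_llama() are placeholders for your own stack.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Llama 3.3 70B is a high-performance replacement for Llama 3.1 70B.",
    "Llama models run on GPUs, CPUs, TPUs, NPUs and other AI accelerators.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "Which model replaces Llama 3.1 70B?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = generate_with_llama(prompt)  # placeholder: call your Llama inference here
```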

Should we start training with the base or instruct/chat model?

This depends on your application. The Llama pre-trained (base) models were trained for general language tasks, whereas the Llama instruct or chat models were fine-tuned for dialogue-specific uses such as chatbots.


How can I fine-tune the Llama models?

You can find examples of how to fine-tune the Llama models in the Llama Cookbook repository. See also the developer documentation page for reference implementations.
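
As a rough illustration of one common approach from the broader ecosystem (parameter-efficient fine-tuning with LoRA via the Hugging Face peft and transformers libraries), the sketch below attaches LoRA adapters to a base model; the model ID and hyperparameters are placeholders, and the Cookbook recipes remain the reference.

```python
# Rough sketch: attaching LoRA adapters with the Hugging Face peft and
# transformers libraries. Model ID and LoRA hyperparameters are placeholders;
# see the Llama Cookbook for complete, tested fine-tuning recipes.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

# The adapted model can then be trained on your tokenized dataset with
# transformers' Trainer (or the Cookbook recipes) and the adapters saved or merged.
```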

Am I allowed to develop derivative models through fine-tuning based on Llama models for languages other than those officially supported? Is this a violation of the Acceptable Use Policy?

Developers may fine-tune Llama models for languages beyond English or other officially supported languages (e.g., Llama 3.1, Llama 3.2 (for text-to-text applications), and Llama 3.3 support 8 languages; Llama 4 (for text-to-text applications) supports 12 languages), provided that such use complies with the applicable Llama Community License Agreement and Acceptable Use Policy.

Restriction on Llama Multimodal Models in the EU


Is an employee that lives in the EU but is directly employed by a non-EU based company restricted from using the Llama multimodal models?

The employee may use the Llama multimodal models within the scope of the employee’s work for the non-EU based company, since the license extends to the non-EU based company. However, the employee may not use the Llama multimodal model for their own individual purposes if they are domiciled in the EU, although they may use a product or service that incorporates Llama multimodal models as an end user.


Can a non-EU based company develop a product or service using the Llama multimodal models and distribute such product or service within the EU?

Yes, if you are a company with a principal place of business outside of the EU, you may distribute products or services that contain the Llama multimodal models in accordance with your standard global distribution business practices (for additional clarification, please contact llamamodels@meta.com). The EU restriction for Llama multimodal models contained in the Llama 3.2, Llama 3.3, and Llama 4 AUPs only restricts the rights granted to EU based companies and individuals. The company’s customers in the EU would be able to use the product or service that incorporates the Llama multimodal models.

Is a non-EU based company that is affiliated with an EU based company permitted to use the Llama multimodal models?

Yes. The non-EU based company is permitted to use the Llama multimodal models, even if it is a subsidiary of, or affiliated with, an EU based company. However, it is only the non-EU based company that is permitted to use the Llama multimodal models for any development or distribution. Any EU based affiliates may not be involved in such development or distribution (but, see the FAQ above).


Can a company that has its principal place of business outside the EU use developers in its EU office to perform work using the Llama multimodal models?

Yes. The company’s principal place of business is outside the EU, and the company is therefore granted the license in the applicable Llama license. It may use its EU-based developers to perform work using the Llama multimodal models.

Legal


Are Llama models open source? What is the exact license these models are published under?

Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. We want to enable more innovation in both research and commercial use cases, but believe in taking a responsible approach to releasing AI technologies. For more details, our licenses can be found here.

Are there examples that help licensees better understand how “MAU” is defined?

“MAU” means “monthly active users” who access or use your (and your affiliates’) products and services. Examples include users accessing an internet-based service and monthly users or customers of the licensee’s software applications or hardware devices.


What are Llama’s attribution requirements?

See Section 1(b) of the applicable Llama License which sets forth the relevant attribution requirements. The following attribution requirements apply to the Llama materials licensed under the Llama 4 Community License Agreement:

If a developer is distributing any of the Llama models or related materials or any product or service that is built on or incorporates the Llama models (including another AI model), that developer must also provide a copy of the applicable Llama Community License Agreement. With respect to any product or service that is built on or incorporates the Llama models, the developer must also prominently display “Built with Llama” on a website, user interface, blogpost, about page, or product documentation that is related to that product or service. See the FAQ below for examples of what “prominently display” means.


If a developer uses a Llama 3.1, Llama 3.2, Llama 3.3, or Llama 4 model, such as Llama 3.1-405B, to create or train another AI model, for example by generating a synthetic dataset that is then used to train another AI model, then that developer must include “Llama” at the beginning of such AI model’s name if it is distributed. This case is different from the case above, where Llama models or materials are being distributed or a product or service is built on or incorporates Llama (a derivative work), since the AI model in this case may not be based on or incorporate Llama. Consequently, a statement that the AI model is “Built with Llama” is not required; however, “Llama” should be included at the start of the AI model’s name since a Llama model was used to create, train, fine tune, or otherwise improve the AI model.


If a developer distributes any copies of the Llama Materials (Llama models and documentation), the developer must also retain in those copies an attribution notice within a “Notice” text file that is distributed as part of such copies. The required notice can be found in Section 1(b) of the applicable Llama License. The notice for Llama materials licensed under the Llama 4 Community License Agreement is: “Llama 4 is licensed under the Llama 4 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”

What does Meta mean by “prominently display,” could you provide an example?

Example 1: Developer builds a GenAI tool using Llama 3.1-405B and releases the tool on its platform. Developer’s platform includes an “about page” within the application or website that announces the creation of the new GenAI tool and explains its uses. In the first few sentences about the new GenAI tool, the “about page” states that the tool was "Built with Llama." This is an example of prominently displaying the required attribution. However, if Developer were to release a four-page article about its new GenAI tool and only mention that it was "Built with Llama" somewhere on the last page or in a footnote, we would not consider it to be a prominent display of the required attribution. Developer also provides the Llama 3.1 Community License Agreement or a link to it with the materials related to the new GenAI tool, since it is considered a derivative work of Llama 3.1.


Example 2: Developer improves its existing service with new AI features built using Llama 3.2. Developer releases a blog post announcing its AI feature, and states that it was "Built with Llama" in the title, subtitle or first few sentences of the blog post. This is an example of prominently displaying the required attribution. Developer also provides the Llama 3.2 Community License Agreement or a link to it with the materials for its service, related to the new AI feature, since it is considered a derivative work of Llama 3.2.


Example 3: Developer builds an online library where AI models are available for download. It includes some Llama 3.2 models in its library. Along with the copy of Llama 3.2, it includes a “Notice” text file stating that “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”


What does it mean to include “Llama” at the beginning of an AI model name, could you provide an example?

Example: Developer is training its own AI model. It needs additional data for pre-training its AI model, so it uses Llama 3.1-405B to create a synthetic dataset which it then uses to train its AI model. Developer later releases its AI model. Developer names its AI model "Llama Xxxx." This complies with the requirement that anyone that uses outputs or results from a Llama model to create, train, fine tune, or improve another AI model must include "Llama" at the beginning of such AI model name.


Does Meta have access to (a) any inputs or outputs from Llama models once the models are downloaded for use by a developer or licensee, or (b) third party tools or services built using Llama, other than Meta’s own products and services like Meta AI?

No. You, or the downstream developer providing you with services, decide where you run the model and who has access to the inputs and outputs. Unless you provide inputs or outputs directly to Meta, or publish them, Meta cannot access, view, or use your inputs or outputs. If you are accessing the models via Llama API please refer to this FAQ.

Does Llama ensure data privacy for sensitive / proprietary licensee inputs?

Llama is a foundational large language model. You should consult with the downstream developer that builds a product, tool, or service using Llama to understand how they ensure data privacy for sensitive or proprietary inputs. If you are accessing the models via Llama API, please refer to this FAQ.

What does ‘disinformation’ mean in the acceptable use policy?

“Disinformation” refers to information that is false or inaccurate, or is presented in such a way as to draw a false or inaccurate conclusion, and is intended to (or does) mislead or deceive.


What does Meta mean by “Critical Infrastructure” in the acceptable use policy?

Critical infrastructure is infrastructure that is considered essential by governments for the functioning of society and the economy. It includes, but is not limited to, highways, bridges, tunnels, utilities and related facilities and services, etc.


What does “operation of critical infrastructure mean,” can you provide an example?

Example: Developer is an electric company which manages the electric grid and electricity supply for the City of Burlington. To assist in the management of the Burlington electric grid, which Developer and the City consider critical infrastructure, Developer is considering the use of Llama. The use of Llama to manage the electric grid is prohibited under the terms of the AUP.


Does the Critical Infrastructure restriction in the Acceptable Use Policy prevent companies who have special critical infrastructure certification (e.g., a registered operator of “critical infrastructure” under the German BSI Act) from using Llama?

No, such companies are not prohibited when their usage of Llama is not related to the operation of critical infrastructure. Llama, however, may not be used in the operation of critical infrastructure by any company, regardless of government certifications.
