Llama Everywhere
Although Meta Llama models are often hosted by Cloud Service Providers (CSPs), Meta Llama can be used in other contexts as well, such as Linux, the Windows Subsystem for Linux (WSL), macOS, Jupyter notebooks, and even mobile devices. If you are interested in exploring these scenarios, we suggest that you check out the following resources:
- Llama 3 on Your Local Computer, with Resources for Other Options - How to run Llama on your desktop using Windows, macOS, or Linux, with pointers to other ways to run Llama, either on-premises or in the cloud.
- Llama Recipes QuickStart - Provides an introduction to Meta Llama using Jupyter notebooks and also demonstrates running Llama locally on macOS.
- Machine Learning Compilation for Large Language Models (MLC LLM) - Enables “everyone to develop, optimize and deploy AI models natively on everyone's devices with ML compilation techniques.”
- Llama.cpp - Uses the portability of C++ to enable inference with Llama models on a wide variety of hardware.
- ExecuTorch - Provides a runtime environment for Llama 3.2 lightweight and quantized models to run on mobile and edge devices such as phones, laptops, and smart glasses.
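As a concrete starting point for local inference, the sketch below builds Llama.cpp from source and runs a prompt against a quantized GGUF checkpoint. The model filename is an assumption, not something shipped with the repository; substitute any Llama GGUF file you have downloaded (for example, from Hugging Face).

```shell
# Build llama.cpp from source (requires git, cmake, and a C++ compiler)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run chat inference with a quantized GGUF model.
# NOTE: the model path below is a placeholder -- point it at a Llama
# GGUF checkpoint you have downloaded yourself.
./build/bin/llama-cli \
  -m ./models/llama-3.2-1b-instruct-q4_k_m.gguf \
  -p "Explain what a GGUF file is in one sentence." \
  -n 128
```

Quantized models (such as the `q4_k_m` variant assumed above) trade a small amount of accuracy for a much smaller memory footprint, which is what makes running on laptops and edge devices practical.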