Meta
Facebook | X | YouTube | LinkedIn

Documentation
Overview | Models | Getting the Models | Running Llama | How-To Guides | Integration Guides | Community Support

Community
Community Stories | Open Innovation AI Research Community | Llama Impact Grants

Resources
Cookbook | Case studies | Videos | AI at Meta Blog | Meta Newsroom | FAQ | Privacy Policy | Terms | Cookies

Llama Protections
Overview | Llama Defenders Program | Developer Use Guide

Llama banner image

Enabling AI Defenders

Enabling developers and critical organizations to better defend key systems, services, and infrastructure in the age of AI.

Learn more
Our approach | Llama Defenders Program | Developer use cases | Resources

Our approach

We believe in cross-industry collaboration among organizations that play a critical role in defending the systems, services, and infrastructure that society relies on every day. We’re excited to introduce and expand the Llama Defenders Program, which supports select partners, while also broadly enabling the developer community to better defend their organizations in the age of AI.

Llama Defenders Program

We are partnering with key organizations to provide new tools that defend against AI-enabled dual-use risks: the Llama Generated Audio Detector and a new audio watermark detector.
Learn more

Llama Generated Audio Detector

A new model designed to classify whether a given audio file has been generated by AI.
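To make the classification workflow concrete, here is a minimal, hypothetical sketch of wrapping such a detector behind a score threshold. Meta has not published this API; the `mock_model_score` function and the 0.5 threshold are illustrative assumptions standing in for the real model call.

```python
# Illustrative sketch only: the Llama Generated Audio Detector API is not
# published here, so the model call is mocked. A real integration would
# load the released detector weights instead.
from dataclasses import dataclass


@dataclass
class DetectionResult:
    score: float            # probability that the audio is AI-generated
    is_ai_generated: bool   # score thresholded into a binary label


def mock_model_score(audio_bytes: bytes) -> float:
    """Stand-in for the real detector; returns a fixed, fake score."""
    return 0.92 if audio_bytes.startswith(b"SYNTH") else 0.08


def classify_audio(audio_bytes: bytes, threshold: float = 0.5) -> DetectionResult:
    """Classify an audio clip as AI-generated or not via a score threshold."""
    score = mock_model_score(audio_bytes)
    return DetectionResult(score=score, is_ai_generated=score >= threshold)
```

In a production deployment (such as the Zendesk case study below), the threshold would be tuned against an acceptable false-positive rate rather than fixed at 0.5.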

Audio watermark detector

New audio watermarking and detection technology that delivers industry-leading detection performance in accuracy, imperceptibility, and speed.
Case study

Zendesk

Zendesk is using the Llama Generated Audio Detector to help detect whether a voice is AI-generated and might be impersonating a customer or executive.

Automatic sensitive document classification

As part of our efforts to support the defender community more broadly, we are also sharing Automatic Sensitive Document Classification, a new security tool that automatically applies security classification labels to your organization’s internal documents to help prevent unauthorized access and distribution.

Developers can access the tool on GitHub and can configure customized security protections triggered by label application, for example disabling copies, moves, or external shares for files with highly sensitive labels. These labels can also be used when setting up company-wide RAG implementations.
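To illustrate the RAG integration idea, here is a minimal, hypothetical sketch of filtering a document store by security label before retrieval. The label names, their ordering, and the `Document` structure are assumptions for illustration, not part of the released tool.

```python
# Hypothetical sketch: label names and data structures are illustrative,
# not part of Automatic Sensitive Document Classification itself.
from dataclasses import dataclass

# Labels ordered from least to most restrictive.
LABEL_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}


@dataclass
class Document:
    doc_id: str
    label: str   # security classification label applied by the tool
    text: str


def retrievable(docs: list[Document], max_label: str) -> list[Document]:
    """Keep only documents at or below the caller's clearance label,
    so a RAG pipeline never retrieves over-classified content."""
    ceiling = LABEL_RANK[max_label]
    return [d for d in docs if LABEL_RANK[d.label] <= ceiling]


docs = [
    Document("d1", "public", "Product FAQ"),
    Document("d2", "restricted", "M&A planning notes"),
]
```

Filtering before retrieval (rather than after generation) keeps sensitive text out of the model context entirely, which is the safer default for company-wide deployments.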

Defensive capability benchmarks in CyberSecEval 4

Two new categories of defensive-capability evaluations are being added to CyberSecEval 4.

CyberSOC Eval - Coming Soon

In partnership with CrowdStrike, we’re releasing a set of new benchmarks that provide the first framework for measuring the efficacy of AI systems on representative security operations center (SOC) tasks, including Applied Security Reasoning, Malware Analysis, and Threat Intelligence Reasoning.

AutoPatchBench

A new benchmark that measures the ability of an AI system to automatically patch security vulnerabilities in native code. It provides a standardized way to measure the performance of automated patching agents, and enables code owners to integrate automated evaluation into development cycles.

A basic patch-generator reference implementation, designed to address simple crashes, is available for the open-source defender community to use.
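A patch-evaluation harness along these lines could look like the following minimal sketch. AutoPatchBench's actual interfaces are not assumed here: the crash reproducer and regression suite are mocked, and the pass criterion (crash resolved and no regressions) is the general idea rather than the benchmark's exact scoring rule.

```python
# Illustrative harness only: the real AutoPatchBench interfaces differ.
# A candidate patch "passes" if the crashing input no longer crashes
# and the existing regression checks still succeed.

def reproduces_crash(patched_code: str, crash_input: str) -> bool:
    """Mocked crash reproducer; a real harness would rebuild the native
    code with the patch applied and rerun the crashing input."""
    return "fix" not in patched_code


def regressions_pass(patched_code: str) -> bool:
    """Mocked regression suite standing in for the project's tests."""
    return "break" not in patched_code


def evaluate_patches(patches: list[str], crash_input: str) -> float:
    """Return the fraction of candidate patches that resolve the crash
    without introducing regressions."""
    ok = sum(
        1 for p in patches
        if not reproduces_crash(p, crash_input) and regressions_pass(p)
    )
    return ok / len(patches) if patches else 0.0
```

Wiring a loop like this into CI is what lets code owners run automated patching agents against each new crash report as part of the development cycle.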

Read the blog
Horizon banner image

To express interest in participating in the Llama Defenders Program, please email: llamadefendersprogram-partnerinquiries@meta.com

Resources

Continue exploring the Llama ecosystem.

Learn more about Llama Protections

Learn more

Get started with Llama Protections

Learn more

Developer Use Guide: AI Protections

Read the guide

Download the models

Download now