Llama 3.1 405B Instruct
*All results are self-reported and not independently verified. Individual results will vary.
Databricks helps global enterprises take control of their data and put it to work with AI. Used by more than 10,000 organizations worldwide and over 60% of the Fortune 500, the Databricks Data Intelligence Platform provides a unified, open analytics platform for building, deploying, sharing and maintaining enterprise-grade data, analytics and AI solutions at scale.
The Databricks Assistant Autocomplete tool produces personalized AI-generated code suggestions in real time. To improve the proficiency of Assistant Autocomplete in Spark SQL — a critical use case for Databricks — the Databricks Applied AI team needed a scalable, comprehensive way to test it. They sought an LLM that excelled at creating accurate synthetic data, demonstrated strong code comprehension and fit their price point.
The Applied AI team leveraged Llama 3.1 405B Instruct to create substantial synthetic training and evaluation datasets. The team was impressed with Llama’s ease of use and integration, open-source licensing and robust code comprehension.
With access to Llama through Foundation Model APIs, Databricks easily deployed the model through Databricks Model Serving and managed outputs with Databricks notebooks. This approach significantly simplified integration and accelerated time to results.
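As a rough illustration of this workflow, the sketch below builds a chat-style request payload of the kind a Databricks Model Serving endpoint accepts, asking a served Llama model to generate synthetic Spark SQL rows for a given table. The prompt wording, parameter values and the idea of requesting INSERT statements are illustrative assumptions, not details confirmed by this case study.

```python
import json

def build_synthetic_data_payload(table_ddl, n_rows=5, temperature=0.7):
    """Build a JSON chat payload asking an LLM for synthetic rows
    matching a table definition. Prompt text is an assumption for
    illustration, not the Applied AI team's actual prompt."""
    messages = [
        {"role": "system",
         "content": "You generate realistic synthetic Spark SQL data."},
        {"role": "user",
         "content": (f"Given this table definition:\n{table_ddl}\n"
                     f"Write {n_rows} INSERT statements with plausible values.")},
    ]
    return json.dumps({
        "messages": messages,
        "temperature": temperature,
        "max_tokens": 1024,
    })

# Example: payload for a hypothetical sales table. In practice this body
# would be POSTed to the serving endpoint's invocations URL.
payload = build_synthetic_data_payload(
    "CREATE TABLE sales (id INT, amount DOUBLE, region STRING)"
)
```

Because the payload is plain JSON, the same builder works whether the endpoint is called from a notebook, a job or an external client.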
When Databricks developers compared the Llama model's outputs to those of other state-of-the-art (SOTA) models, they found that Llama's outputs matched or exceeded those of competing models at a much lower price point.
Retraining the Assistant Autocomplete model with Llama-generated synthetic data resulted in an 8% improvement in the performance and quality of the model’s outputs.
With better-performing outputs, Databricks Assistant Autocomplete now provides more accurate and relevant code suggestions, delivering significant productivity improvements for Databricks' customers.
• 8% improvement in performance and quality of the fine-tuned model's outputs compared to the non-fine-tuned model