When you enroll through our links, we may earn a small commission at no extra cost to you. This helps keep our platform free and allows us to keep adding value.

Databricks: Large Language Models: Application through Production
This course is aimed at developers, data scientists, and engineers looking to build LLM-centric applications with the latest and most popular frameworks. By the end of this course, you will have built an end-to-end LLM workflow that is ready for production!

This Course Includes
- Provider: edX
- Rating: 4.6 (45 reviews)
- Duration: 6 weeks, 4-10 hours per week
- Language: English
- Format: Online, self-paced
- Type: Course
- Offered by: Databricks
About Databricks: Large Language Models: Application through Production
This course is aimed at developers, data scientists, and engineers looking to build LLM-centric applications with the latest and most popular frameworks. You will use Hugging Face to solve natural language processing (NLP) problems, leverage LangChain to perform complex, multi-stage tasks, and take a deep dive into prompt engineering. You will use data embeddings and vector databases to augment LLM pipelines. Additionally, you will fine-tune LLMs with domain-specific data to improve performance and reduce cost, and identify the benefits and drawbacks of proprietary models. You will assess societal, safety, and ethical considerations of using LLMs. Finally, you will learn how to deploy your models at scale, leveraging LLMOps best practices.
By the end of this course, you will have built an end-to-end LLM workflow that is ready for production!
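
To give a flavor of the style of workflow the course builds on, here is a minimal Hugging Face sketch: loading a pre-trained pipeline and applying it to an NLP task. The specific task and model below are illustrative choices, not course materials.

```python
# Minimal sketch: a pre-trained Hugging Face pipeline applied to an NLP task.
# The task ("summarization") and model name are illustrative placeholders;
# sentiment analysis, translation, and text generation follow the same pattern.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Large language models are increasingly used to build applications "
    "such as chatbots, document search, and content summarization. "
    "Frameworks like Hugging Face and LangChain make it possible to go "
    "from a pre-trained model to a working prototype in a few lines of code."
)

# The pipeline returns a list of dicts with a "summary_text" field.
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```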
What You Will Learn
- How to apply generative AI (GenAI) / LLMs to real-world natural language processing (NLP) problems using popular libraries such as Hugging Face and LangChain.
- How to add domain knowledge and memory to LLM pipelines using embeddings and vector databases (a rough sketch of this idea follows the list).
- How pre-training, fine-tuning, and prompt engineering differ, and how to apply that knowledge to fine-tune a custom chat model.
- How to evaluate the efficacy and bias of LLMs using different methods.
- How to apply LLMOps and multi-step reasoning best practices to an LLM workflow.
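
As a rough illustration of the embeddings-and-retrieval idea mentioned above (not course code; the model name and documents are placeholder choices), a pipeline might embed a small document set and retrieve the closest match to a query before prompting the LLM. A real pipeline would use a vector database (for example FAISS or Chroma) instead of a plain NumPy search.

```python
# Sketch: embed documents, then retrieve the most similar one for a query.
# Model name and documents are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Databricks provides a unified analytics platform.",
    "Fine-tuning adapts a pre-trained model to domain-specific data.",
    "Vector databases store embeddings for fast similarity search.",
]
doc_embeddings = model.encode(documents, normalize_embeddings=True)

query = "How do I adapt an LLM to my own data?"
query_embedding = model.encode([query], normalize_embeddings=True)

# With normalized vectors, cosine similarity reduces to a dot product.
scores = doc_embeddings @ query_embedding.T
best = int(np.argmax(scores))
print(documents[best])  # the retrieved context to feed into the LLM prompt
```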