Advanced Builder with Local AI Models
Build and deploy privacy-first AI apps using local models. Master fine-tuning, RAG pipelines, and edge inference.
Course Overview
Take your AI development skills to the edge. This advanced program focuses on building powerful applications without relying on cloud-based APIs.
From fine-tuning small language models (SLMs) to building secure RAG systems for enterprise data, you will learn the full lifecycle of local AI engineering.
What You Will Learn
Model Hosting: Deploying LLMs locally with Ollama and vLLM.
Fine-Tuning: Parameter-efficient techniques (PEFT) and LoRA.
Vector Databases: Building private RAG with ChromaDB.
Agentic Workflows: Creating autonomous AI agents from scratch.
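As a taste of the model-hosting topic above, here is a minimal sketch of calling a locally running Ollama server through its HTTP API (Ollama listens on `localhost:11434` by default, and its `/api/generate` endpoint accepts a JSON payload with `model`, `prompt`, and `stream` fields). The model name `llama3.2` is just an example of a model you might have pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def query_local(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server with a model pulled, e.g. `ollama pull llama3.2`:
# print(query_local("llama3.2", "Explain quantization in one sentence."))
```

Because everything runs on your own machine, no prompt or response ever leaves it, which is the core of the privacy-first approach this course teaches.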
Curriculum
Module 1: Local AI Infrastructure
GPU architecture, quantization basics, and setting up the local development environment.
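To make the quantization basics concrete, the sketch below shows symmetric int8 quantization in pure Python: each weight is mapped to an integer in [-127, 127] using a single scale factor, trading a small amount of precision for a 4x memory saving versus float32. This is an illustrative toy, not a production quantizer.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: map floats to [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.31, 0.07]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# The round trip is lossy but close: per-weight error is bounded by scale / 2.
```

Real quantization schemes (per-channel scales, GPTQ, 4-bit formats) refine this same idea.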
Module 2: Fine-Tuning & Quantization
Preparing datasets, fine-tuning techniques for custom knowledge, and model compression.
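The core idea behind the LoRA technique covered here is that instead of updating a full weight matrix W, you train two small low-rank matrices B and A and apply W' = W + (alpha / r) * B @ A. A minimal pure-Python sketch with a rank-1 adapter on a 2x2 weight (toy numbers, purely for illustration):

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_update(W, B, A, alpha, r):
    """Apply a LoRA update: W' = W + (alpha / r) * B @ A."""
    delta = matmul(B, A)
    s = alpha / r
    return [[w + s * d for w, d in zip(w_row, d_row)] for w_row, d_row in zip(W, delta)]

# Frozen 2x2 base weight and a rank-1 adapter (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]      # shape d x r
A = [[0.5, -0.5]]       # shape r x k
W_adapted = lora_update(W, B, A, alpha=1.0, r=1)
```

Only B and A are trained, so the number of trainable parameters scales with the rank r rather than with the full matrix, which is what makes fine-tuning feasible on a single local GPU.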
Module 3: Vector Databases & RAG
Embedding models, vector search, and building context-aware applications without the cloud.
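The retrieval step at the heart of RAG can be sketched in a few lines: embed the query, rank stored documents by cosine similarity, and feed the top matches to the model as context. The toy embeddings below are hand-made stand-ins for what an embedding model would produce; in the course you use ChromaDB for this.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec: list[float], docs: list[dict], k: int = 2) -> list[str]:
    """Return the texts of the k documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

docs = [
    {"text": "GPU setup guide", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Baking sourdough", "embedding": [0.0, 0.2, 0.9]},
    {"text": "CUDA driver install", "embedding": [0.8, 0.3, 0.1]},
]
top = retrieve([1.0, 0.2, 0.0], docs, k=2)
```

A vector database does the same ranking with approximate-nearest-neighbour indexes so it stays fast at millions of documents, all without the data leaving your infrastructure.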
Module 4: Edge Deployment & Optimization
Compiling models for edge devices and optimizing for low-latency inference.
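Optimizing for low latency starts with measuring it properly: warm up first, collect many samples, and report percentiles rather than a single average. A minimal harness (the workload here is a placeholder for a model inference call):

```python
import statistics
import time

def measure_latency(fn, runs: int = 50, warmup: int = 5) -> dict:
    """Time repeated calls to fn and report p50/p95 latency in milliseconds."""
    for _ in range(warmup):          # warm caches before measuring
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {"p50": statistics.median(samples), "p95": samples[int(0.95 * len(samples)) - 1]}

# Placeholder workload; in practice fn would run one inference on the edge device.
stats = measure_latency(lambda: sum(range(10_000)))
```

Tail percentiles matter on edge devices because thermal throttling and background load make the worst runs much slower than the median.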
Advanced Lab Access
Remote GPU cluster usage included
Expert Certificate
Validated by AI industry leaders
