Transformer Recap
Scaling Challenges
Knowledge & Hallucination
Specialized Tasks
MoE: The Big Picture
MoE Architecture Deep Dive
MoE Benefits & Trade-offs
MoE in Product Dev
RAG: The Core Idea
RAG Architecture Deep Dive
RAG Benefits & Trade-offs
RAG in Product Dev
MoE vs. RAG
Hybrid Architectures
Pre-Training vs. Fine-Tuning
Full Fine-Tuning Explained
Prompt-based Tuning Basics
When to Full Fine-Tune?
Full Fine-Tuning Workflow
Data for Full Fine-Tuning
Training Full Fine-Tuned LLMs
Prompt Engineering Deep Dive
Prompt Templates & Variables
Prompt Tuning Explained
Soft Prompts vs. Hard Prompts
Data & Compute Constraints
Task & Performance Needs
Methodology Selection Matrix
Full Fine-Tuning's Limits
Enter PEFT: The Solution
PEFT's Core Idea
LoRA: The Big Idea
LoRA's Math Magic
Implementing LoRA
LoRA Hyperparameters
QLoRA: Quantized LoRA
QLoRA in Action
Other PEFT Methods
PEFT Method Comparison
PEFT for Your Product
PEFT Best Practices
PEFT's Future
Data Needs for Fine-Tuning
Cleaning & Preprocessing Data
Dataset Formatting & Tools
Key Fine-Tuning Hyperparameters
Systematic Tuning Methods
Advanced Tuning Techniques
GPU Memory Optimization
Distributed Training
Inference Optimization
Cloud Resource Management
Common Training Issues
Debugging Tools & Techniques
Performance Debugging
Model Output Debugging
Why Evaluate LLMs?
Intrinsic Metrics
Extrinsic Metrics
Robustness & Reliability
Uncertainty & Calibration
The Human Touch
Designing HITL Studies
Integrating Feedback
Understanding LLM Bias
Bias in Data
Model-Level Bias Fixes
What is LLM Alignment?
Alignment Techniques
Ethical Guidelines
What are Multi-Modal LLMs?
Architectures for Multi-Modal
Multi-Modal Use Cases
Challenges in Multi-Modal
LLMs as Agents
Agentic System Design
Self-Correction Mechanisms
Self-Correction in Practice
Ethical AI & Governance
Personalized AI
AI for Scientific Discovery
AI and Human Creativity
The Future of AGI
Staying Ahead of the Curve