01. Module 03: Prompt Engineering
Master the art and science of communicating with large language models - from basic zero-shot instructions to automated prompt optimization with DSPy.

02. Zero-Shot Prompting
Learn how to elicit reliable behavior from LLMs using only instructions - no examples required - by mastering prompt anatomy, role personas, and format control.

03. Few-Shot Prompting
Master in-context learning by providing carefully selected examples that demonstrate the exact behavior you want - without any model fine-tuning.

04. Chain-of-Thought Prompting
Learn how to unlock multi-step reasoning in LLMs by making them think out loud - and why this simple technique dramatically improves accuracy on complex tasks.

05. Tree-of-Thought Prompting
Explore multiple reasoning paths simultaneously using Tree-of-Thought - the technique that enables LLMs to backtrack, evaluate alternatives, and solve problems that defeat linear chain-of-thought.

06. ReAct Pattern
Learn how to build LLM agents that reason and act by interleaving thought and tool calls - the architectural pattern behind every modern AI assistant.

07. System Prompts and Context Design
Master the architecture of LLM conversations - how to design system prompts, manage context windows, and build production-grade context management systems.

08. Prompt Injection and Security
Understand how prompt injection attacks work, why they're hard to defend against, and how to build LLM systems that are resistant to manipulation.

09. Structured Output and JSON Mode
Reliably extract structured data from LLMs using JSON mode, function calling, Pydantic validation, and constrained decoding - the backbone of production LLM pipelines.

10. Prompt Optimization and DSPy
Move beyond manual prompt engineering to automated, evaluation-driven optimization - using APE, OPRO, and DSPy to build LLM pipelines that improve themselves.
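To preview how the first three prompting styles in this module differ, here is a minimal sketch of prompt construction. The helper names (`zero_shot`, `few_shot`, `chain_of_thought`) are hypothetical illustrations, not functions from the lessons themselves; a real pipeline would pass these strings to an LLM client.

```python
def zero_shot(task: str, role: str = "You are a helpful assistant.") -> str:
    """Zero-shot prompting: an instruction and optional persona, no examples."""
    return f"{role}\n\nTask: {task}"

def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot prompting: prepend input/output demonstrations so the
    model infers the pattern in-context, with no fine-tuning."""
    demos = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{demos}\n\nInput: {task}\nOutput:"

def chain_of_thought(task: str) -> str:
    """Chain-of-thought prompting: ask the model to reason step by step
    before answering."""
    return f"Task: {task}\n\nLet's think step by step."
```

Note how each style only changes the text of the prompt; the model and decoding parameters stay the same, which is what makes these techniques cheap to experiment with.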
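Lesson 09's core idea - validating an LLM's JSON reply against an expected schema before it enters a pipeline - can be sketched without any model at all. The lesson itself uses Pydantic; this dependency-free sketch uses a stdlib dataclass instead, and the `Invoice` schema and `parse_invoice` helper are illustrative assumptions.

```python
import json
from dataclasses import dataclass

@dataclass
class Invoice:
    """Hypothetical target schema for an extraction task."""
    vendor: str
    total: float

def parse_invoice(raw: str) -> Invoice:
    """Parse and validate a model's JSON reply.

    json.loads raises ValueError on malformed JSON, and the explicit
    field lookups raise KeyError on missing fields - so bad replies
    fail loudly here instead of corrupting downstream data.
    """
    data = json.loads(raw)
    return Invoice(vendor=str(data["vendor"]), total=float(data["total"]))
```

In production code, a Pydantic model would replace the manual field coercion and give richer error messages, but the control flow - parse, validate, then trust - is the same.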