Module 5: LLM Agents

01. Overview: LLM agents as autonomous systems that reason, plan, and act using tools, memory, and multi-agent coordination.
02. Tool Use and Function Calling: Enabling LLMs to invoke external tools and APIs through structured function calling, covering JSON schema design, Anthropic vs. OpenAI formats, parallel tool calls, and production safety.
03. ReAct Agent Pattern: Building LLM agents that interleave reasoning traces and actions in a ReAct loop to solve multi-step tasks with tool grounding.
04. Planning and Reasoning: How LLM agents handle complex multi-step tasks through plan-and-execute, hierarchical planning, self-reflection, and LangGraph-based workflows.
05. Memory Systems: Short-Term and Long-Term: Designing memory systems for LLM agents, from in-context working memory to episodic retrieval, semantic knowledge bases, and procedural memory.
06. Multi-Agent Architectures: Building systems where multiple specialized LLM agents collaborate through orchestrator-worker, pipeline, and peer-to-peer patterns using LangGraph and CrewAI.
07. Agent Evaluation: Measuring LLM agent performance through trajectory analysis, benchmark suites, LLM-as-judge, failure taxonomies, and production monitoring strategies.
08. LangChain Deep Dive: A thorough guide to LangChain's core abstractions, LCEL composable pipelines, LangGraph stateful workflows, LangSmith observability, and when to use LangChain vs. direct API calls.
09. LlamaIndex Deep Dive: A comprehensive guide to LlamaIndex's data-centric architecture (indices, query engines, workflows, multi-document agents) and how it compares to LangChain for RAG applications.
10. Agent Safety and Guardrails: Implementing defense-in-depth safety for production LLM agents, including prompt injection defense, input/output guardrails, tool sandboxing, human-in-the-loop confirmation, and audit logging.
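As a taste of what lessons 02 and 03 build up to, here is a minimal, library-free sketch of the ReAct loop: the agent alternates Thought, Action, and Observation until it emits a final answer. The model is replaced by a scripted stand-in (no real LLM call), and the `calculator` tool is a hypothetical toy; both are illustrative assumptions, not this module's reference implementation.

```python
# Minimal ReAct-style loop: Thought -> Action -> Observation, repeated
# until the "model" emits a final answer. The model here is a scripted
# stand-in for an LLM; a real agent would call a model API each turn.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression (no builtins)."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

# Scripted model turns: (thought, action, action_input); "final" ends the loop.
SCRIPT = [
    ("I need to compute the product first.", "calculator", "6 * 7"),
    ("I have the result; answer the user.", "final", "42"),
]

def react_loop(script):
    trace = []
    for thought, action, arg in script:
        if action == "final":
            return arg, trace
        observation = TOOLS[action](arg)  # ground the next step in a tool result
        trace.append((thought, action, arg, observation))
    return None, trace

answer, trace = react_loop(SCRIPT)
```

In a real agent the trace of (thought, action, observation) tuples is fed back into the prompt on each turn, which is exactly the pattern lesson 03 develops with live model calls.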