01. Module 14 Overview - Model Merging
    How to combine multiple fine-tuned language models into a single, more capable model without any additional training.

02. Why Model Merging Exists
    The catastrophic forgetting problem, why naive ensembles are too expensive, and the surprising geometric insight that makes model merging possible.

03. Linear Interpolation and Model Soup
    How weight averaging of fine-tuned models produces better, more robust models than any individual fine-tune, along with the task arithmetic framework for composing capabilities (sketched in code after this list).

04. TIES Merging - Resolving Sign Conflicts
    How TIES-Merging eliminates task interference by trimming small deltas, electing signs by majority vote, and merging only aligned parameters (see the TIES sketch below).

05. DARE - Delta Weight Sparsification
    How DARE randomly drops delta weights and rescales the remainder to dramatically reduce interference when merging multiple fine-tuned models (see the DARE sketch below).

06. SLERP - Spherical Linear Interpolation
    How spherical linear interpolation provides smoother, geometrically correct blending between two model weight configurations than simple linear averaging (see the SLERP sketch below).

07. MergeKit - The Practical Toolkit
    How to use arcee-ai/mergekit to merge language models with YAML configuration, CPU-compatible layer-by-layer processing, and automated HuggingFace Hub upload (an example configuration follows below).

08. Frankenmodels and Limitations of Model Merging
    Layer grafting, depth upscaling, Solar 10.7B, and the fundamental limits of what model merging can and cannot achieve (see the depth-upscaling sketch below).
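
To make lesson 03 concrete, here is a minimal sketch of uniform weight averaging (a "model soup") and of task arithmetic over state dicts. The function names, the `scale` parameter, and the assumption that all checkpoints share one architecture are illustrative, not the module's own code.

```python
import torch

def uniform_soup(state_dicts):
    """Average several fine-tuned checkpoints parameter by parameter."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

def task_arithmetic(base_sd, finetuned_sds, scale=0.5):
    """Add scaled task vectors (fine-tuned minus base) onto the base model."""
    merged = {}
    for key in base_sd:
        task_vectors = [sd[key].float() - base_sd[key].float() for sd in finetuned_sds]
        merged[key] = base_sd[key].float() + scale * sum(task_vectors)
    return merged
```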
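
Lesson 04's trim/elect/merge pipeline can be sketched per tensor as below. The `density` default, the `kthvalue`-based trimming, and the sign election by summed deltas are my simplifying assumptions, not the paper's exact implementation.

```python
import torch

def ties_merge_tensor(base, finetuned, density=0.2):
    """TIES-style merge of one parameter tensor: trim, elect sign, disjoint mean."""
    deltas = torch.stack([ft - base for ft in finetuned])

    # Trim: keep only the top-`density` fraction of each delta by magnitude.
    k = max(1, int(density * base.numel()))
    flat = deltas.abs().reshape(len(finetuned), -1)
    thresholds = flat.kthvalue(base.numel() - k + 1, dim=1).values
    keep = deltas.abs() >= thresholds.view(-1, *([1] * base.dim()))
    trimmed = torch.where(keep, deltas, torch.zeros_like(deltas))

    # Elect: per-parameter sign chosen by the sum of the trimmed deltas.
    elected = torch.sign(trimmed.sum(dim=0))

    # Merge: average only the deltas that agree with the elected sign.
    agree = (torch.sign(trimmed) == elected) & (trimmed != 0)
    summed = torch.where(agree, trimmed, torch.zeros_like(trimmed)).sum(dim=0)
    count = agree.sum(dim=0).clamp(min=1)
    return base + summed / count
```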
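
DARE (lesson 05) is even simpler to sketch: Bernoulli-drop most of each delta, then rescale the survivors by 1/(1-p) so the expected update is unchanged. The 0.9 drop rate is a hypothetical default.

```python
import torch

def dare_tensor(base, finetuned, drop_rate=0.9):
    """DARE-style sparsification of one delta: random drop, then rescale."""
    delta = finetuned - base
    keep_mask = torch.bernoulli(torch.full_like(delta, 1.0 - drop_rate))
    return base + keep_mask * delta / (1.0 - drop_rate)
```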
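
For lesson 06, a minimal per-tensor SLERP between two checkpoints. Treating each tensor as one flattened vector and falling back to linear interpolation when the weights are nearly parallel are my simplifications.

```python
import torch

def slerp_tensor(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    cos_theta = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    theta = torch.acos(cos_theta)
    if theta < 1e-4:  # nearly parallel: SLERP degenerates to LERP
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    sin_theta = torch.sin(theta)
    w_a = torch.sin((1 - t) * theta) / sin_theta
    w_b = torch.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape).to(a.dtype)
```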
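
Lesson 07's workflow can be previewed with a small SLERP merge config. The model names are placeholders, and the YAML keys follow mergekit's documented format as I understand it, so treat this as a sketch to verify against the mergekit docs rather than a tested recipe.

```python
# Write a minimal mergekit SLERP config; model names are placeholders.
config = """\
merge_method: slerp
base_model: org/model-a
slices:
  - sources:
      - model: org/model-a
        layer_range: [0, 32]
      - model: org/model-b
        layer_range: [0, 32]
parameters:
  t: 0.5
dtype: bfloat16
"""

with open("merge-config.yml", "w") as f:
    f.write(config)

# Then run the merge from the command line, e.g.:
#   mergekit-yaml merge-config.yml ./merged-model --copy-tokenizer
```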
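
Finally, lesson 08's depth upscaling (the recipe behind Solar 10.7B) amounts to grafting two overlapping layer ranges into a deeper stack. This sketch assumes a HuggingFace-style decoder whose blocks live in an `nn.ModuleList`; the function and its range arguments are hypothetical.

```python
import copy
import torch.nn as nn

def depth_upscale(layers, first_range, second_range):
    """Stack two (typically overlapping) layer ranges into a deeper model."""
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    grafted = [layers[i] for i in range(lo1, hi1)]
    # Deep-copy the second range so overlapping layers don't share parameters.
    grafted += [copy.deepcopy(layers[i]) for i in range(lo2, hi2)]
    return nn.ModuleList(grafted)

# e.g. turning a 32-layer model into 48 layers, Solar-style:
# model.model.layers = depth_upscale(model.model.layers, (0, 24), (8, 32))
```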