Open Source Models

LLaMA, Mistral, Qwen, DeepSeek - running locally, fine-tuning with LoRA and QLoRA, quantization, evaluation, and production deployment with vLLM.

7 Modules
56 Lessons
Free to start

7 Modules. From Download to Production.

Own your models. No API costs. No rate limits. Full control.

01
Beginner · Free

Model Ecosystem

LLaMA, Mistral, Qwen, DeepSeek, Phi - the open-source landscape, architectures, and licensing.

What you'll master

  • LLaMA Family Architecture
  • Mistral and Mixtral Architecture
  • Qwen, DeepSeek, and International Models
  • Phi and Small Language Models
  • Code and Math Specialized Models
  • Multimodal Open-Source Models
  • Model Licensing and Compliance
  • HuggingFace Hub and Model Cards

8 lessons


Start for Free →
02
Beginner · Free

Running Locally

llama.cpp, Ollama, MLX, LM Studio - running 70B models on consumer hardware, GGUF, and benchmarking.

What you'll master

  • llama.cpp and GGUF Format
  • Ollama and Local Model Management
  • MLX for Apple Silicon
  • LM Studio and GUI Tools
  • Docker and Containerized Local Inference
  • Hardware Requirements and Selection
  • Privacy and Air-Gapped Deployment
  • Benchmarking Local Model Performance

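The "Hardware Requirements and Selection" lesson comes down to arithmetic you can do yourself: weight memory is parameters times bits per weight, plus headroom for the KV cache and runtime buffers. A minimal sketch, assuming an illustrative ~4.5 bits/weight for a GGUF Q4_K_M-style quant and a 10% overhead allowance (both rough rules of thumb, not fixed constants):

```python
def estimate_model_memory_gb(n_params_billion: float, bits_per_weight: float,
                             overhead_fraction: float = 0.1) -> float:
    """Back-of-envelope memory estimate for loading a model's weights.

    bits_per_weight: e.g. 16 for FP16, ~4.5 for a Q4_K_M-style GGUF (assumed).
    overhead_fraction: illustrative allowance for KV cache and buffers.
    """
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead_fraction) / 1e9

# Why a 70B model fits on a high-RAM MacBook at Q4 but not in FP16:
print(f"70B @ ~4.5 bpw: {estimate_model_memory_gb(70, 4.5):.0f} GB")
print(f"70B @ FP16:     {estimate_model_memory_gb(70, 16):.0f} GB")
```

The same arithmetic explains why quantization (module 04) is the enabling technology for consumer hardware: dropping from 16 to ~4.5 bits per weight cuts the footprint by more than 3x.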
8 lessons


Start for Free →
03
Intermediate · Free

LoRA and QLoRA Fine-Tuning

LoRA mathematics, QLoRA 4-bit training, Axolotl and TRL frameworks, and model merging.

What you'll master

  • LoRA Mathematics and Implementation
  • QLoRA 4-Bit Fine-Tuning
  • Selecting Target Modules and Rank
  • Training Data Preparation
  • Axolotl and TRL Training Frameworks
  • Evaluating Fine-Tuned Models
  • Advanced PEFT Methods
  • Merging and Model Soup Techniques

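The core of "LoRA Mathematics and Implementation" fits in a few lines: the frozen weight W gets a trainable low-rank update (alpha/r)·BA, with B initialized to zero so training starts from the base model exactly. A minimal NumPy sketch (dimensions and alpha are illustrative values, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r, alpha = 64, 64, 8, 16           # output dim, input dim, rank, scaling (illustrative)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init -> update starts at zero

def lora_forward(x, W, A, B, alpha, r):
    # y = x W^T + (alpha/r) * x A^T B^T -- the d*k delta matrix is never materialized
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((2, k))
# With B = 0, the LoRA model's output equals the base model's output exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)

# Merging for deployment: W' = W + (alpha/r) B A, after which the adapter is dropped
# and inference costs the same as the base model.
W_merged = W + (alpha / r) * B @ A
```

This also shows why LoRA is cheap: only A and B (2·r·d parameters here) receive gradients, a small fraction of the d·k frozen weight.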
8 lessons


Start for Free →
04
Intermediate · Free

Quantization in Practice

GPTQ, AWQ, PTQ, QAT - quantizing open-source models, quality vs speed tradeoffs, and production deployment.

What you'll master

  • Post-Training Quantization Methods
  • GPTQ In Depth
  • AWQ In Depth
  • Quantization-Aware Training
  • Quantization Benchmarking
  • Quantization for Vision Models
  • Quantization Error Debugging
  • Deploying Quantized Models in Production

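The baseline the "Post-Training Quantization Methods" lesson starts from is per-tensor symmetric round-to-nearest; GPTQ and AWQ exist precisely because this naive approach degrades at low bit widths. A self-contained sketch of the baseline and the error it introduces:

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, n_bits: int):
    """Per-tensor symmetric round-to-nearest PTQ -- the simplest baseline.
    GPTQ adds per-column error correction; AWQ scales activation-sensitive channels."""
    qmax = 2 ** (n_bits - 1) - 1                    # 127 for int8, 7 for int4
    scale = np.abs(w).max() / qmax                  # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)    # stand-in for one weight tensor
for bits in (8, 4):
    q, s = quantize_symmetric(w, bits)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"int{bits}: mean abs reconstruction error {err:.4f}")
```

Running it makes the quality-vs-size tradeoff concrete: int4 error is an order of magnitude worse than int8 for the same tensor, which is the gap the smarter methods in this module work to close.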
8 lessons


Start for Free →
05
Intermediate · Free

Fine-Tuning Pipelines

Instruction tuning, RLHF, DPO, continual learning, synthetic data, and fine-tuning economics.

What you'll master

  • Full Fine-Tuning vs PEFT
  • Instruction Tuning at Scale
  • RLHF and DPO for Open Models
  • Continual Learning and Domain Adaptation
  • Synthetic Data and Self-Improvement
  • Fine-Tuning Hyperparameter Search
  • Monitoring and Debugging Training
  • Fine-Tuning Cost and ROI Analysis

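The "RLHF and DPO for Open Models" lesson hinges on one equation: DPO replaces the RLHF reward model and PPO loop with a single loss, -log sigmoid(beta·[(log-prob margin of the chosen answer over the reference) minus (the same margin for the rejected answer)]). A minimal sketch for one preference pair, using sequence-level log-probs and an illustrative beta of 0.1:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single (chosen, rejected) pair.
    Inputs are total sequence log-probs under the trained policy and the
    frozen reference model; beta = 0.1 is an illustrative value."""
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1 / (1 + math.exp(-logits)))   # -log sigmoid(logits)

# A policy that has shifted probability toward the chosen answer (relative to
# the reference) gets a lower loss than one that shifted toward the rejected one.
better = dpo_loss(-10.0, -14.0, -12.0, -12.0)
worse = dpo_loss(-14.0, -10.0, -12.0, -12.0)
assert better < worse
```

In practice frameworks like TRL compute these log-probs in batches over token sequences; the sketch only isolates the loss itself.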
8 lessons


Start for Free →
06
Intermediate · Free

Evaluating Open Models

Open LLM leaderboards, safety evaluation, hallucination testing, and building custom eval harnesses.

What you'll master

  • Open LLM Leaderboard and Benchmarks
  • Task-Specific Evaluation Design
  • Safety and Bias Evaluation
  • Factuality and Hallucination Evaluation
  • Code Generation Evaluation
  • Reasoning and Math Evaluation
  • Long Context Evaluation
  • Building an Evaluation Harness

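The "Building an Evaluation Harness" lesson reduces to a loop: run each case through the model, score the output, aggregate. A minimal exact-match sketch with a stub standing in for a local inference call (the stub and its answers are hypothetical; real harnesses add answer normalization, few-shot prompting, and per-task metrics):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str

def run_harness(model: Callable[[str], str], cases: list[EvalCase]) -> dict:
    """Score a model on exact-match cases and report aggregate accuracy."""
    results = [model(c.prompt).strip() == c.expected for c in cases]
    return {"accuracy": sum(results) / len(results), "n": len(results)}

# Hypothetical stub in place of a real call to a local model (e.g. Ollama or vLLM).
def stub_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unsure"

cases = [
    EvalCase("What is 2 + 2?", "4"),
    EvalCase("What is the capital of France?", "Paris"),
]
report = run_harness(stub_model, cases)
print(report)
```

Swapping the scoring function is where the module's other lessons plug in: a safety classifier, a code execution sandbox, or an LLM judge all fit the same `Callable` slot.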
8 lessons


Start for Free →
07
Advanced · Free

Production Deployment

vLLM, TGI, Kubernetes auto-scaling, load balancing, monitoring, and multi-model serving.

What you'll master

  • vLLM Architecture and Deployment
  • TGI and Alternatives
  • Kubernetes and Auto-Scaling for LLMs
  • Load Balancing and Request Routing
  • Monitoring LLM Services
  • Rate Limiting and Cost Control
  • Model Versioning and Canary Releases
  • Multi-Model Serving Architecture

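The "Load Balancing and Request Routing" lesson sits in front of everything else in this module: a router spreads incoming requests across replica servers (vLLM, TGI, or anything else behind an HTTP endpoint). A minimal round-robin sketch; the backend URLs are hypothetical, and production routers add health checks, least-load selection, and sticky routing for prefix-cache reuse:

```python
import itertools

class RoundRobinRouter:
    """Minimal request router over a pool of model-server replicas --
    a stand-in for the load balancer in front of vLLM/TGI instances."""

    def __init__(self, backends: list[str]):
        self._cycle = itertools.cycle(backends)   # endless even rotation

    def pick(self) -> str:
        """Return the backend the next request should go to."""
        return next(self._cycle)

# Hypothetical replica endpoints.
backends = ["http://llm-0:8000", "http://llm-1:8000", "http://llm-2:8000"]
router = RoundRobinRouter(backends)
picks = [router.pick() for _ in range(6)]
assert picks == backends * 2   # six requests spread evenly, two per replica
```

Round-robin is the simplest policy; because LLM requests vary wildly in generation length, real deployments usually prefer least-outstanding-requests routing, which the same `pick()` interface can accommodate.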
8 lessons


Start for Free →

Run your own models. Own your AI stack.

From GGUF on a MacBook to 70B vLLM clusters - every step covered.

Start Learning Free →