Orchestration Pattern Explorer

Each pattern below lists which frameworks support it and what the code looks like.

➡️ Sequential Pipeline

Agents run one after another. Each agent receives the output of the previous agent. The simplest pattern — and the most common in real pipelines. Researcher → Writer → Reviewer.

SynapseKit
✓ Built-in
Crew(process="sequential") executes tasks in declaration order. context_from wires outputs.

    crew = Crew(
        agents=[r, w],
        tasks=tasks,
        process="sequential",
    )
    await crew.run()
LangChain
✓ Built-in
LangGraph linear edge chain. Each node is a Python function. Edges define the order.

    graph.add_edge("researcher", "writer")
    graph.add_edge("writer", END)
    app = graph.compile()
LlamaIndex
✓ Built-in
AgentWorkflow with handoff tools. The researcher agent calls handoff_to_writer when done.

    workflow = AgentWorkflow(
        agents=[researcher, writer],
        root_agent="researcher",
    )
Use when: Your pipeline has a fixed linear order. No branching, no parallel work. Most RAG pipelines, report generation, and summarisation chains fit this pattern.
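The pattern itself is framework-independent. A minimal sketch in plain Python, where each "agent" is just a function receiving the previous function's output (the agent names and string outputs here are illustrative, not from any of the libraries above):

```python
# Each agent is a plain function: output of one becomes input of the next.
def research(topic: str) -> str:
    return f"Notes on {topic}"

def write(notes: str) -> str:
    return f"Draft based on: {notes}"

def review(draft: str) -> str:
    return f"Approved: {draft}"

def run_pipeline(topic: str) -> str:
    output = topic
    # Fixed linear order: research -> write -> review
    for agent in (research, write, review):
        output = agent(output)
    return output

print(run_pipeline("solar power"))
# -> Approved: Draft based on: Notes on solar power
```

Every framework above is, at bottom, sugar over this loop plus LLM calls inside each function.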
⚡ Parallel Execution

Multiple agents run simultaneously on independent subtasks, then results merge. Total wall-clock latency drops to roughly the slowest branch rather than the sum of all branches. Critical for any pipeline where subtasks don't depend on each other.

SynapseKit
✓ Built-in
Crew(process="parallel") runs all tasks concurrently. No graph wiring needed.

    crew = Crew(
        agents=[a1, a2, a3],
        tasks=tasks,
        process="parallel",
    )
    await crew.run()
LangChain
✓ Built-in
LangGraph parallel branches. Multiple edges from one node split into concurrent execution, then merge at a join node.

    graph.add_edge("router", "agent_a")
    graph.add_edge("router", "agent_b")  # both run concurrently
    graph.add_edge("agent_a", "merge")
    graph.add_edge("agent_b", "merge")
LlamaIndex
✗ Not supported
AgentWorkflow is handoff-only. Agents execute sequentially via tool calls. No parallel branch support in core.

    # Must build manually:
    results = await asyncio.gather(
        agent_a.run(task),
        agent_b.run(task),
    )
    # AgentWorkflow can't do this
Use when: You have independent subtasks that can run simultaneously. Example: a pipeline that searches 3 databases in parallel, then merges results before generating a response. LlamaIndex requires manual async coordination.
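Outside any framework, the fan-out-then-merge shape is plain asyncio. A minimal sketch of the three-database example, with search_a/search_b/search_c as hypothetical stand-ins for real agent calls:

```python
import asyncio

# Three independent "searches"; asyncio.gather runs them concurrently
# and returns results in call order, ready for the merge step.
async def search_a(q: str) -> str:
    await asyncio.sleep(0.01)  # stands in for real I/O latency
    return f"a:{q}"

async def search_b(q: str) -> str:
    await asyncio.sleep(0.01)
    return f"b:{q}"

async def search_c(q: str) -> str:
    await asyncio.sleep(0.01)
    return f"c:{q}"

async def fan_out(query: str) -> str:
    results = await asyncio.gather(
        search_a(query), search_b(query), search_c(query)
    )
    return " | ".join(results)  # merge step

print(asyncio.run(fan_out("llm orchestration")))
# -> a:llm orchestration | b:llm orchestration | c:llm orchestration
```

Note that total wait is ~0.01s, not ~0.03s: the branches overlap, which is the whole point of the pattern.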
🎯 Supervisor Routing

A router agent receives the task and decides which specialist agent to invoke. The supervisor is not an LLM wrapper — it's a routing node that dispatches work. Essential for systems with multiple specialist domains.

SynapseKit
✓ Built-in
SupervisorAgent(llm, workers) routes based on LLM decision or explicit routing rules.

    supervisor = SupervisorAgent(
        llm=llm,
        workers=[researcher, analyst, writer],
    )
    await supervisor.run(task)
LangChain
✓ Built-in
Supervisor node pattern in LangGraph. A router function uses add_conditional_edges to dispatch to the right specialist.

    def route(state):
        if "research" in state["task"]:
            return "researcher"
        return "writer"

    graph.add_conditional_edges("supervisor", route)
LlamaIndex
✗ Not built-in
AgentWorkflow has no supervisor primitive. You'd use a root agent with multiple handoff tools — but routing is LLM-driven, not deterministic.

    # Closest approximation: a root agent
    # with a handoff tool to each specialist.
    # Routing depends on the LLM choosing
    # the right tool. Not deterministic.
Use when: You have multiple specialist agents (coding assistant, data analyst, document writer) and need a router to decide which one handles each incoming request. LlamaIndex's LLM-driven approach works for simple cases but is non-deterministic.
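The deterministic-rules variant of the supervisor can be sketched without any framework: a routing table maps keywords to specialists, with a default worker as fallback. All names and rules here are invented for illustration:

```python
# Specialist "agents" are plain functions; in practice each would call an LLM.
def researcher(task: str) -> str:
    return f"research: {task}"

def analyst(task: str) -> str:
    return f"analysis: {task}"

def writer(task: str) -> str:
    return f"draft: {task}"

# Deterministic routing table: keyword -> specialist.
ROUTES = {"research": researcher, "analyze": analyst}

def supervise(task: str) -> str:
    for keyword, worker in ROUTES.items():
        if keyword in task.lower():
            return worker(task)
    return writer(task)  # default specialist when no rule matches

print(supervise("Research battery chemistry"))
# -> research: Research battery chemistry
```

Swapping the keyword match for an LLM call that returns a worker name gives you the LLM-driven variant; the dispatch skeleton stays the same.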
🤝 Handoff Chain

An agent explicitly passes a task to the next agent in a chain, with the ability to include context, instructions, or partial results with the handoff. Different from sequential — the agent itself decides when and how to hand off.

SynapseKit
✓ Built-in
HandoffChain().add_agent() builds an explicit chain. Each agent receives the previous agent's output plus a system prompt for the handoff.

    chain = HandoffChain()
    chain.add_agent(researcher)
    chain.add_agent(writer)
    chain.add_agent(reviewer)
    await chain.run(topic)
LangChain
✗ Manual only
No built-in handoff primitive. Implemented via LangGraph edges — each edge is a handoff. More explicit but requires more code per handoff step.

    # Implemented as graph edges:
    graph.add_edge("agent_a", "agent_b")
    # State carries the "handoff"
    # implicitly through the TypedDict
LlamaIndex
✓ Built-in
The core primitive in AgentWorkflow. Each agent has a can_handoff_to list and calls a FunctionTool to pass work to the next agent.

    agent_a = FunctionAgent(
        tools=[FunctionTool.from_defaults(fn=handoff_to_b)],
        can_handoff_to=["agent_b"],
    )
Use when: You want agents to control the handoff moment themselves rather than a central orchestrator deciding. LlamaIndex is designed for this. LangChain requires manual wiring. SynapseKit's HandoffChain provides an explicit sequential version.
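The core idea, that each agent returns both its result and its chosen next agent, can be sketched in plain Python. The Handoff type and agent names below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Handoff:
    to: Optional[str]  # name of the next agent, or None to end the chain
    context: str       # payload carried along with the handoff

def researcher(ctx: str) -> Handoff:
    # The researcher itself decides to hand off to the writer,
    # attaching its partial result as context.
    return Handoff(to="writer", context=f"facts({ctx})")

def writer(ctx: str) -> Handoff:
    # The writer produces the final result and ends the chain.
    return Handoff(to=None, context=f"article({ctx})")

AGENTS = {"researcher": researcher, "writer": writer}

def run_chain(start: str, task: str) -> str:
    current, ctx = start, task
    while current is not None:
        step = AGENTS[current](ctx)
        current, ctx = step.to, step.context
    return ctx

print(run_chain("researcher", "volcanoes"))
# -> article(facts(volcanoes))
```

The key difference from the sequential sketch: the loop doesn't know the order in advance; each agent's return value names the next hop.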
πŸ•ΈοΈ Graph / DAG Orchestration

Agents are nodes in a directed graph. Edges define transitions; conditional edges implement branching logic. The most powerful pattern β€” supports cycles (retry loops), branches, and merges. Required for complex conditional workflows.

SynapseKit
✓ Built-in
StateGraph (synapsekit.graph) mirrors LangGraph's API. Nodes, edges, conditional edges, shared state.

    from synapsekit.graph import StateGraph

    g = StateGraph(AgentState)
    g.add_node("research", research_fn)
    g.add_node("write", write_fn)
    g.add_conditional_edges("research", should_retry)
LangChain
✓ Core primitive
LangGraph StateGraph is the core abstraction. Typed state, explicit edges, conditional routing. The most mature graph orchestration primitive available.

    graph = StateGraph(State)
    graph.add_node("n1", fn1)
    graph.add_node("n2", fn2)
    graph.add_conditional_edges(
        "n1", lambda s: "n2" if s["ok"] else END
    )
LlamaIndex
✗ Not available
AgentWorkflow is not a DAG. It's a linear handoff chain. No support for conditional branching, parallel branches, or cycles that aren't driven by LLM tool calls.

    # AgentWorkflow has no graph API.
    # For DAG-style orchestration on LlamaIndex,
    # build your own execution loop
    # or use a different library.
Use when: Your workflow has conditional logic — retry if quality is low, branch based on query type, loop until a condition is met. LangGraph is the gold standard here. SynapseKit's StateGraph works but has less documentation and community maturity.
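To make the node/edge/conditional-edge mechanics concrete, here is a toy executor in plain Python, including a retry loop. It is a sketch of the pattern, not any library's API; the node names and quality rule are invented:

```python
# Nodes are functions over a shared state dict; edges map each node to
# the next, and a callable edge implements conditional routing.
def research(state: dict) -> dict:
    state["attempts"] = state.get("attempts", 0) + 1
    state["quality"] = state["attempts"]  # pretend quality improves per retry
    return state

def write(state: dict) -> dict:
    state["article"] = f"article after {state['attempts']} attempts"
    return state

NODES = {"research": research, "write": write}
EDGES = {
    # Conditional edge: loop back to research until quality passes.
    "research": lambda s: "write" if s["quality"] >= 2 else "research",
    "write": lambda s: None,  # terminal node
}

def run_graph(entry: str, state: dict) -> dict:
    node = entry
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node](state)
    return state

print(run_graph("research", {})["article"])
# -> article after 2 attempts
```

LangGraph's add_conditional_edges is this same idea with typed state, checkpointing, and streaming layered on top.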
📦 Shared State

A single state object flows through all agents. Each agent can read any previous agent's output and write new fields. Enables complex context passing without explicit handoff messages.

SynapseKit
✓ Built-in
Task(context_from=["agent_name"]) injects previous agent output into the task description. Crew maintains an internal context dict across the run.

    Task(
        description="Write a paragraph.",
        agent="writer",
        context_from=["researcher"],
        expected_output="One paragraph",
    )
LangChain
✓ Core primitive
TypedDict State flows through all nodes. Each node reads from and writes to the same state dict. Fully typed, inspectable at any point in the graph.

    class State(TypedDict):
        topic: str
        research: str  # set by agent 1
        article: str   # set by agent 2

    def writer(state: State) -> State:
        # reads state["research"]
        ...
LlamaIndex
✓ Built-in
AgentWorkflow passes context between agents via the handoff tool's arguments and the workflow's shared context object.

    def handoff_to_writer(summary: str) -> str:
        """Pass research to writer."""
        return summary  # summary is the shared context
Use when: Agents downstream need the full output of agents upstream — not just the final answer, but intermediate work products. LangChain's TypedDict State is the most explicit and debuggable. SynapseKit's context_from is simpler but less transparent.
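A framework-free sketch of the same idea, in the TypedDict style the LangGraph snippet uses. One state dict flows through every node; each node reads earlier fields and writes its own (node names and fields are illustrative):

```python
from typing import TypedDict

# total=False: fields are filled in progressively as nodes run.
class State(TypedDict, total=False):
    topic: str
    research: str  # written by the research node
    article: str   # written by the write node

def research_node(state: State) -> State:
    state["research"] = f"facts about {state['topic']}"
    return state

def write_node(state: State) -> State:
    # A downstream node reads the upstream node's output directly
    # from shared state; no explicit handoff message needed.
    state["article"] = f"article using {state['research']}"
    return state

state: State = {"topic": "tides"}
for node in (research_node, write_node):
    state = node(state)

print(state["article"])
# -> article using facts about tides
```

Because every intermediate field survives in the state, any later node (or a debugger) can inspect the full history of the run, which is the debuggability argument made above.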
www.engineersofai.com · AI Letters #26 · LLM Showdown Notebook #18