Async Throughput Evidence Dashboard
All benchmark data from LLM Showdown #22
Async Efficiency at 50 Concurrent Requests
Efficiency = measured req/s as a share of the 1,000 req/s theoretical maximum at n=50
SynapseKit
96.8%
967.5 req/s · 1.7ms overhead
LangChain
80.8%
808.3 req/s · 11.9ms overhead
LlamaIndex
92.7%
927.2 req/s · 3.9ms overhead
SynapseKit BaseTool.run(): Thin async wrapper. Validates input against JSON schema, calls the function, returns result. No middleware chain, no callback infrastructure. The 1.7ms overhead at n=50 is object construction and schema validation — nearly invisible.
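SynapseKit's source isn't reproduced in the notebook, so the sketch below is a hypothetical reconstruction of the "validate, call, return" path described above, using the jsonschema package for input validation; the ThinAsyncTool and fetch_price names are illustrative, not SynapseKit's actual API.

```python
# Hypothetical sketch -- not SynapseKit's source. Illustrates a thin async
# wrapper: schema validation, then a direct await, with no middleware chain.
import asyncio
from jsonschema import validate  # pip install jsonschema


class ThinAsyncTool:
    """Minimal async tool wrapper: validate input, call the function, return."""

    def __init__(self, fn, input_schema: dict):
        self.fn = fn                    # the user's async function
        self.input_schema = input_schema

    async def run(self, **kwargs):
        validate(instance=kwargs, schema=self.input_schema)  # the per-call validation cost
        return await self.fn(**kwargs)                       # straight through to the coroutine


async def fetch_price(symbol: str) -> float:
    await asyncio.sleep(0.05)           # stand-in for a ~50 ms I/O-bound call
    return 123.45


tool = ThinAsyncTool(
    fetch_price,
    {"type": "object", "properties": {"symbol": {"type": "string"}}},
)
print(asyncio.run(tool.run(symbol="ACME")))
```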
LangChain RunnableLambda.ainvoke(): Every invocation traverses the LCEL Runnable protocol — input validation, callback manager, tracing hooks, output parsing. Powerful for composition (chain1 | chain2) but adds 11.9ms per batch at n=50. The overhead is 7x SynapseKit's.
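For comparison, this is roughly the path being measured, sketched with the public langchain_core API (import paths as of recent langchain-core releases; the simulated 50 ms call and fetch_price are placeholders, not the notebook's code).

```python
# Sketch of the LCEL path: every ainvoke() goes through the Runnable
# protocol (config merging, callback manager) before the wrapped coroutine.
import asyncio
from langchain_core.runnables import RunnableLambda


async def fetch_price(symbol: str) -> float:
    await asyncio.sleep(0.05)  # simulated ~50 ms I/O-bound call
    return 123.45


tool = RunnableLambda(fetch_price)

# LCEL composition: `|` builds a RunnableSequence, adding one protocol
# traversal per step.
chain = tool | RunnableLambda(lambda price: f"${price:.2f}")


async def main():
    results = await asyncio.gather(*(chain.ainvoke("ACME") for _ in range(50)))
    print(results[0])


asyncio.run(main())
```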
LlamaIndex FunctionTool.acall(): Moderate overhead. Some validation and dispatch logic but no LCEL-style chain traversal. 3.9ms at n=50 puts it cleanly between the other two. The async path is cleaner than expected.
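And a comparable sketch of the LlamaIndex path via llama_index.core (signatures as of recent llama-index-core releases; the price functions are placeholders, not the notebook's code).

```python
# Sketch of the FunctionTool path: acall() validates/dispatches to the
# registered async function, without an LCEL-style chain traversal.
import asyncio
from llama_index.core.tools import FunctionTool


def fetch_price(symbol: str) -> float:
    return 123.45


async def afetch_price(symbol: str) -> float:
    await asyncio.sleep(0.05)  # simulated ~50 ms I/O-bound call
    return 123.45


tool = FunctionTool.from_defaults(
    fn=fetch_price,
    async_fn=afetch_price,
    name="fetch_price",
    description="Return the latest price for a ticker symbol.",
)


async def main():
    outputs = await asyncio.gather(*(tool.acall(symbol="ACME") for _ in range(50)))
    print(outputs[0])  # ToolOutput wrapping the raw return value


asyncio.run(main())
```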
Throughput Scaling Curves
Raw Benchmark Data
| Concurrency | Baseline (req/s) | SynapseKit (req/s) | LangChain (req/s) | LlamaIndex (req/s) |
|---|---|---|---|---|
| n=1 | 19.6 | 19.8 | 19.4 | 19.7 |
| n=5 | 97.8 | 98.8 | 96.1 | 97.3 |
| n=10 | 194.9 | 195.7 | 184.2 | 193.3 |
| n=20 | 391.3 | 388.9 | 360.5 | 381.9 |
| n=50 | 986.6 | 967.5 | 808.3 | 927.2 |
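The notebook's harness isn't included on this dashboard; a minimal sketch of how per-concurrency throughput can be measured against a simulated ~50 ms call looks like this (the 50 ms latency and 20-batch count are assumptions, not the notebook's settings):

```python
# Minimal throughput harness sketch: run `concurrency` simulated calls at a
# time, repeat for several batches, and report completed requests per second.
import asyncio
import time


async def simulated_llm_call() -> str:
    await asyncio.sleep(0.05)  # stand-in for a ~50 ms network-bound call
    return "ok"


async def measure(call, concurrency: int, batches: int = 20) -> float:
    start = time.perf_counter()
    for _ in range(batches):
        await asyncio.gather(*(call() for _ in range(concurrency)))
    elapsed = time.perf_counter() - start
    return (concurrency * batches) / elapsed  # requests per second


async def main():
    for n in (1, 5, 10, 20, 50):
        rps = await measure(simulated_llm_call, n)
        print(f"n={n:<3} {rps:7.1f} req/s")


asyncio.run(main())
```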
Scaling Factor (n=50 / n=1)
| Framework | req/s (n=1) | req/s (n=50) | Scaling | vs Perfect (50x) |
|---|---|---|---|---|
| Baseline | 19.6 | 986.6 | 50.4x | 100.9% |
| SynapseKit | 19.8 | 967.5 | 48.9x | 97.7% |
| LlamaIndex | 19.7 | 927.2 | 47.1x | 94.2% |
| LangChain | 19.4 | 808.3 | 41.7x | 83.5% |
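The derived columns follow directly from the raw table; for example, the SynapseKit row:

```python
# scaling = rps(n=50) / rps(n=1); "vs Perfect" = scaling / 50.
# Values copied from the SynapseKit row of the table above.
rps_n1, rps_n50 = 19.8, 967.5
scaling = rps_n50 / rps_n1      # -> ~48.9x
vs_perfect = scaling / 50       # -> ~0.977, i.e. ~97.7%
print(f"{scaling:.1f}x, {vs_perfect:.1%}")
```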
Benchmark Verdict — Notebook #22
SynapseKit's thin async wrapper achieves 96.8% of theoretical throughput at 50 concurrent requests. LangChain's LCEL middleware leaves it 19.2% short of theoretical throughput, a 7x difference in per-batch overhead (11.9ms vs 1.7ms). LlamaIndex splits the difference at 92.7%. Winner: SynapseKit.