Notebook #19 — SynapseKit 1.4 vs LangChain 1.2 vs LlamaIndex Core 0.14 · local tracing only, no external service
| Feature | SynapseKit | LangChain | LlamaIndex |
|---|---|---|---|
| Token usage | Yes | Partial | Yes |
| Step latency | Yes | No | Yes |
| Intermediate agent steps | Yes | Yes | Yes |
| Tool call args + returns | Yes | Yes | Yes |
| Full raw LLM prompt | Yes | Yes | Yes |
| Retrieved documents | Yes | Yes | Yes |
| Zero-config enable (1–2 lines) | Yes | Yes | No |
| Score (out of 7) | 7 | 5 | 6 |
LangChain's `set_verbose(True)` and `set_debug(True)` surface agent reasoning, tool calls, and raw prompts, but they record nothing about how long each step took. Step latency — which LLM call was slow, which tool is the bottleneck, total agent run time — requires LangSmith, an external service that needs an API key and internet access. SynapseKit and LlamaIndex both surface timing data locally with no external dependency. For production debugging without an external platform, this is LangChain's most significant gap.
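If you are stuck without local step latency, a minimal in-process timer closes most of the gap. The sketch below is framework-agnostic and purely illustrative — `StepTimer`, `step`, and `slowest` are made-up names, not part of SynapseKit, LangChain, or LlamaIndex:

```python
import time
from contextlib import contextmanager

class StepTimer:
    """Minimal local step-latency tracer. All names here are
    illustrative, not any framework's actual API."""

    def __init__(self):
        self.records = []  # list of (step_name, seconds)

    @contextmanager
    def step(self, name):
        # Time the wrapped block, even if it raises.
        start = time.perf_counter()
        try:
            yield
        finally:
            self.records.append((name, time.perf_counter() - start))

    def slowest(self):
        # The single worst step — the first thing to look at.
        return max(self.records, key=lambda r: r[1])

timer = StepTimer()
with timer.step("retrieval"):
    time.sleep(0.02)   # stand-in for a vector-store lookup
with timer.step("llm_call"):
    time.sleep(0.05)   # stand-in for the model round-trip

print(timer.slowest()[0])  # → llm_call
```

Wrapping each tool call and LLM round-trip in `timer.step(...)` gives you per-step timings and a total, with no external service involved.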
| Criterion | SynapseKit | LangChain | LlamaIndex |
|---|---|---|---|
| Fewest lines to enable | 🏆 2 | 3 | 4 |
| Local feature depth | 🏆 7/7 | 5/7 | 6/7 |
| Step latency (local) | ✅ Yes | ❌ No | ✅ Yes |
| Post-run query API | Good | None (text only) | 🏆 Best |
| Unit testable | 🏆 Yes | No | Partial |
| Avoids global state | 🏆 Yes | No | No |
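The "post-run query API" row is the difference between grepping debug text and asking structured questions of a finished run. The sketch below shows the concept with a toy in-process trace store — `TraceEvent`, `TraceStore`, and every field name are hypothetical, not the API of any of the three libraries:

```python
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    # Hypothetical event schema -- not any framework's actual types.
    kind: str           # e.g. "llm", "tool", "retrieval"
    name: str
    latency_s: float
    tokens: int = 0

@dataclass
class TraceStore:
    """Toy post-run query API: events accumulate during the run
    and are filtered/aggregated afterwards, entirely in-process."""
    events: list = field(default_factory=list)

    def record(self, event: TraceEvent) -> None:
        self.events.append(event)

    def by_kind(self, kind: str) -> list:
        return [e for e in self.events if e.kind == kind]

    def total_tokens(self) -> int:
        return sum(e.tokens for e in self.events)

    def slowest(self) -> TraceEvent:
        return max(self.events, key=lambda e: e.latency_s)

store = TraceStore()
store.record(TraceEvent("retrieval", "vector_lookup", 0.12))
store.record(TraceEvent("llm", "planner", 0.80, tokens=512))
store.record(TraceEvent("tool", "web_search", 1.40))
store.record(TraceEvent("llm", "answer", 0.95, tokens=1024))

print(store.total_tokens())        # → 1536
print(store.slowest().name)        # → web_search
print(len(store.by_kind("llm")))   # → 2
```

Because the store is a plain object rather than global state, it can be instantiated per test and asserted against directly — which is exactly why the "unit testable" and "avoids global state" rows matter together.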