Conversation Memory — Retention and Feature Matrix
How many messages each framework retains after five turns at different window sizes, plus a full memory-API feature comparison. The retention numbers are similar; the persistence story is not.
Message Retention After 5 Turns (by window size)
Retention numbers are equivalent at the same window size. All three keep the same messages when configured identically. The difference emerges when conversations are long and token-dense — LlamaIndex's token-budget window starts trimming earlier than a turn-count window of the same nominal size.
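The two trimming strategies can be sketched with toy buffers. The class names and the rough 4-characters-per-token heuristic below are illustrative assumptions, not any framework's actual API:

```python
# Illustrative sketch of the two trimming strategies discussed above.
# Class names and the crude 4-chars-per-token estimate are assumptions,
# not any framework's real API.

class TurnWindow:
    """Keep the last n_turns exchanges (turn-count style)."""
    def __init__(self, n_turns):
        self.n_turns = n_turns
        self.turns = []  # list of (user_msg, assistant_msg)

    def add(self, user_msg, assistant_msg):
        self.turns.append((user_msg, assistant_msg))
        self.turns = self.turns[-self.n_turns:]  # trim purely by count


class TokenWindow:
    """Keep as many recent exchanges as fit in token_budget (token-budget style)."""
    def __init__(self, token_budget):
        self.token_budget = token_budget
        self.turns = []

    @staticmethod
    def _tokens(text):
        return max(1, len(text) // 4)  # rough heuristic: ~4 chars per token

    def add(self, user_msg, assistant_msg):
        self.turns.append((user_msg, assistant_msg))
        # Drop oldest exchanges until the buffer fits the budget.
        while (sum(self._tokens(u) + self._tokens(a) for u, a in self.turns)
               > self.token_budget and len(self.turns) > 1):
            self.turns.pop(0)


turn_buf = TurnWindow(n_turns=3)
tok_buf = TokenWindow(token_budget=60)
for i in range(5):
    msg = f"question {i} " + "x" * 100  # token-dense turns
    turn_buf.add(msg, f"answer {i}")
    tok_buf.add(msg, f"answer {i}")

print(len(turn_buf.turns))  # 3: trimmed by turn count alone
print(len(tok_buf.turns))   # 2: the token budget trims earlier on dense turns
```

With identical input, the turn-count buffer always keeps exactly three exchanges, while the token-budget buffer drops to two because each dense turn consumes roughly 29 estimated tokens.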
[Chart: Lines of Code — Benchmark #13]
Memory API Feature Matrix
| Feature | SynapseKit | LangChain | LlamaIndex |
| --- | --- | --- | --- |
| Window strategy | Turn count | Turn count | Token budget |
| Built into RAG | Yes (1 param) | Via wrapper | Yes (flag) |
| In-memory | Yes | Yes | Yes |
| Redis backend | No | Yes | No |
| Postgres backend | No | Yes | No |
| JSON / file persist | No | Yes | SimpleChatStore |
| Custom backend | No | Yes | Partial |
| clear() | Yes | Yes | Yes |
| Format to string | format_context() | get_buffer_string() | Yes |
| Multi-user sessions | No | Yes | Manual |
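The "Format to string" row refers to rendering the buffered messages as a role-prefixed transcript for prompt injection. A generic, framework-free sketch of the idea; the role labels and layout are assumptions, as each formatter (`format_context()`, `get_buffer_string()`) has its own conventions:

```python
# Generic illustration of rendering a message buffer as a prompt string.
# Role labels and layout are assumptions; each framework's formatter
# uses its own conventions.

def format_history(messages):
    """Render (role, content) pairs as a newline-joined transcript."""
    return "\n".join(f"{role}: {content}" for role, content in messages)

history = [
    ("Human", "What is the refund window?"),
    ("AI", "30 days from delivery."),
]
print(format_history(history))
# Human: What is the refund window?
# AI: 30 days from delivery.
```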
**SynapseKit (1 param):** `memory_window=N` on `RAG()`. Simplest API; in-memory only.
**LangChain (Redis/DB):** Richest persistence and multi-user sessions; 17 lines to set up.
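Multi-user support ultimately comes down to keying a message store by session ID. A minimal in-memory sketch of that pattern; the names are illustrative, not LangChain's API, whose session plumbing is richer (Redis/Postgres backends, typed message schemas):

```python
from collections import defaultdict

# Minimal sketch of session-keyed memory, the pattern behind multi-user
# chat history. Names are illustrative, not LangChain's API.

class SessionStore:
    def __init__(self):
        self._sessions = defaultdict(list)  # session_id -> [(role, content)]

    def append(self, session_id, role, content):
        self._sessions[session_id].append((role, content))

    def get(self, session_id):
        return list(self._sessions[session_id])

store = SessionStore()
store.append("alice", "Human", "hi")
store.append("bob", "Human", "hello")
store.append("alice", "AI", "hey Alice")

print(len(store.get("alice")))  # 2: sessions are isolated
print(len(store.get("bob")))    # 1
```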
**LlamaIndex (token limit):** Token-budget trimming gives predictable prompt size; JSON persistence via SimpleChatStore.
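JSON persistence (SimpleChatStore in LlamaIndex, file-backed histories in LangChain) reduces to serializing the buffer to disk and reloading it on the next run. A stdlib-only sketch of the pattern; function names here are illustrative, not either library's API:

```python
import json
import os
import tempfile

# Stdlib-only sketch of file-based chat persistence, the pattern behind
# JSON-backed chat stores. Function names are illustrative, not any
# library's API.

def save_history(path, messages):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(messages, f)

def load_history(path):
    if not os.path.exists(path):
        return []  # fresh session: start with an empty buffer
    with open(path, encoding="utf-8") as f:
        return [tuple(m) for m in json.load(f)]

path = os.path.join(tempfile.mkdtemp(), "history.json")
save_history(path, [("Human", "hi"), ("AI", "hello")])
restored = load_history(path)
print(restored[0])  # ('Human', 'hi'): the buffer survives a restart
```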
www.engineersofai.com · AI Letters #22 · LLM Showdown #13