Conversation Memory — Retention and Feature Matrix

How many messages each framework retains after 5 turns at different window sizes, plus the full memory API feature comparison. The retention numbers are similar — the persistence story is not.

Message Retention After 5 Turns (by window size)
[Chart: messages retained at each window size, one series each for SynapseKit, LangChain, and LlamaIndex]
Retention numbers are equivalent at the same window size. All three keep the same messages when configured identically. The difference emerges when conversations are long and token-dense — LlamaIndex's token-budget window starts trimming earlier than a turn-count window of the same nominal size.
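The trimming difference can be sketched in plain Python. This is an illustrative comparison of the two strategies, not any framework's actual API; token counts are crudely approximated as whitespace-separated words.

```python
def turn_count_window(messages, max_turns):
    """Keep the last `max_turns` turns (user + assistant = 2 messages per turn)."""
    return messages[-2 * max_turns:]

def token_budget_window(messages, max_tokens):
    """Keep the newest messages whose summed token count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())  # crude stand-in for a real tokenizer
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# Five short turns vs five token-dense turns (10 messages each).
short = [f"msg {i}" for i in range(10)]               # 2 tokens per message
dense = [f"msg {i} " + "x " * 48 for i in range(10)]  # 50 tokens per message

print(len(turn_count_window(short, 3)))     # 6 — turn count ignores length
print(len(turn_count_window(dense, 3)))     # 6 — same, even though turns are dense
print(len(token_budget_window(short, 100))) # 10 — all short messages fit
print(len(token_budget_window(dense, 100))) # 2 — dense turns exhaust the budget fast
```

With short messages the two windows behave identically; once turns get token-dense, the budget-based window starts dropping history much sooner than its nominal size suggests.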
Memory API Feature Matrix
| Feature | SynapseKit | LangChain | LlamaIndex |
|---|---|---|---|
| Window strategy | Turn count | Turn count | Token budget |
| Built into RAG | Yes (1 param) | Via wrapper | Yes (flag) |
| In-memory | Yes | Yes | Yes |
| Redis backend | No | Yes | No |
| Postgres backend | No | Yes | No |
| JSON / file persist | No | Yes | SimpleChatStore |
| Custom backend | No | Yes | Partial |
| `clear()` | Yes | Yes | Yes |
| Format to string | `format_context()` | `get_buffer_string()` | Yes |
| Multi-user sessions | No | Yes | Manual |
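What the "JSON / file persist" row buys you is survival across process restarts. A minimal stdlib sketch of the idea (a hypothetical `JsonChatStore` class; this is not LlamaIndex's SimpleChatStore API):

```python
import json
import tempfile
from pathlib import Path

class JsonChatStore:
    """Toy file-backed chat store: history round-trips through a JSON file."""

    def __init__(self, path):
        self.path = Path(path)
        self.messages = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        self.path.write_text(json.dumps(self.messages))

    def clear(self):
        self.messages = []
        self.path.write_text("[]")

path = Path(tempfile.mkdtemp()) / "chat.json"
store = JsonChatStore(path)
store.add("user", "hello")
store.add("assistant", "hi there")

# A fresh instance (e.g. after a restart) reloads the persisted history.
reloaded = JsonChatStore(path)
print(len(reloaded.messages))  # 2
```

In-memory-only frameworks lose this history the moment the process exits, which is the gap the persistence rows in the matrix are measuring.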
- **SynapseKit** (1 param): `memory_window=N` on `RAG()`. Simplest API. In-memory only.
- **LangChain** (Redis/DB): Richest persistence and multi-user sessions, but 17 lines to set up.
- **LlamaIndex** (token limit): Token-budget trimming for a predictable prompt size. JSON persistence.
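The "Multi-user sessions" row comes down to keying memory by session so concurrent users never see each other's history. A stdlib sketch of that pattern (the `SessionMemory` class and its methods are illustrative, not any framework's real API):

```python
from collections import defaultdict

class SessionMemory:
    """Per-session turn-count windows keyed by session_id."""

    def __init__(self, window_turns):
        self.window = 2 * window_turns  # user + assistant message per turn
        self.sessions = defaultdict(list)

    def add(self, session_id, role, content):
        history = self.sessions[session_id]
        history.append((role, content))
        del history[:-self.window]  # trim in place to the window

    def get(self, session_id):
        return list(self.sessions[session_id])

mem = SessionMemory(window_turns=2)
for i in range(5):
    mem.add("alice", "user", f"q{i}")
mem.add("bob", "user", "hey")

print(len(mem.get("alice")))  # 4 — trimmed to the 2-turn window
print(len(mem.get("bob")))    # 1 — bob's session is untouched by alice's
```

Frameworks that ship this keying built in save you from hand-rolling the session map (and from the cross-user leaks that a single shared buffer invites).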
www.engineersofai.com · AI Letters #22 · LLM Showdown #13