
AI Letters #35 - Why We Built SynapseKit: The Framework We Deserve

· 12 min read
EngineersOfAI
AI Engineering Education

It was 3 AM and production was on fire. An LLM pipeline had cold-started on Lambda taking 30 seconds just to import dependencies, while the $99/month observability tool told us nothing useful. We'd chosen a "safe" framework with 100K stars and enterprise support—but we were fighting it as much as building with it. That moment led us to rebuild from first principles. Meet SynapseKit: 2 dependencies, async-native, full cost transparency, and Apache 2.0 forever.

The Problem We Lived

It was 3 AM. Production was on fire. An LLM pipeline had cold-started on Lambda, and the container was taking 30 seconds just to import dependencies. Meanwhile, the observability tool we paid $99/month for was telling us... nothing useful.

We'd chosen a popular framework because it was the "safe" choice. It had 100K stars, enterprise support, and a massive ecosystem. But in production, it felt like we were fighting the framework as much as building with it.

The async APIs were bolted on top of synchronous code. The dependency tree was a forest (50+ transitive dependencies). Observability required yet another SaaS subscription. And debugging? Forget it: too much "magic" between you and the LLM call.

We're not unique. Thousands of teams have hit the same wall. And we thought: What if we rebuilt this from first principles?

That question became SynapseKit.

What SynapseKit Actually Is

SynapseKit is not trying to be a LangChain killer. It's trying to be different.

The difference starts here—not features, but principles:

Problem           LangChain-Style    SynapseKit
Dependencies      50+ (200 MB)       2 (numpy, rank-bm25)
Async Design      Bolted on          Native from day 1
Cost Visibility   $99+/month SaaS    Built-in, free
Deployment Tools  Deprecated         synapsekit serve
Observability     Black box          Instrumented, transparent
Token Tracking    Hidden             Per-call tracking

We're building for production teams who are tired of choosing between:

  • Power (but complexity)
  • Simplicity (but missing features)
  • Open source (but no support)
  • Commercial (but expensive and lock-in)

SynapseKit says: You don't have to choose.

What This Means for You

1. You Own Your Code

Every LLM call, every prompt, every decision—it's yours. There's no proprietary "chain" abstraction hiding what's happening.

import asyncio

from synapsekit import RAG

async def main():
    rag = RAG(model="gpt-4o")
    rag.add("Your documents")

    # This actually does what you think it does.
    # No hidden orchestration. No vendor-specific magic.
    result = await rag.query("What is this about?")

asyncio.run(main())

Compare to frameworks where rag.query() invokes 12 internal transformations you didn't ask for.

2. You Keep 90% of Your Cold Start

Lambda cold starts matter. A ~5 MB framework matters.

We measured: import synapsekit = 200 ms. import langchain = 2.8 seconds.

That's not hypothetical. That's real deployments. That's the difference between your API responding in 100 ms vs 3 seconds during scale events.
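If you want to check these numbers on your own hardware, a first-import timer takes a few lines of stdlib Python. The module name below is a stdlib stand-in; swap in whatever framework you deploy:

```python
import importlib
import time

def measure_import(module_name: str) -> float:
    """Time the first import of a module in this process (cold import)."""
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

# Stdlib stand-in; replace with "synapsekit" or "langchain" to compare.
elapsed_ms = measure_import("json") * 1000
print(f"import json: {elapsed_ms:.1f} ms")
```

For true cold-start numbers, run this in a fresh interpreter per measurement (`python -X importtime` gives a per-module breakdown).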

3. You See Your Costs

import asyncio

from synapsekit import RAG, CostTracker, BudgetGuard

async def main():
    rag = RAG(model="gpt-4o")
    tracker = CostTracker()
    guard = BudgetGuard(daily=10.0, per_request=0.50)

    with tracker.scope("my_pipeline"):
        result = await rag.query("Question?")

    print(tracker.summary())
    # Output:
    # total_cost: $0.0234
    # tokens_in: 1,200
    # tokens_out: 450
    # model: gpt-4o
    # cost_per_1k: $2.50 / $15.00

asyncio.run(main())

Every LLM framework should have this. No SaaS fees. No surprise bills. Just facts.
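The snippet above only reads costs; it doesn't show how BudgetGuard enforces its limits. The idea can be sketched in a few lines. The class shape and method names below are our illustration, not SynapseKit's actual API:

```python
class SimpleBudgetGuard:
    """Illustrative budget enforcement: reject any call that would blow
    a per-request or daily cost cap. Not SynapseKit's real BudgetGuard."""

    def __init__(self, daily: float, per_request: float):
        self.daily = daily
        self.per_request = per_request
        self.spent_today = 0.0

    def approve(self, estimated_cost: float) -> None:
        if estimated_cost > self.per_request:
            raise RuntimeError(
                f"request cost ${estimated_cost:.2f} exceeds per-request cap")
        if self.spent_today + estimated_cost > self.daily:
            raise RuntimeError("daily budget exhausted")
        self.spent_today += estimated_cost

guard = SimpleBudgetGuard(daily=10.0, per_request=0.50)
guard.approve(0.02)  # within both caps
print(f"spent so far: ${guard.spent_today:.2f}")
```

The useful property is that the guard fails loudly *before* the call is made, instead of showing up on next month's invoice.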

Why We're Staying Open Source (Forever)

This matters. So let's be clear about what open source means to us.

The Temptation

VC-backed frameworks always face the moment: "When do we monetize?"

LangChain answered it by building LangSmith ($99+/mo). That's a valid business model. But it creates incentive misalignment: the best features live behind a paywall.

We're choosing differently.

The Bet

SynapseKit core = Apache 2.0 forever.

No tricky license changes. No "open core" where the good stuff is closed. No "we're keeping the best for enterprise."

The framework you use in production is the same framework available to students, hobbyists, and competitors.

Why? Because:

  1. Trust compounds. If you know the code can't suddenly become proprietary, you can bet your infrastructure on it.
  2. Bugs matter less. Open source means crowdsourced debugging. 200 eyes beat 20.
  3. Optimization flows both ways. When a user optimizes for their use case and contributes it back, everyone wins.
  4. We make money differently. (More on that below.)

What We Monetize

We monetize on top, not instead of:

  • SynapseKit Core (framework) - Apache 2.0, always free
  • EvalCI Pro (evaluation SaaS) - Team dashboards, Slack alerts, private repos
  • synapsekit.cloud (managed hosting) - Deploy with one command
  • Compliance reports - EU AI Act and GDPR audits for enterprises

The core framework is the funnel. Everything else is optional.

This is the bet: Build the most trustworthy LLM framework. Let it be free. Earn money by solving operational problems the framework surfaces.

What the Community Taught Us

We shipped SynapseKit in March 2026. By May, we had 12 contributors and 9,200 downloads in 30 days.

Here's what the community actually cares about (not what we thought they would):

Simplicity Beats Ecosystem

We expected people to love our 33 LLM providers. They do. But what they really love: changing one line (model="anthropic/claude" to model="groq/mixtral") and the entire pipeline switches.

Lesson: Unified APIs beat breadth.
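The one-line switch works because the model string carries the provider. The split-on-slash convention is visible in the examples above; the dispatch fallback is our illustration:

```python
def parse_model(model: str) -> tuple[str, str]:
    """Split a "provider/model" string into (provider, model_name)."""
    provider, sep, name = model.partition("/")
    if not sep:
        # Bare model name: assume a default provider (illustrative choice).
        return "openai", model
    return provider, name

print(parse_model("groq/mixtral"))      # ('groq', 'mixtral')
print(parse_model("anthropic/claude"))  # ('anthropic', 'claude')
```

Everything downstream of this split (client construction, pricing lookup, token counting) can key off the provider, which is what lets the rest of the pipeline stay untouched.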

Cost Visibility Beats Ease

We built CostTracker assuming 5% of users would enable it. 40% did immediately.

Teams aren't afraid of complexity. They're afraid of surprise bills.

Lesson: Make the invisible visible.

Async-Native Beats Backwards Compatibility

We chose async-first, sync-wrappers. We got pushback: "But some teams only use sync!"

Within months, those teams were refactoring to async. The performance difference was too obvious to ignore.

Lesson: The future is async. Bet on it.
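The "async-first, sync-wrappers" choice is worth spelling out. The usual pattern is to write the real logic as a coroutine and expose a thin blocking wrapper. This is a generic sketch of the pattern, not SynapseKit's code:

```python
import asyncio

async def query_async(question: str) -> str:
    """The real implementation lives here (sleep stands in for an LLM call)."""
    await asyncio.sleep(0.01)
    return f"answer to: {question}"

def query(question: str) -> str:
    """Blocking wrapper for teams that aren't on async yet."""
    return asyncio.run(query_async(question))

print(query("What is this about?"))
```

The wrapper costs an event-loop spin-up per call and can't be used inside an already-running loop, which is one more reason teams eventually refactor to async rather than stay on the wrappers.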

Testing Beats Documentation

We shipped with thorough tests (2,161 by v1.5.6) but sparse docs. People still contributed. They read the tests as documentation.

Lesson: Tests are the spec.

Transparency Beats Polish

When we had a bug in async evaluation (v1.5.1), we posted a detailed postmortem explaining why we missed it. The community response: "At least you're honest."

Lesson: Admit mistakes. Explain root causes. Ship fixes.

How We're Benchmarking Everything (No Illusions)

We could say "SynapseKit is faster" and assume no one would check. But we're betting on people who will check.

So we're running public benchmarks:

Cold Start Benchmarks

Framework Import Time Container Size
SynapseKit 200 ms ~5 MB
Framework B 2,800 ms ~200 MB
Framework C 1,200 ms ~150 MB

Published monthly. Real data. Anyone can reproduce it.

Token Cost Benchmarks

Task: "Summarize 10 documents, return JSON"

Model Via SynapseKit Via Others Difference
GPT-4o $0.0234 $0.0234 (same!)
Claude $0.0198 $0.0198 (same!)
Groq $0.00001 $0.00001 (same!)

No hidden markup. No feature taxes. We're a passthrough.
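The passthrough claim is easy to verify, because per-call cost is plain arithmetic over token counts and published rates. The rates below are illustrative placeholders, not quotes from any provider:

```python
def call_cost(tokens_in: int, tokens_out: int,
              rate_in_per_1k: float, rate_out_per_1k: float) -> float:
    """Cost of one call: tokens / 1000 * per-1k rate, input plus output."""
    return tokens_in / 1000 * rate_in_per_1k + tokens_out / 1000 * rate_out_per_1k

# Illustrative rates only; check your provider's current pricing page.
cost = call_cost(tokens_in=1200, tokens_out=450,
                 rate_in_per_1k=0.0025, rate_out_per_1k=0.010)
print(f"${cost:.4f}")  # $0.0075
```

If a framework's reported cost doesn't match this arithmetic against the provider's own rate card, something is being marked up.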

Latency Benchmarks

Operation                  P50     P95     P99
RAG query (retrieval)      45 ms   120 ms  300 ms
Agent tool call            80 ms   250 ms  800 ms
Graph workflow (10 nodes)  200 ms  600 ms  1.5 s

Published, reproducible, hardware-specified.

Feature Coverage Benchmarks

Feature SynapseKit Others
LLM Providers 33 38+
Document Loaders 53 200+
Vector Stores 11 15+
Built-in Tools 47+ 50+
Async Support ✅ Native ⚠ Bolted-on
Token Tracking ✅ Free ❌ Paid
Deployment ✅ Built-in ❌ Deprecated

No hidden asterisks. No "features you can't use."

Why We Benchmark

We're not trying to win on every metric. We're trying to be honest about the tradeoffs.

Yes, LangChain has 200+ loaders. We have 53. But those 53 are maintained and tested. A loader that breaks silently is worse than no loader.

Yes, we're missing some providers. But when you use a provider on SynapseKit, you know it works because we test it against actual APIs.

The bet: Teams would rather have 90% great than 100% mediocre.

Why We'll Be the Best Tool

Not because we have the most features. Not because we have the most stars.

Because we're built on principles that compound:

Dependency Minimalism = Embeddability

Every dependency you add is a future security hole, a version conflict, a cold start penalty.

We said: What if we just didn't? What if we built for embedding first, plugins second?

This means SynapseKit works in:

  • Lambda (fast cold starts)
  • Kubernetes (light containers)
  • Mobile (small binaries)
  • Edge (no Python stdlib bloat)

Others can't do this without a rewrite.

Async-Native = Production-Ready

Async isn't about being faster in theory. It's about handling real-world concurrency: 100 concurrent requests, 50 LLM API calls in flight, 10K tokens streaming.

Sync-first frameworks hit a wall at scale. Async-first frameworks scale to infinity.

We bet on infinity.
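What "100 concurrent requests" buys you is concrete: with coroutines, overlapping waits cost roughly one wait, not a hundred. A stand-in with asyncio.gather shows the shape (sleep stands in for LLM API latency):

```python
import asyncio
import time

async def fake_llm_call(i: int) -> str:
    await asyncio.sleep(0.05)  # stand-in for network + model latency
    return f"response-{i}"

async def run_all() -> list:
    # All 100 waits overlap instead of queuing behind each other.
    return await asyncio.gather(*(fake_llm_call(i) for i in range(100)))

start = time.perf_counter()
results = asyncio.run(run_all())
elapsed = time.perf_counter() - start
print(f"{len(results)} calls in {elapsed:.2f}s")  # ~0.05s, not ~5s
```

A sync-first framework doing the same work sequentially pays the full 100 × 50 ms; that's the wall the post is describing.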

Transparency = Trust

No proprietary chains. No hidden costs. No surprise bills. Every LLM call is logged, tracked, and visible.

Trust is the hardest thing to build. And the easiest to lose. We're not willing to risk it for short-term gains.

Community = Compounding Returns

12 contributors in month 1. We're not paying them. They're contributing because:

  • They believe in the mission
  • The codebase is legible
  • Contributions are credited
  • The community is kind

This compounds. Month 2: 20 contributors. Month 3: 40 contributors. By year 2: a community-driven framework that no VC team could build.

Open Source = Moat

Counterintuitive: staying open source is our biggest competitive advantage.

Why? Because:

  • Teams bet their infra on open source. Not on a company.
  • Open source survives company acquisition/failure. Closed source doesn't.
  • Switching away from open source still costs migration time and rebuilt trust, but lock-in stays low: you always own the code.

This is a different kind of moat. It's built on trust, not contracts.

The 8 Features We're Shipping (v1.8.0 - v2.0.0)

We just mapped the roadmap. Here's what's coming:

v1.8.0: Production Grade (June 15)

  • 🔍 Observability Dashboard: OpenTelemetry and Prometheus (no SaaS needed)
  • Structured Output: Validation and auto-retry (no more JSON failures)
  • 💾 Smart Context: Hierarchical allocation and prompt caching (80% cost reduction)
  • 📊 Retrieval Metrics: Measure if RAG actually helps
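"Validation and auto-retry" for structured output usually means: call the model, try to parse, and re-ask on failure. A minimal version of that loop, as our sketch of the pattern rather than v1.8.0's implementation:

```python
import json
from typing import Any, Callable

def query_json(call: Callable[[], str], retries: int = 2) -> Any:
    """Call a text-producing function; retry until the output parses as JSON."""
    last_error = None
    for _ in range(retries + 1):
        text = call()
        try:
            return json.loads(text)
        except json.JSONDecodeError as exc:
            last_error = exc  # in practice you'd feed the error back into the prompt
    raise ValueError(f"no valid JSON after {retries + 1} attempts") from last_error

# Fake model that fails once, then returns valid JSON.
attempts = iter(["not json", '{"summary": "ok"}'])
result = query_json(lambda: next(attempts))
print(result)
```

Real implementations also validate against a schema (e.g. a Pydantic model) rather than just parsing, but the retry loop is the core of the feature.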

v1.9.0: Advanced Retrieval (July 20)

  • 🌐 Knowledge Graphs: Multi-hop reasoning and entity relationships
  • 🧠 Reasoning Routing: Smart routing to o1/o3/Claude thinking models

v2.0.0: Distributed (September 1)

  • 🤖 Agent Federation: Multi-agent coordination at scale
  • 📈 Feedback Loops: Production to training data to auto-improvement

We're shipping 8 major features in 4 months: a framework built with the community.

What Success Looks Like

Not valuation. Not GitHub stars (though those help).

Success is:

  • A team deploys an LLM app on SynapseKit and it just works.
  • A student learns async Python by reading SynapseKit's codebase.
  • An open-source contributor ships a feature that 10,000 people use.
  • A startup scales to 1M requests/day without hitting a wall.
  • An enterprise can audit the code and say "Yeah, we trust this."

Join Us

We're hiring open-source contributors. Not employees. Contributors.

You pick an issue. You ship it. You're credited as co-author. End of transaction.

Start here: https://github.com/SynapseKit/SynapseKit/issues/695-702

8 issues. Your choice. 1-3 weeks. Shipped to production.

The Final Truth

We're not building SynapseKit because we think we're smarter than the frameworks that came before. We're building it because we learned from them.

We learned that:

  • Teams care about cold starts more than ecosystem breadth
  • Cost transparency beats feature parity
  • Async-native isn't optional in 2026
  • Open source isn't a business model; it's a moat
  • Benchmarks matter more than claims

We're building for the 10,000 teams shipping LLM apps in production right now. Not the 100 teams with billion-dollar budgets. Not the students building chatbots. Not the conferences talking about theory.

For people who actually care that their imports don't take 3 seconds. Who track every dollar. Who want to read the code they ship. Who believe open source beats closed ecosystems.

If that's you, we'll see you in the PRs.

Let's build the framework we deserve.


Written May 14, 2026. SynapseKit v1.7.0 is live. v1.8.0 ships June 15.

This post will be outdated in 2 months. That's the point.

Read more: www.engineersofai.com

If this was useful, forward it to one engineer who should be reading it.

Want to Think Like an AI Architect?

Join engineers receiving weekly breakdowns of AI systems, production failures, and architectural decisions.