Engineering · 2025-01-08 · 8 min read

Building AI Memory Systems: Lessons from Wyrm

How we built a persistent AI memory system that compounds knowledge across 15+ projects — and why stateless AI conversations are holding your team back.

The Problem with Stateless AI

Every AI coding assistant starts each conversation from zero. You explain your project structure, your conventions, your recent decisions — and it forgets everything the moment the session ends.

Across a workspace with 15+ projects spanning TypeScript, Rust, Python, and PHP, this context loss becomes catastrophic. The same bug gets re-investigated. The same architecture decisions get re-debated. Knowledge doesn't compound — it resets.

We built Wyrm to solve this. It's a persistent memory system that gives AI clients long-term memory scoped to projects, sessions, and an entire workspace.

Architecture Decisions

We chose SQLite with WAL mode as the storage engine. Why? Because Wyrm runs 100% locally — no cloud dependencies, no API keys, no data leaving your machine. SQLite gives us ACID transactions, full-text search (via FTS5), and zero-config deployment.
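The setup described above can be sketched with Python's built-in sqlite3 module. File and table names here are illustrative, not Wyrm's actual schema:

```python
import sqlite3

# Open (or create) a local database file -- no server, no cloud.
conn = sqlite3.connect("memory.db")

# Write-ahead logging lets readers proceed while a write is in flight.
conn.execute("PRAGMA journal_mode=WAL")

# An FTS5 virtual table makes stored notes full-text searchable.
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS notes
    USING fts5(project, body)
""")

conn.execute("INSERT INTO notes VALUES (?, ?)",
             ("project-a", "API endpoints missing security bootstrap"))
conn.commit()

# FTS5 MATCH queries search every stored note.
row = conn.execute(
    "SELECT project FROM notes WHERE notes MATCH 'security'"
).fetchone()
```

Everything lives in one file on disk, which is what makes the zero-config deployment story possible.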

Core Data Model

Projects → Architecture context, decisions, credentials

Sessions → What happened when, with correlation IDs

Quests → Task tracking with priority and dependencies

DataLake → Cross-project searchable knowledge store

Skills → Reusable patterns with usage tracking
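One way the model above could map onto SQLite tables. This is a sketch under our own naming assumptions, not Wyrm's real schema (credentials and the DataLake are omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Each memory type gets its own schema so entries stay queryable,
# rather than living in one undifferentiated text blob.
conn.executescript("""
    CREATE TABLE projects (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        architecture_notes TEXT
    );
    CREATE TABLE sessions (
        id INTEGER PRIMARY KEY,
        project_id INTEGER REFERENCES projects(id),
        correlation_id TEXT,          -- ties related events together
        summary TEXT,
        started_at TEXT
    );
    CREATE TABLE quests (
        id INTEGER PRIMARY KEY,
        project_id INTEGER REFERENCES projects(id),
        title TEXT NOT NULL,
        priority INTEGER,
        depends_on INTEGER REFERENCES quests(id)
    );
    CREATE TABLE skills (
        id INTEGER PRIMARY KEY,
        pattern TEXT NOT NULL,
        use_count INTEGER DEFAULT 0   -- usage tracking
    );
""")

conn.execute("INSERT INTO projects (name) VALUES ('project-a')")
count = conn.execute("SELECT COUNT(*) FROM projects").fetchone()[0]
```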

The key insight: memory isn't just storage. It's an intelligence layer. When you fix a bug in one project, Wyrm detects if the same pattern exists in your other projects.

Cross-Project Intelligence

This is Wyrm's most powerful feature. When you store a bug pattern (say, “API endpoints missing security bootstrap”), Wyrm makes it searchable across every project in the workspace.

Later, when working on a different project, Wyrm can surface relevant patterns: “You fixed this same class of bug in Project A three weeks ago. Here's what worked.”

Over time, this creates compounding returns. Each bug fix, each architecture decision, each optimization in one project automatically benefits all others.
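A cross-project lookup like the one above might look as follows, assuming a single FTS5 table shared by the whole workspace (the table name and `surface_patterns` helper are ours, for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE VIRTUAL TABLE datalake
    USING fts5(project, kind, body)
""")

# A bug pattern stored weeks ago while working on Project A.
conn.execute("INSERT INTO datalake VALUES (?, ?, ?)", (
    "project-a", "bug-pattern",
    "API endpoints missing security bootstrap; fixed with auth middleware",
))

def surface_patterns(conn, query, current_project):
    """Find matching knowledge stored under *other* projects."""
    return conn.execute(
        "SELECT project, body FROM datalake "
        "WHERE datalake MATCH ? AND project != ?",
        (query, current_project),
    ).fetchall()

# Now working on a different project, hit a similar bug --
# search everything the workspace has ever recorded.
hits = surface_patterns(conn, "security bootstrap", "project-b")
```

Because FTS5 lets a MATCH clause combine with ordinary WHERE conditions, filtering out the current project is a one-liner.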

The MCP Protocol

Wyrm uses the Model Context Protocol (MCP) to integrate with AI clients. MCP is an open standard for connecting AI models to external data sources and tools.

This means Wyrm works with VS Code Copilot, Claude Desktop, Cursor, Windsurf, Zed, and Continue — all from the same memory store. Switch editors freely; your context follows.

The wyrm-setup CLI auto-detects installed AI clients and configures MCP connections for all of them in one command.
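Most MCP clients share a similar config shape: each server entry names a command the client launches and talks to over stdio. The snippet below sketches what a setup tool would write; the command, args, and path are illustrative, not the real output of wyrm-setup:

```python
import json

# Illustrative MCP client configuration. Exact file locations and key
# names vary per client (e.g. Claude Desktop's claude_desktop_config.json).
config = {
    "mcpServers": {
        "wyrm": {
            "command": "wyrm",
            "args": ["serve", "--db", "~/.wyrm/memory.db"],
        }
    }
}

rendered = json.dumps(config, indent=2)
```

Because every client points at the same server command and database file, they all share one memory store.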

Results

Context Switching: near-zero ramp-up time between projects

Bug Recurrence: patterns stored and surfaced automatically

Token Usage: ~60% reduction through caching and deduplication

Multi-Client: 7+ AI clients share one memory store
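The token-usage reduction comes from caching and deduplication. A toy sketch of content-hash dedup, one plausible mechanism (we're assuming the normalization strategy; Wyrm's actual implementation may differ):

```python
import hashlib

class MemoryStore:
    """Toy store that skips re-storing near-identical entries."""

    def __init__(self):
        self._seen = {}  # content hash -> entry

    def store(self, text):
        # Normalize before hashing so trivial variants collapse together.
        key = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if key in self._seen:
            return False  # duplicate: nothing new to feed the model
        self._seen[key] = text
        return True

store = MemoryStore()
first = store.store("Use WAL mode for SQLite")
second = store.store("  use wal mode for sqlite  ")  # variant, skipped
```

Every skipped duplicate is context the model never has to re-read, which is where the token savings accumulate.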

Lessons Learned

Local-first wins

Cloud-dependent memory systems add latency and privacy concerns. SQLite is fast enough and infinitely simpler to deploy.

Markdown sync matters

Wyrm syncs to .wyrm/ folders as readable markdown. This means memory is version-controllable, grep-able, and human-readable — not locked in a binary database.
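A minimal sketch of that sync step. The file layout and heading structure here are our assumptions; the real .wyrm/ folder format may differ:

```python
import tempfile
from pathlib import Path

def sync_to_markdown(root, project, decisions):
    """Write a project's stored decisions to a readable .wyrm/ file."""
    out = Path(root) / ".wyrm" / f"{project}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"# {project}", "", "## Decisions"]
    lines += [f"- {d}" for d in decisions]
    out.write_text("\n".join(lines) + "\n")
    return out

root = tempfile.mkdtemp()
path = sync_to_markdown(root, "project-a", ["Use SQLite WAL mode"])
```

Because the output is plain markdown in the repo, it diffs cleanly in version control and answers to grep like any other file.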

Memory must be structured

Raw text dumps aren't useful. Projects, sessions, quests, and data points each need their own schema to be actionable.

Cross-project search is the killer feature

FTS5 full-text search across all projects, all sessions, all data points. This is what makes one person's debugging session benefit the entire team.

Try Wyrm

Wyrm is available for teams and enterprises: persistent AI memory that compounds knowledge across your entire codebase.
