ACCESSING MEMORY NODE 0x4C6E24AB...

Conversations Are Graphs

TIMESTAMP: 2025-12-05 :: 00:00:00 UTC
AUTHOR_NODE: Scott Waddell
EST_ACCESS_TIME: 3 min
═════════════════════════════════════════════════════════════════
MEMORY ACCESS STARTED
═════════════════════════════════════════════════════════════════

>Every AI conversation is stuck in the past

Not philosophically. Literally. Every chat system, from ChatGPT to enterprise support bots, treats conversations as flat lists:

flat-conversation.txt
Message 1 → Message 2 → Message 3 → Message 4 → ...

This works until it doesn't. And it stops working fast.
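The flat model is easy to sketch. A minimal TypeScript version (names here are illustrative, not any real API) makes the cost obvious: every prompt carries the whole history, so token count grows linearly with conversation length.

```typescript
// A flat conversation is just an ordered array of messages.
type Message = { role: 'user' | 'assistant'; content: string };

const history: Message[] = [];

function send(content: string): void {
  history.push({ role: 'user', content });
}

// To answer anything, the entire history goes into the prompt --
// there is no notion of "which part of this is relevant right now".
function buildPrompt(history: Message[]): string {
  return history.map(m => `${m.role}: ${m.content}`).join('\n');
}

send('Find me flights to Paris');
send('Any good restaurants near the Louvre?');
send('Back to flights - what about layovers?');

// Three topic shifts, one undifferentiated list.
console.log(buildPrompt(history));
```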


>The problem shows up everywhere

You're planning a trip to Paris. Halfway through, you ask about restaurant recommendations. Then you're back to flights. Then hotels. Then you remember you wanted to ask about the restaurants again.

The AI has no idea what you're talking about. It either dumps the entire history into context (expensive, slow, noisy) or loses the thread entirely.

Ask "what did we decide about dinner?" and you'll get a hallucinated answer or a polite "I don't have that information."

WARNING

Flat lists can't represent how conversations actually work.

>Conversations are graphs

Topics branch. Context shifts. Threads reconnect. A conversation about buying a house naturally splits into financing, locations, schools, commute times, then merges back when you're comparing options.

So I built a system that treats them that way.

>driftos-core routes messages to semantic branches

conversation-routing.ts
import { createDriftClient } from '@driftos/client';

const drift = createDriftClient('http://localhost:3000');

// Start a conversation
await drift.route('conv-1', 'I want to buy a house in London');
// → BRANCH: "buying a house in London"

await drift.route('conv-1', 'What areas have good schools?');
// → STAY: same branch (related question)

await drift.route('conv-1', 'What should I cook for dinner?');
// → BRANCH: "cooking and dinner" (new topic)

await drift.route('conv-1', 'Back to houses - what about mortgage rates?');
// → ROUTE: returns to "buying a house in London"

Three routing decisions: STAY, BRANCH, or ROUTE.

The LLM figures out which one, and the system maintains the graph.
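The three decisions map naturally onto a discriminated union. This is an illustrative sketch of the shape of a routing decision, not the actual `@driftos/client` types:

```typescript
// The three routing outcomes as a discriminated union.
// (Illustrative types -- not the published @driftos/client API.)
type RoutingDecision =
  | { action: 'STAY' }                           // continue the current branch
  | { action: 'BRANCH'; topic: string }          // open a new semantic branch
  | { action: 'ROUTE'; targetBranchId: string }; // jump back to an existing branch

function describe(d: RoutingDecision): string {
  switch (d.action) {
    case 'STAY':
      return 'append to the active branch';
    case 'BRANCH':
      return `create branch "${d.topic}"`;
    case 'ROUTE':
      return `switch to branch ${d.targetBranchId}`;
  }
}
```

The union is closed, so the `switch` is exhaustive: the compiler guarantees every routing outcome is handled, which is the property you want when an LLM is the one producing the decision.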

TECHNICAL

Routing is LLM-driven: fast and flexible, not deterministic. The commercial version adds reproducible, auditable decisions for compliance.

>Facts come along for the ride

Each branch extracts structured facts with provenance:

branch-facts.json
{
  "branchTopic": "buying a house in London",
  "facts": [
    { "key": "destination", "value": "London" },
    { "key": "intention", "value": "buy a house" },
    { "key": "priority", "value": "good schools" }
  ]
}

When you assemble context for an LLM call, you get the relevant branch messages plus facts from related branches. 20 messages instead of 1000. Focused context instead of noise.
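An in-memory sketch shows the shape of that assembly. The `Branch` and `Fact` interfaces mirror the JSON above; `assembleContext` is hypothetical, not part of the published SDK:

```typescript
// Illustrative shapes mirroring the branch-facts JSON.
interface Fact { key: string; value: string; }
interface Branch { id: string; topic: string; messages: string[]; facts: Fact[]; }

// Hypothetical assembly: messages from the active branch only,
// plus the compact facts from every branch.
function assembleContext(
  branches: Branch[],
  activeId: string,
): { messages: string[]; facts: Fact[] } {
  const active = branches.find(b => b.id === activeId);
  if (!active) throw new Error(`unknown branch: ${activeId}`);
  return {
    messages: active.messages,
    facts: branches.reduce<Fact[]>((acc, b) => acc.concat(b.facts), []),
  };
}

const branches: Branch[] = [
  {
    id: 'b1',
    topic: 'buying a house in London',
    messages: ['I want to buy a house in London', 'What areas have good schools?'],
    facts: [{ key: 'destination', value: 'London' }],
  },
  {
    id: 'b2',
    topic: 'cooking and dinner',
    messages: ['What should I cook for dinner?'],
    facts: [{ key: 'meal', value: 'dinner' }],
  },
];

const ctx = assembleContext(branches, 'b1');
```

Messages scale with the active branch; facts scale with the number of branches. That asymmetry is what keeps the prompt small.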

>The architecture looks simple. That's the point.

  1. Message comes in
  2. LLM classifies: STAY / ROUTE / BRANCH
  3. Message goes to the right branch
  4. Facts extracted on ROUTE / BRANCH
  5. Context assembled on demand
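The steps above collapse into one small pipeline function. In this sketch, `classify` and `extractFacts` stand in for the real LLM calls; the names are illustrative:

```typescript
type Decision = 'STAY' | 'ROUTE' | 'BRANCH';

// classify and extractFacts are stand-ins for the LLM calls in steps 2 and 4.
function handleMessage(
  message: string,
  classify: (m: string) => Decision,
  extractFacts: (m: string) => Record<string, string>,
): { decision: Decision; facts: Record<string, string> } {
  const decision = classify(message);                             // step 2
  // step 3 (appending to the chosen branch) is elided here.
  const facts = decision === 'STAY' ? {} : extractFacts(message); // step 4
  return { decision, facts };                                     // context comes later, on demand
}

const result = handleMessage(
  'I want to buy a house in London',
  () => 'BRANCH',
  () => ({ destination: 'London' }),
);
```

Note that fact extraction is skipped on STAY: a message that continues the current thread adds no new branch-level facts, which keeps the hot path cheap.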

"Sub-500ms latency. Fast enough for real-time chat."

>Why open source this now?

The code is ~2,700 lines. But behind it sit two provisional patents and months of architecting the full system: finite state machines, dual-axis drift detection, semantic overlays, cluster allocation, merge promotion, lens-based visualisation. The complete DriftOS.

Complex systems take time. And I kept waiting for "ready".

So I stripped it back. What's the simplest thing that solves the problem? Route messages to branches. Extract facts. Assemble context. Ship it.

INSIGHT

driftos-core isn't DriftOS. It's the core idea, out in the wild. Real users. Real feedback. The commercial version, with the graph mechanics enterprises need for compliance and audit trails, is still coming. But this lets me learn in public instead of building in a vacuum.

>What this isn't (yet)

This is an MVP. v0.1.0. It works, but it's raw.

No auth. No multi-tenancy. No hosted version. You'll need to run your own Postgres and bring your own Groq API key.

The routing sometimes over-branches. The fact extraction is basic. There's no UI.

But the core works: messages route to branches, facts get extracted, context assembles. The foundation is solid enough to build on.

>Try it. Break it. Tell me what's missing.

The best feedback comes from people actually using it. Star it if it's interesting. Open an issue if it's broken. DM me if you want to talk about conversation graphs.

And if you're building something with AI context management, maybe stop fighting flat lists.

"Conversations are graphs. Treat them that way."

TECHNICAL

Get Started:

GitHub: github.com/DriftOS/driftos-core

JS SDK: npm install @driftos/client

Docs: In the README

Patent pending on the advanced graph mechanics. The OSS version is the learning foundation.


Scott Waddell is building DriftOS: conversation infrastructure for AI. Previously product at Antler, IBM, and Queensland Government. Based in London.

═════════════════════════════════════════════════════════════════
MEMORY ACCESS COMPLETE
═════════════════════════════════════════════════════════════════
AUTHOR_NODE

Scott Waddell

Founder of DriftOS. Building conversational memory systems beyond retrieval. Former Antler product lead, ex-IBM. London. Kiwi.