The Living Stack — Open Brain, Semantic Memory, and What Sovereignty Looks Like When It Thinks
From the Rubble | Digital Sovereignty Series | Episode 8
TLDR: The sovereign stack described in Episode 7 works. It runs your publishing, your music, your audiobooks, your files — all on infrastructure you control. This episode covers what it becomes when you add memory: a self-hosted semantic knowledge base (Postgres + pgvector) exposed to Claude via a custom MCP server, running on the same VPS as everything else. Your thoughts become searchable by meaning, not keyword. The AI that accesses your files in Episode 4 can now also access your entire personal history of what you’ve noticed, decided, and captured. No third party holds it. No platform can remove it. And it compounds — every thought captured makes every future conversation more grounded.
series: [“Digital Sovereignty”]
Seven episodes ago the question was: what would a digital life look like if you actually owned it?
The answer came in layers. The operating system. The files. The cloud storage. The AI workflow. The music. The audiobooks. The VPS that ties it together into something you can access from anywhere on infrastructure under your administrative control.
That stack works. It’s the right foundation.
But a foundation that doesn’t remember isn’t as useful as one that does.
This episode is about the layer that makes the sovereign stack compound over time — the Open Brain.
The Problem With a File System
The filesystem MCP server from Episode 4 is genuinely useful. Claude can read your Obsidian vault, synthesize across notes, and help you build structured documents from raw material. That’s a real capability upgrade.
It has a ceiling.
Keyword search finds what you know to look for. If you want everything you’ve written about “identity transitions,” you can search for that phrase and find notes that contain it. But what about the notes where you were describing an identity transition without using those words? What about the entry from six months ago where you captured something about the KalaVira retirement that connects directly to a current question about how to introduce FTR publicly?
Semantic search finds those connections. It doesn’t match words — it matches meaning. The vector embedding of a thought captures its conceptual neighborhood, not just its vocabulary. Ask a question in plain language, and the search returns what’s conceptually closest, regardless of whether the words overlap.
That’s what Postgres with pgvector provides when you build it on top of your own captured thoughts. A knowledge base that understands what you mean.
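To make "closest by meaning" concrete, here is the distance math in miniature: a hand-rolled cosine similarity over toy three-dimensional vectors standing in for real 1536-dimensional embeddings. The vectors and phrases are invented for illustration; a real embedding model assigns these coordinates for you.

```javascript
// Cosine similarity: ~1.0 = same direction (same meaning), ~0 = unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy embeddings: pretend a model mapped these phrases into 3-D space.
// Real embeddings have 1536 dimensions, but the math is identical.
const identityShift = [0.9, 0.1, 0.2]; // "stepping away from an old name"
const identityQuery = [0.8, 0.2, 0.1]; // "identity transitions"
const groceryList   = [0.1, 0.9, 0.8]; // "buy oat milk"

cosineSimilarity(identityQuery, identityShift); // high: close in meaning
cosineSimilarity(identityQuery, groceryList);   // low: unrelated
```

Swap the toy arrays for real model embeddings and the same comparison ranks your entire thought history against a plain-language query, no shared vocabulary required.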
The Architecture
The Open Brain is three pieces:
1. The database. Postgres with the pgvector extension, running in Docker on Bastion. Postgres handles the storage and querying. pgvector adds a new data type — vector — that lets you store embedding arrays and run similarity searches against them efficiently. A thought goes in as text. The database stores both the original text and its vector embedding. Queries return the closest matches by cosine similarity.
2. The MCP server. A small Node.js process that exposes the database to Claude through the Model Context Protocol. When you’re in a Claude conversation, the MCP server provides tools: search_brain (semantic query), add_thought (capture new entry), recent_thoughts (last N entries), brain_stats (overview). Claude can use these tools mid-conversation, without you having to paste content in or tell it what to search for.
3. The capture workflow. Raw thoughts go in as Markdown. They get embedded via an embeddings API (Anthropic doesn't ship one of its own, so in practice that means a provider like Voyage AI or OpenAI) and stored in Postgres. The pipeline is simple by design: lower friction means more gets captured.
That’s it. No new infrastructure, no new services beyond what’s already running. Postgres is another Docker container. The MCP server runs as a process on the same machine. The total resource footprint is modest.
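The ingest step (piece 3) can be sketched end to end. This is a minimal sketch, not the actual pipeline: the embedding call is stubbed with a deterministic fake, and in practice you would swap in a real embeddings client plus a pg connection to execute the query.

```javascript
// Stub embedding: a deterministic fake 1536-dim unit vector, for demonstration
// only. Replace with a real embeddings API call in production.
function embedText(text) {
  const v = new Array(1536).fill(0);
  for (let i = 0; i < text.length; i++) v[i % 1536] += text.charCodeAt(i);
  const norm = Math.hypot(...v) || 1; // normalize so cosine math behaves
  return v.map((x) => x / norm);
}

// Build the parameterized INSERT a pg client would execute. pgvector accepts
// a '[0.1,0.2,...]' string literal cast to ::vector.
function buildInsert(content, { source = "cli", topics = [], people = [] } = {}) {
  const embedding = embedText(content);
  return {
    text: `INSERT INTO thoughts (content, embedding, source, topics, people)
           VALUES ($1, $2::vector, $3, $4, $5) RETURNING id`,
    values: [content, `[${embedding.join(",")}]`, source, topics, people],
  };
}
```

A working pipeline is this plus a `client.query(buildInsert(...))` call and some error handling; nothing about it requires infrastructure beyond what is already running.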
Setting Up Postgres with pgvector
On Bastion (the VPS from Episode 7), add Postgres to your Docker stack:
# standalone run (or adapt into your docker-compose.yml)
docker run -d \
--name openbrain-postgres \
--restart unless-stopped \
-e POSTGRES_USER=openbrain \
-e POSTGRES_PASSWORD=your_strong_password \
-e POSTGRES_DB=openbrain \
-p 5432:5432 \
-v /home/kyle/postgres-data:/var/lib/postgresql/data \
pgvector/pgvector:pg16
The pgvector/pgvector image is the official Postgres image with pgvector pre-installed. No manual extension compilation required.
Connect and initialize the schema:
docker exec -it openbrain-postgres psql -U openbrain -d openbrain
-- Enable pgvector
CREATE EXTENSION IF NOT EXISTS vector;
-- Create the thoughts table
CREATE TABLE thoughts (
id SERIAL PRIMARY KEY,
content TEXT NOT NULL,
embedding vector(1536),
source TEXT DEFAULT 'cli',
topics TEXT[],
people TEXT[],
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
archived BOOLEAN DEFAULT FALSE
);
-- Create the vector index for fast similarity search
CREATE INDEX ON thoughts USING hnsw (embedding vector_cosine_ops);
The 1536 dimension matches the output of OpenAI's text-embedding-3-small model. (Anthropic doesn't offer a first-party embeddings API; its documentation points to third-party providers such as Voyage AI.) If you're using a different embedding model, adjust the dimension to match its output.
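With the schema in place, a semantic search is one query. A sketch (the vector literal is abbreviated here; in practice the MCP server passes the full 1536-float embedding of your natural-language query as a parameter):

```sql
-- <=> is pgvector's cosine distance operator; similarity = 1 - distance.
SELECT id,
       content,
       1 - (embedding <=> '[0.011, -0.032, ...]'::vector) AS similarity
FROM thoughts
WHERE NOT archived
ORDER BY embedding <=> '[0.011, -0.032, ...]'::vector
LIMIT 5;
```

The vector index created above is what keeps that ORDER BY fast as the table grows.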
The MCP Server
The MCP server is what makes this usable through Claude rather than as a raw database query tool.
The server is a Node.js process that:
- Connects to Postgres
- Registers tools with the MCP protocol
- Handles each tool call by executing the appropriate database operation and returning the results
The tools it exposes:
search_brain — Takes a natural language query, generates an embedding for it, runs cosine similarity search against the thoughts table, returns the top N matches with similarity scores and original content.
add_thought — Takes content (and optional metadata: source, topics, people), generates an embedding, inserts into Postgres. Returns the new thought ID.
recent_thoughts — Returns the most recently captured thoughts, in reverse chronological order. Useful for context at the start of a session.
update_thought — Updates the content of an existing entry and re-embeds it.
archive_thought — Soft-deletes an entry. Archived thoughts are excluded from search and recent results by default but still in the database.
brain_stats — Returns a count of total thoughts, sources, and date range. Quick overview of what’s in the brain.
The server runs on localhost on Bastion, accessible to Claude via the MCP configuration you set up in Episode 4. Add it to your claude_desktop_config.json or Claude Code MCP configuration alongside the filesystem server.
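In claude_desktop_config.json, that registration looks roughly like this (the server name, script path, and connection string are placeholders for wherever your server actually lives):

```json
{
  "mcpServers": {
    "openbrain": {
      "command": "node",
      "args": ["/home/kyle/openbrain-mcp/server.js"],
      "env": {
        "DATABASE_URL": "postgresql://openbrain:your_strong_password@localhost:5432/openbrain"
      }
    }
  }
}
```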
The Capture Workflow
Infrastructure without a capture habit is just an empty database.
The discipline that makes this work is simple: raw daily notes stay in Obsidian, but anything worth keeping beyond the session gets crystallized and ingested into the brain.
The distinction matters. Not everything captured in a daily note belongs in the brain. Session-specific context, task lists, in-progress thinking — those live in Obsidian. What goes into the brain:
- Decisions made. When you resolve something that kept coming back, capture the resolution. Not the deliberation — the conclusion.
- Insights that surprised you. If something shifted your understanding, it’s brain-worthy.
- Status updates on evolving situations. Where something stands now, stated cleanly.
- Frameworks and filters you actually use. The Sovereignty Decision Filter from Episode 0 is an example — a decision heuristic that applies to many future situations.
- Arc-level observations. “This is what the KalaVira→CycleSage transition felt like from inside it.” Things that capture the shape of an experience, not just the events.
The capture process: write a clean, standalone statement in Markdown. It should make sense without surrounding context — the brain retrieves moments, so encode them to be understood alone. Feed it to add_thought.
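For illustration, here is what a crystallized entry might look like as an add_thought call. The content and tags are invented; the field names mirror the tool description above:

```json
{
  "content": "Decided: the Open Brain stores conclusions, not deliberations. If a thought needs the surrounding conversation to make sense, it isn't crystallized yet.",
  "source": "cli",
  "topics": ["open-brain", "capture-discipline"],
  "people": []
}
```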
Over time, this creates something a file system can’t provide: a personally meaningful semantic space. When you ask Claude what you know about identity transitions, it’s not searching your words — it’s searching your meaning, accumulated across every thought you’ve ingested.
What This Changes
The practical difference shows up in conversations with Claude.
Without the Open Brain, Claude knows the current conversation and whatever files you’ve pointed the filesystem server at. It’s a capable tool with a bounded context.
With the Open Brain, Claude can start a conversation with recent_thoughts — seeing what you’ve been tracking — and then use search_brain to pull in relevant context as the conversation develops. If you’re drafting an FTR piece about identity and sovereignty, Claude can retrieve your captured thoughts on both topics, surface the connections you’ve made previously, and work with that accumulated context rather than starting from zero.
The AI is no longer only as good as what you can remember to paste in. It’s as good as what you’ve been disciplined enough to capture.
That’s a different relationship with your own knowledge. One that compounds.
The Sovereignty Accounting
Run it through the filter.
Who holds the data? You do. Postgres on your VPS. The embeddings are vectors in your database. The original text is in your database. The embeddings provider's API receives the text at ingest time to generate embeddings — that's the one external touchpoint — but the embeddings themselves and the storage are yours.
Can it be taken away? No more than your VPS can be taken away. One provider to trust, who you could migrate off of in an afternoon if needed.
What’s the cost? Postgres is another Docker container on existing infrastructure. The MCP server is a lightweight process. The embeddings API costs fractions of a cent per thought ingested. At normal capture volumes — a few thoughts per day — the monthly cost is negligible. The stack still costs around $17/month.
What happens if you stop using it? Your thoughts are in Postgres, which stores standard SQL. Export them with a single query. They’re yours regardless of what happens to the MCP server or any surrounding tooling.
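That single query might look like this: a sketch of a CSV export run from psql, leaving you with a plain-text file no tooling can hold hostage. The embedding column is omitted since vectors can always be regenerated from the text:

```sql
-- Export every thought, archived included, as CSV on stdout
COPY (
  SELECT id, content, source, topics, people, created_at, archived
  FROM thoughts
  ORDER BY created_at
) TO STDOUT WITH (FORMAT csv, HEADER);
```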
The Bigger Picture
Everything in this series has been about the same move, applied to different domains: removing the intermediary that profits from your dependency.
Windows → an OS you own. Google Drive → encrypted storage you hold the keys to. Spotify → a music library that’s yours. Audible → audiobooks on your hardware. Hosted publishing platforms → Ghost on a server you control.
The Open Brain is the same move applied to your knowledge. The alternative — Notion, Roam, Obsidian Sync with a cloud provider’s embedding service, any of the “second brain” SaaS products — is someone else holding the keys to your thinking. Not just your files. Your patterns of thought, your decisions over time, your accumulated understanding of how the world works.
That’s a different order of intimacy than cloud file storage. The sovereign response to “someone else holding your thinking” is to build the infrastructure yourself.
This is what it looks like to do that.
The Stack, Complete
At the end of this series, running everything:
- Bazzite / Aurora Linux — immutable, zero telemetry, your hardware works for you
- Filen — end-to-end encrypted sync, zero-knowledge, keys held only by you
- Bitwarden / Signal / LocalSend — communication and security that treat your data as yours
- Ghost + Mailgun — publishing infrastructure you own, email list you control
- Navidrome — your music library, streamed from your server to any device
- Audiobookshelf — your audiobook library, liberated from DRM, hosted on your infrastructure
- Claude Desktop / Claude Code + MCP — AI that reads your local files without uploading them
- Open Brain (Postgres + pgvector + MCP server) — your thoughts, semantically searchable, on your own infrastructure
Monthly cost: ~$17.
Privacy exposure to surveillance capitalism: significantly reduced.
Platform dependencies you could not replace in an afternoon: zero.
That’s the destination. It took eight episodes to document the journey, and the journey is reproducible.
Resources
- pgvector: github.com/pgvector/pgvector — vector similarity search for Postgres
- pgvector Docker image: pgvector/pgvector:pg16 on Docker Hub — Postgres with pgvector pre-installed
- Model Context Protocol: modelcontextprotocol.io — MCP documentation and server specs
- Embeddings guide: docs.anthropic.com — Anthropic’s embeddings documentation, which points to third-party providers (e.g. Voyage AI) for generating thought vectors
- Claude Code: claude.ai/download — official Anthropic CLI for Linux, MCP-capable
This is the end of the From the Rubble Digital Sovereignty Series. The stack is documented. The tools are real. The process is reproducible. Start where you are, move at your own pace, and build toward infrastructure that serves your values rather than someone else’s.
If you found this series useful, the best thing you can do is share it with one person who needs it. No algorithm required.
From the Rubble is written by Kyle — Marine veteran, FDN-P practitioner, 30-year conspiracy realist. Digital sovereignty, health sovereignty, and the overlap between them. No corporate funding. No ads. No permission required.