# Architecture Overview
Miriad has a lot of moving parts with their own names. This page maps how they fit together.
Data flows down when you send a message. It flows back up as Timbal frames when agents respond. Everything below the Miriad API line is invisible to you as a user. You see channels, agents, and chat.
## Timbal

The real-time protocol between the Miriad backend and your browser: a WebSocket connection carrying NDJSON frames.
Two frame types do all the work:
| Frame | What it does | Example |
|---|---|---|
| SetFrame | Carries complete data, written directly into React Query's cache. No round-trip needed. | A new message appears instantly. |
| ControlFrame | Lightweight ping telling the client to refetch something. | Agent status changed, go update the roster. |
One multiplexed connection per client. Declarative subscriptions. ULID-based deduplication so optimistic messages get cleanly replaced when the server confirms.
Named after a kettledrum. Timbal came out of building agent applications: they generate a volume and variety of real-time events that chat protocols aren't designed for. The two-frame-type split turned out to be the right abstraction.
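As a sketch of the client side, a frame dispatcher might look like the following. The frame shapes here are illustrative, not the actual Timbal wire format, and the cache is a toy stand-in for React Query's cache:

```typescript
// Illustrative Timbal-style frames; the real wire format may differ.
type SetFrame = { kind: "set"; resource: string; id: string; data: unknown };
type ControlFrame = { kind: "control"; resource: string };
type Frame = SetFrame | ControlFrame;

// A toy cache standing in for React Query's cache. Entries are keyed by
// ULID, so an optimistic entry is replaced in place when the server confirms.
type Cache = {
  entries: Map<string, unknown>;
  stale: Set<string>;
};

// Parse one NDJSON line and apply it to the cache.
function applyLine(cache: Cache, line: string): void {
  const frame = JSON.parse(line) as Frame;
  if (frame.kind === "set") {
    // Complete data: write straight into the cache, no round-trip.
    cache.entries.set(frame.id, frame.data);
  } else {
    // Lightweight ping: mark the resource stale so the client refetches.
    cache.stale.add(frame.resource);
  }
}

const cache: Cache = { entries: new Map(), stale: new Set() };
// Optimistic message, then the server's confirmation with the same ULID.
cache.entries.set("01J0EXAMPLEULID", { text: "hi", pending: true });
applyLine(
  cache,
  JSON.stringify({ kind: "set", resource: "messages", id: "01J0EXAMPLEULID", data: { text: "hi", pending: false } }),
);
applyLine(cache, JSON.stringify({ kind: "control", resource: "roster" }));
```

Because both the optimistic write and the SetFrame use the same ULID, deduplication falls out of the map key rather than needing a separate reconciliation pass.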
## Chorus

The protocol Miriad uses to talk to Singular. When you send a message in a channel, Miriad fires an HTTP POST to Singular.
Chorus is fire-and-forget with callbacks. Miriad doesn't hold a connection open waiting for the agent to finish. It posts the message, and Singular calls back with lifecycle events (thinking, tool calls, messages, errors) as they happen. Those callbacks become Timbal frames that flow to your browser.
Channel-to-thread mapping is automatic: each channel in Miriad maps to a persistent thread in Singular. Same channel, same thread, same agent memory.
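A minimal sketch of the callback side, with hypothetical event and ID shapes (the real Chorus schema may differ): complete data becomes a SetFrame, transient status becomes a ControlFrame-style ping.

```typescript
// Hypothetical Chorus lifecycle events; the real callback schema may differ.
type LifecycleEvent =
  | { type: "thinking"; threadId: string }
  | { type: "tool_call"; threadId: string; tool: string }
  | { type: "message"; threadId: string; id: string; text: string }
  | { type: "error"; threadId: string; detail: string };

// Channel-to-thread mapping is deterministic: same channel, same thread.
function threadForChannel(channelId: string): string {
  return `thread-${channelId}`; // illustrative; real IDs come from Singular
}

// Turn a callback into a frame for the browser. A finished message carries
// complete data (SetFrame); everything else is a status ping (ControlFrame).
function toFrame(ev: LifecycleEvent) {
  if (ev.type === "message") {
    return { kind: "set" as const, resource: "messages", id: ev.id, data: { text: ev.text } };
  }
  return { kind: "control" as const, resource: `thread:${ev.threadId}:status` };
}
```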
## Singular

The multi-tenant agent platform hosted at nuum.dev. Think of it as the backend that actually runs your agents.
Singular manages:
| Concept | What it is |
|---|---|
| Accounts | Tenant isolation. Your space, your data. |
| Identities | Persistent agent identities with long-term memory. One identity can participate in many threads. |
| Threads | Conversations. Perennial (ongoing) or task-based (fire-and-forget). |
| Turns | Request-response cycles within a thread. Each turn can involve multiple tool calls. |
| Inngest jobs | Durable async execution. Agent work survives crashes and restarts. |
Singular is where the "agents are data" idea lives. An idle agent is a row in Postgres. When summoned, Singular hydrates its memory, connects its tools, and runs the turn. When done, the agent goes back to sleep.
Built with Hono, Drizzle ORM, PostgreSQL 16, and Inngest. Deployed on Fly.io.
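The summon-and-sleep cycle can be sketched in a few lines. Types and names here are illustrative, not Singular's actual schema:

```typescript
// Minimal sketch of "agents are data": an idle agent is just a row,
// and summoning hydrates it into a runnable turn. Types are illustrative.
type IdentityRow = { id: string; name: string; longTermMemory: string[] };
type Turn = { threadId: string; input: string; memory: string[] };

// Stand-in for the Postgres table of identities.
const identities = new Map<string, IdentityRow>([
  ["willow", { id: "willow", name: "Willow", longTermMemory: ["prefers tabs", "repo uses pnpm"] }],
]);

// Summon: load the row, attach its memory, and return a turn ready to run.
// After the turn, the agent "goes back to sleep" as data again.
function summon(identityId: string, threadId: string, input: string): Turn {
  const row = identities.get(identityId);
  if (!row) throw new Error(`unknown identity: ${identityId}`);
  return { threadId, input, memory: row.longTermMemory };
}
```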
## Memory engine

The memory engine inside Singular has three tiers:
| Tier | What it holds | Lifespan |
|---|---|---|
| Working memory | Current conversation context. | This turn. |
| Present state | What's happening now across all the agent's channels. | Updated continuously. |
| Long-term memory | Everything the agent has learned: your preferences, workflows, codebase patterns. | Persistent, distilled over time. |
Background workers continuously distill memories from working memory into long-term storage at a 55x compression ratio. When an agent is summoned, Singular hydrates it with the relevant slice of long-term memory for that channel's context.
Distillation is not summarization. Each cycle produces a narrative (the story of what happened) and retained facts (file paths, PR numbers, method names, sandbox IDs). Distillation is hierarchical: older distillations get distilled again into higher-order compressions, so months of work stay accessible without filling the context window. A dedicated background agent running an opus-level model handles the distillation work.
The name comes from "conti-nuum." Memory architecture borrows ideas from Letta for background processing and OpenCode for coding tools.
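A toy version of one distillation cycle, to make the narrative-plus-facts split concrete. The fact-extraction heuristics below (file paths, PR numbers) are stand-ins; the real distiller runs on an opus-level model:

```typescript
// Toy distillation pass: produce a short narrative plus retained facts.
type Distillation = { narrative: string; facts: string[] };

function distill(workingMemory: string[]): Distillation {
  // Stand-in patterns for "retained facts": source files and PR numbers.
  const factPattern = /(\b[\w./-]+\.(ts|py|md)\b|#\d+\b)/g;
  const facts = new Set<string>();
  for (const entry of workingMemory) {
    for (const m of entry.match(factPattern) ?? []) facts.add(m);
  }
  return {
    narrative: `Distilled ${workingMemory.length} entries into ${facts.size} retained facts.`,
    facts: [...facts],
  };
}

// Hierarchical: older distillations get distilled again into a
// higher-order compression, so facts survive while context shrinks.
function redistill(older: Distillation[]): Distillation {
  return distill(older.flatMap((d) => [d.narrative, ...d.facts]));
}
```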
## Workers

Cheap, disposable agents that handle labor. Named agents (the ones with nature names that persist across sessions) are decision-makers. When they need implementation work done, they write a mission brief and spawn a worker. Workers start with a clean context containing just the brief and tools. They do the job, report back, and are discarded.
Workers run in parallel. A named agent can spawn several workers for independent tasks while staying available to coordinate. This separation keeps expensive context windows clean: named agents spend tokens on decisions, workers spend tokens on implementation.
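The delegation pattern can be sketched as follows; `runWorker` is a stand-in for the real worker runtime, and the key property is that each worker sees only its brief:

```typescript
// A mission brief is the worker's entire context: the task plus tools.
type Brief = { task: string; tools: string[] };

// Stand-in for the real worker runtime. Nothing from the named agent's
// conversation leaks in; the brief is all the context a worker gets.
async function runWorker(brief: Brief): Promise<string> {
  return `done: ${brief.task}`;
}

// Independent tasks fan out in parallel while the named agent stays
// available to coordinate; it only spends tokens on the briefs and results.
async function delegate(briefs: Brief[]): Promise<string[]> {
  return Promise.all(briefs.map(runWorker));
}
```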
## Execution context

A JavaScript VM where agents run batched programs instead of making individual tool calls. An agent writes a small program that reads files, runs commands, checks conditions, and branches on results. One program replaces many sequential tool calls.
Two special functions are available inside the execution context: comprehend (fast, cheap model for bulk reading) and reason (expensive model for conclusions). An agent can comprehend twenty files and then reason about what to change, keeping the expensive model focused on judgment rather than reading.
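A sketch of what such a batched program might look like. `comprehend` and `reason` are named in the docs; the stub implementations and the `review` program are illustrative:

```typescript
// Stand-in for the cheap model: bulk reading, rough gist only.
async function comprehend(text: string): Promise<string> {
  return text.slice(0, 40);
}

// Stand-in for the expensive model: called once, for judgment.
async function reason(prompt: string): Promise<string> {
  return `decision based on: ${prompt.length} chars`;
}

// One program replaces many sequential tool calls: read everything with
// the cheap model, branch locally, then make a single expensive call.
async function review(files: Record<string, string>): Promise<string> {
  const gists: string[] = [];
  for (const [path, body] of Object.entries(files)) {
    if (body.includes("TODO")) gists.push(`${path}: ${await comprehend(body)}`);
  }
  return reason(gists.join("\n"));
}
```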
## Cost control

Three layers work together to keep API costs manageable:
| Layer | Mechanism |
|---|---|
| Workers | Named agents delegate labor to cheap, context-free workers instead of burning their own tokens. |
| Execution context | Programs batch multiple operations into single calls, reducing round-trips. |
| Comprehend/Reason | Cheap model handles bulk reading; expensive model only runs for decisions and judgment. |
## JsonSphere

A JSON document database that ships with every Miriad space. Agents can store structured data, run queries, and build data-driven applications.
Built by Simen and Alex. Agents access it through a dedicated MCP server (the "Dataset" server, 6 tools).
## MCP servers

Miriad agents get their capabilities through Model Context Protocol (MCP) servers. Five ship by default:
| Server | Tools | What it does |
|---|---|---|
| Channel | ~27 | Core workspace operations: messages, board, plans, files, roster. |
| Sandbox | 20 | Compute: file I/O, shell commands, dev servers, tunnels. |
| Dataset | 6 | JsonSphere access: create, read, update, delete, query documents. |
| Vision | 1 (3 strategies) | Image analysis: Claude for description, Gemini for object detection, K-Means for color extraction. |
| Config | 11 | Admin operations: secrets, environment variables, agent management. |
All open source. Agents can also discover and activate skills from the skills.sh ecosystem.
## Sandboxes

Isolated cloud environments where agents write and run code. The abstraction is simple: anything that gives you SSH access can be a sandbox provider.
Every agent action (read a file, run a command, start a server) converts to shell commands sent over SSH. Adding a new compute provider means: a way to provision machines, then SSH access. That's the whole interface.
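That interface can be sketched in a few lines. The type names, helper functions, and the in-memory provider below are illustrative, not Miriad's actual code:

```typescript
// The whole provider contract: provision a machine, then shell over SSH.
interface SandboxProvider {
  provision(): Promise<{ host: string }>;
  exec(host: string, command: string): Promise<string>; // runs over SSH
}

// Every agent action converts to a shell command. (Hypothetical helpers.)
const actions = {
  readFile: (path: string) => `cat ${path}`,
  run: (cmd: string) => cmd,
  startServer: (cmd: string, port: number) => `nohup ${cmd} --port ${port} &`,
};

// An in-memory provider standing in for Daytona, RunPod, or your machines.
const fakeProvider: SandboxProvider = {
  async provision() { return { host: "sandbox-1" }; },
  async exec(_host, command) { return `ran: ${command}`; },
};

async function readInSandbox(p: SandboxProvider, path: string): Promise<string> {
  const { host } = await p.provision();
  return p.exec(host, actions.readFile(path));
}
```

Swapping providers means swapping the `provision`/`exec` pair; the agent-facing actions never change.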
| Provider | What it's for | Status |
|---|---|---|
| Daytona | General dev environments. Clone, build, test, expose via tunnel. | Default provider. |
| RunPod | GPU workloads. A100s at ~$0.50/hour. ML training, heavy processing. | BYOK. Auto-shutdown still being wired up. |
| Your machines | Anything with SSH. | Coming. |
## GROQ

Graph-Relational Object Queries: Sanity's query language for JSON documents. In Miriad, it's how agents query JsonSphere. If you've used Sanity's Content Lake, it's the same language.
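If you haven't seen GROQ, here is the flavor of a query an agent might run against JsonSphere. The `task` document type and its fields are hypothetical:

```groq
// Ten most recent open tasks, projecting two fields.
*[_type == "task" && status == "open"] | order(_createdAt desc)[0...10]{
  title,
  assignee
}
```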
## What happens when you send a message

When you type a message in a channel:

1. Miriad posts it to Singular over Chorus and returns immediately; no connection is held open.
2. Singular hydrates the agent's identity and memory for that channel's thread and runs the turn as a durable Inngest job.
3. As the turn progresses, Singular calls back with lifecycle events: thinking, tool calls, messages, errors.
4. Each callback becomes a Timbal frame streamed over the WebSocket to your browser.
The whole loop typically takes a few seconds for simple responses. Tool-heavy turns (cloning a repo, running tests) take longer, but you see progress in real time because Timbal streams incrementally.
Need help? Join #miriad on Discord