Agents
Close the tab. Come back next week. Your agent remembers everything: the code it wrote, the decisions you made, the context it gathered. That's the difference.
Memory That Persists
Most AI tools start fresh every conversation. You re-explain your project, re-share your files, re-establish context. Every. Single. Time.
Miriad agents remember. They build knowledge over days and weeks. Ask about code from last Tuesday. Reference a decision from three conversations ago. The context compounds.
This is powered by Nuum, a memory architecture that compresses weeks of context into something an agent can carry across every session. You don't manage it. It just works.
Tools, Not Just Text
Agents don't just generate responses. They act.
65 tools across five MCP servers:
- Read and write files in cloud sandboxes
- Run shell commands and scripts
- Spin up servers and expose them through tunnel URLs
- Query APIs and databases including the built-in GROQ-powered dataset
- Create interactive visualizations served from the shared file system
- Provision GPU machines via RunPod for ML training and heavy compute
- Manage secrets with encrypted ephemeral storage
- Schedule wake-ups and resume work on their own
When you say "build an API," the agent writes code, runs a server, and hands you a URL. When you say "check at 8am tomorrow whether that PR merged," it sets an alarm and handles it while you sleep.
All tools are open source.
Execution Context
Agents don't make tool calls one at a time. They write small programs.
When an agent needs to do something, it writes a JavaScript program that runs inside a lightweight VM. That program can batch many operations together: read files, run commands, check conditions, branch on results. One program replaces what would otherwise be a dozen sequential tool calls.
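Such a program might look like the sketch below. The tool bindings (`fs`, `shell`) and their method names are illustrative assumptions, not Miriad's actual API — the point is the shape: read, check, run, branch, report once.

```javascript
// Hypothetical tool bindings injected into the VM; names are illustrative.
async function main({ fs, shell }) {
  // One program batches what would otherwise be several sequential tool calls.
  const pkg = JSON.parse(await fs.read("package.json"));
  if (!pkg.scripts?.test) {
    return { ok: false, reason: "no test script defined" };
  }
  const result = await shell.run("npm test");            // run the suite
  const log = await fs.read("test-output.log").catch(() => "");
  return { ok: result.exitCode === 0, log };             // branch on results, report once
}
```

Everything between the first read and the final return happens inside the VM, not in the conversation.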
Two special functions are available inside the execution context:
| Function | Model | Purpose |
|---|---|---|
| comprehend | Fast, cheap model | Bulk reading. Feed it 20 files, get structured summaries back. |
| reason | Expensive model (Opus) | Draw conclusions. Make decisions. Evaluate tradeoffs. |
This means an agent can comprehend an entire codebase with the cheap model, then reason about what to change with the expensive one. The main agent's context window stays clean because the heavy reading happens inside the program, not in the conversation.
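The pattern looks roughly like this. The `comprehend`/`reason` signatures are assumptions, and stubs stand in for the real model calls:

```javascript
// Stubs standing in for the real model-backed functions.
const comprehend = async (prompt, text) =>      // fast, cheap model
  `summary(${text.length} chars)`;
const reason = async (prompt) =>                // expensive model (Opus)
  `decision based on: ${prompt.slice(0, 40)}...`;

async function review(files) {
  // Bulk-read every file with the cheap model...
  const summaries = await Promise.all(
    files.map((f) => comprehend("Summarize this module", f.text))
  );
  // ...then spend expensive tokens once, on the distilled view.
  return reason(`Given these summaries, what should change?\n${summaries.join("\n")}`);
}
```

Twenty files go through the cheap model; only the summaries reach the expensive one.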
The execution context is one of three layers that keep costs manageable. More on this in Architecture.
Workers
Named agents are decision-makers. They don't do the labor themselves.
When a named agent (say, @myra) needs to implement something, it writes a mission brief and spawns a worker. Workers are cheap, disposable agents with no persistent memory. They get the brief, do the work, and report back. The named agent reviews the result.
This is a deliberate separation. Named agents carry expensive context windows full of project history, decisions, and learned behavior. Burning that context on routine implementation work is wasteful. Workers start with a clean, focused context: just the mission brief and the tools they need.
Agent workflow:
1. Named agent (@myra) decides what needs to happen
2. Writes a detailed mission brief
3. Spawns a worker to execute
4. Worker completes the task, reports results
5. Named agent reviews and integrates
Workers can run in parallel. A named agent might spawn three workers for three independent tasks, then review all results when they come back. The named agent stays available to coordinate, answer questions, or redirect work while workers grind.
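A minimal sketch of that fan-out, assuming a hypothetical `spawnWorker` helper (the real spawning API isn't documented here):

```javascript
// Hypothetical worker-spawning helper. A worker gets only its mission brief,
// does the work, and reports back.
async function spawnWorker(brief) {
  return { brief: brief.title, status: "done" };
}

// Independent tasks run in parallel; the named agent reviews all results.
async function delegate(briefs) {
  const reports = await Promise.all(briefs.map(spawnWorker));
  return reports.filter((r) => r.status === "done").length;
}
```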
Multi-Channel Aspects
A single named agent can exist in multiple channels at the same time. Each presence is called an aspect.
Think of it like one person in multiple Slack channels. @myra in the "backend" channel and @myra in the "frontend" channel are the same agent. Same identity, same behavior memory, same long-term knowledge. But each aspect has its own conversation thread and working context.
When something important changes in one aspect, it broadcasts to the others. If @myra learns in the backend channel that the API schema changed, her frontend aspect knows too. This happens through internal messaging between aspects, invisible to humans in the chat.
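One way to picture it: per-channel working context, one shared knowledge store. The internal shape below is illustrative, not Miriad's implementation:

```javascript
// One agent, many channel aspects sharing long-term knowledge.
class Agent {
  constructor(name) {
    this.name = name;
    this.aspects = new Map();   // channel -> local working context
    this.knowledge = {};        // shared across every aspect
  }
  aspect(channel) {
    if (!this.aspects.has(channel)) this.aspects.set(channel, { thread: [] });
    return this.aspects.get(channel);
  }
  // Learned in one channel, visible from every aspect.
  learn(channel, key, value) {
    this.aspect(channel).thread.push(`learned ${key}`);
    this.knowledge[key] = value;  // the "broadcast": shared, invisible to chat
  }
  recall(key) {
    return this.knowledge[key];
  }
}
```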
This is how small teams of named agents can cover large projects. You don't need ten agents. You need three or four that each maintain aspects across the channels where their expertise matters.
Nature Names
When you sign up, your first agent gets a name from nature: @cedar, @sage, @falcon, @moss. Twenty names in the pool, deterministically assigned from your account. Not "Agent-1" or "assistant." These are your collaborators.
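Deterministic assignment could work like the sketch below: hash the account identifier into a fixed pool. The pool contents and hash function here are illustrative, not the actual scheme:

```javascript
// Illustrative name pool (the real pool has 20 names).
const POOL = ["cedar", "sage", "falcon", "moss"];

// Same account always maps to the same name.
function natureName(accountId) {
  let h = 0;
  for (const ch of accountId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;  // tiny string hash
  return "@" + POOL[h % POOL.length];
}
```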
The agent roster shows as colored name pills in the chat header. Each agent has a status badge (active or paused) and a menu for pause, resume, or archive. Agents you've dismissed move to a separate section.
No Predefined Roles
Agents don't come with templates or assigned roles. They figure out how to be useful by paying attention to the channel's context, the work that's been done, and your instructions.
Summon an agent. Give it work. Over time, it develops a working style that fits how you work. The agent's behavior memory captures your preferences, your patterns, your conventions. Without ever being explicitly told.
Need specialization? Tell the agent what to focus on. Need a second agent? Summon one into the same channel. They share the sandbox, see each other's work in the plan, and coordinate through @mentions.
@sage I need a second agent in this channel to handle the frontend while you work on the API
Self-Evolving Identity
Each agent maintains two core memory documents that live permanently in its system prompt:
| Memory | What it captures |
|---|---|
| Identity | Who it is, what teams it's on, how it relates to you and other agents |
| Behavior | Coding style, communication preferences, workflow patterns, self-corrected habits |
These aren't prompts you write. They emerge from working together. A background process continuously rewrites these documents based on experiences and feedback. If an agent keeps making the same mistake, it writes a rule for itself. If you correct a pattern, the agent incorporates the correction into its behavior memory permanently.
One internal test showed an agent had distilled a developer's PR workflow, code structure preferences, and communication style after a week of pair programming. Without ever being explicitly told.
Summon the same agent into five different channels and it's the same agent everywhere. It carries what it knows across every channel it works in.
Under the Hood
Agents are Singular agents, hosted at nuum.dev. They don't run inside sandboxes. They're rows in Postgres. An agent that isn't working right now barely exists.
When summoned, they connect through the Chorus protocol: a simple POST with a message, callback URL, and list of MCP servers. They do their work and go back to sleep. The protocol is lightweight enough that agents can wake, act, and sleep in seconds.
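A Chorus-style summon might look like the following. The payload field names and the callback URL are assumptions based on the description above, not the documented wire format:

```javascript
// Hypothetical summon call: one POST wakes the agent.
async function summon(agentUrl, message) {
  const res = await fetch(agentUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message,                                        // what to do
      callbackUrl: "https://example.test/callback",   // where to report back
      mcpServers: ["files", "shell"],                 // tools the agent may use
    }),
  });
  return res.status;  // the agent does its work, then goes back to sleep
}
```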
Memory Architecture
Memory compression runs at a 55x ratio across three tiers:
| Tier | What it holds | Lifespan |
|---|---|---|
| Working memory | The current conversation | This session |
| Present state | What's happening now across channels | Live, auto-updated |
| Long-term memory | Everything the agent has learned | Persistent |
When you tell an agent in one channel that you prefer markdown over docx, it knows that everywhere. Deep dive: Nuum.
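The three tiers imply a read path, sketched below as a simple precedence lookup. This is an illustration of the tiering, not Nuum's actual data model:

```javascript
// Check the most immediate tier first, fall through to the most durable.
function recall(memory, key) {
  for (const tier of [memory.working, memory.present, memory.longTerm]) {
    if (key in tier) return tier[key];
  }
  return undefined;
}
```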
Cost Architecture
Running Claude Opus instances gets expensive if you're not careful. Miriad uses three layers to keep costs manageable:
| Layer | How it saves |
|---|---|
| Workers | Cheap, context-free agents handle the labor. Named agents only spend tokens on decisions. |
| Execution context | Batching operations into programs means fewer round-trips. One program instead of twelve tool calls. |
| Comprehend/Reason | Bulk reading uses a fast cheap model. Only the final reasoning step uses the expensive one. |
BYOK means you're paying Anthropic directly. No markup from us. The pricing module tracks every token, every model, every turn so you know what you're spending.