Rust + V8 host
The binary embeds V8, installs host operations, serves HTTP/WebSocket traffic, manages worker isolates, and can build startup snapshots.
read this first
Moo does not sandbox tool execution, add guardrails, or ask for confirmation before running tools. If an agent can call a tool, that tool can access whatever it normally can on your machine.
inspired by indent.com · much smaller on purpose
A small, inspectable prototype centered on code-first JavaScript tooling: expressive async scripts, orthogonal host APIs, a Rust/V8 runtime, SQLite-backed state, MCP connections, model adapters, and experimental embedded apps for tasks that need controls instead of another chat reply.
100% vibe coded in a day. Expect prototype-level rough edges, missing hardening, and implementation choices made for speed.
I made a tiny app for this.
Open it beside the chat.
It can keep state and call its paired tool.
The TypeScript harness makes tools ordinary async JavaScript. Orthogonal APIs—filesystem, process, HTTP, memory, UI, and MCP—make scripts expressive, powerful, and fast to change.
The included Solid app is one client for chats, timelines, memory, apps, and MCP setup. The UI can be swapped, embedded, or run elsewhere.
Agents can hand a chat a small HTML/CSS/JS app. Moo renders it in a side-panel iframe with per-instance state, memory access, and app-defined host calls.
Objects, refs, chats, and RDF-style facts live in SQLite. That is easy to inspect, but state sync, sharing, and migration are left to you.
MCP servers can be configured with HTTP or SSE transports, headers, OAuth metadata, and dynamic tool calls from JavaScript.
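A dynamic MCP call from a tool script might look like the sketch below. The method names (`listTools`, `call`) and the server name `docs-server` are assumptions for illustration; the text above only says scripts can list servers and tools and call MCP methods.

```javascript
// Sketch: pick an MCP tool at runtime and invoke it.
// moo.mcp.listTools / moo.mcp.call are assumed names, not a documented API.
async function callFirstSearchTool(moo, query) {
  const tools = await moo.mcp.listTools("docs-server"); // hypothetical server name
  const search = tools.find((t) => t.name.includes("search"));
  if (!search) throw new Error("no search tool exposed by docs-server");
  return moo.mcp.call("docs-server", search.name, { query });
}
```

Because the tool list is fetched at call time, the script keeps working when the server adds or renames tools, as long as one still matches.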
The repo has separate Rust, harness, and web builds, plus a process-compose setup for rebuilding the harness, backend, and Vite UI together.
when markdown is not enough
An agent can attach a small, purpose-built interface to a chat: a form, dashboard, file helper, visualization, checklist, or other control surface that is easier to use than another wall of text. It is experimental, but it is a useful part of the design.
window.moo.call() lets the app ask its paired tool code to do real work.
Apps are plain static bundles: HTML, CSS, JavaScript, optional files, and a manifest. That makes them easy to inspect, replace, and serve inside the existing web client.
The iframe gets window.moo helpers for app state, memory queries and writes, opening another app, and calling app-specific backend actions.
Apps can be associated with a chat and reopened from an “app ready” launcher, so custom interfaces can sit beside the conversation they came from.
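App-side glue code can stay small. The sketch below assumes the window.moo surface described above: per-instance state helpers plus call() for app-defined backend actions. Only call() is named in the text; getState/setState are guessed names for the state helpers.

```javascript
// Sketch: refresh an app's checklist from its paired tool.
// getState/setState are assumed helper names; call() is described above.
async function refreshChecklist(moo) {
  const state = (await moo.getState()) ?? { items: [] }; // per-instance state
  const result = await moo.call("list_tasks", {});       // app-defined backend action
  state.items = result.tasks;
  await moo.setState(state);                             // persist for reopen
  return state.items;
}
```

Keeping the state round-trip in one function means the app shows the same list whether it was just opened or reopened from the launcher.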
code-first tooling
Moo tools are code first, not form first: agent logic talks to the host through an async moo object. JavaScript makes the tool layer expressive, powerful, and fast to iterate on, while small orthogonal namespaces keep host power visible where it crosses the boundary.
moo.fs / moo.proc: Read, write, list, glob, stat, and run subprocesses. These are powerful host calls, not sandboxed capabilities.
moo.objects / moo.refs: Store content-addressed blobs and move named pointers with compare-and-set support.
moo.facts / moo.memory: Write and query RDF-style triples, use named graphs, inspect history, and define per-project memory scopes.
moo.http / moo.env: Fetch, stream, and read environment variables for model keys or other integrations.
moo.chat / moo.ui: Create, list, archive, title, ask, choose, say, and register chat apps. The UI consumes this state, but it does not have to be the only client.
moo.mcp / moo.events: List servers and tools, call MCP methods, publish events, and connect external systems without baking them into the harness.
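The compare-and-set support on refs enables a standard optimistic-concurrency pattern: read the pointer, build a new blob, and move the ref only if it has not changed. The method names below (`get`, `put`, `compareAndSet`) are assumptions; the list above only states that blobs are content-addressed and refs support compare-and-set.

```javascript
// Sketch: append to a log stored as a content-addressed blob behind a ref,
// retrying if another writer moves the ref between read and write.
async function appendEntry(moo, refName, entry) {
  for (let attempt = 0; attempt < 5; attempt++) {
    const current = await moo.refs.get(refName);             // current pointer (or null)
    const log = current ? JSON.parse(await moo.objects.get(current)) : [];
    log.push(entry);
    const next = await moo.objects.put(JSON.stringify(log)); // new content-addressed blob
    // Move the ref only if nobody else moved it since we read it.
    if (await moo.refs.compareAndSet(refName, current, next)) return next;
  }
  throw new Error("ref " + refName + " kept moving; giving up");
}
```

The retry loop makes concurrent tool runs safe without locks: a lost race just costs one extra read-modify-write.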
bring your own model key
Moo has provider adapters in the TypeScript harness, with provider and model selection stored per chat. Defaults can be overridden with environment variables or configured model lists.
OpenAI: Uses OPENAI_API_KEY, optional OPENAI_BASE_URL, and OpenAI-compatible chat completions. Newer GPT models can use the Responses API path and reasoning effort settings.
Anthropic: Uses ANTHROPIC_API_KEY and the Messages API. Claude 3.7 and Claude 4-style models can map effort levels to thinking budgets.
Qwen: Uses QWEN_API_KEY or DASHSCOPE_API_KEY against DashScope's OpenAI-compatible endpoint, with Qwen model families in the picker.
MOO_LLM_MODELS, OPENAI_MODELS, ANTHROPIC_MODELS, and QWEN_MODELS can add or replace visible model options.
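A local setup might look like the sketch below. The key values are placeholders, and the list format for the *_MODELS variables (comma-separated model names) is an assumption, not documented above.

```shell
# Sketch of a bring-your-own-key environment (values are placeholders).
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export OPENAI_BASE_URL="https://my-proxy.example/v1"   # optional: route via a proxy
# Assumed comma-separated format for overriding the visible model list:
export MOO_LLM_MODELS="claude-sonnet-4,gpt-4.1-mini"
```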
how it runs
build.rs builds the TypeScript harness and Solid UI, then embeds the generated assets into the Rust binary.
moo run, moo eval, moo serve, moo dump, and moo snapshot all route through the same harness/runtime shape.
A single WebSocket endpoint, /api/ws, carries both broadcast events and request/response RPC.
process-compose can watch the harness, restart the Rust backend, and run the Vite frontend together.
why this shape
The useful part is not that it is local. Local state is mostly a limitation: no built-in sync, sharing, policy, audit system, or managed deployment. The useful part is that the moving pieces are few and close to the code you can inspect.
limitations, not virtues
Filesystem, process, HTTP, and environment access are real host operations.
Tool calls do not pause for built-in approval prompts before execution.
Moo is a small personal harness, not an enterprise system with policy workflows, admin controls, compliance surfaces, or managed access reviews.
SQLite state is local to the store path unless you deliberately copy, back up, or replicate it.