Six reusable element types
Personas, skills, templates, agents, memories, and ensembles are modular building blocks for AI customization across behavior, capabilities, structure, execution, context, and coordinated stacks.
DollhouseMCP turns prompts into modular building blocks you can create, activate, combine, version, and share: personas, skills, templates, agents, memories, and ensembles.
DollhouseMCP is part of Dollhouse Research, the broader platform for user-owned AI customization and control.
Quick Install
Copy this command, paste it into a terminal, and run it:

npx @dollhousemcp/mcp-server@latest --web

The guided flow opens a local web server and walks you through installing DollhouseMCP into Claude Desktop, Claude Code, Cursor, VS Code, Codex, Gemini CLI, and other popular MCP clients.
Prefer to configure things yourself? See the full setup guide.
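If you'd rather wire the server up by hand, most MCP clients accept a JSON entry along these lines. The file name and location vary by client (Claude Desktop, for example, uses claude_desktop_config.json), and the "dollhousemcp" key is just a label you choose; treat this as a sketch and the setup guide as authoritative.

```json
{
  "mcpServers": {
    "dollhousemcp": {
      "command": "npx",
      "args": ["@dollhousemcp/mcp-server@latest"]
    }
  }
}
```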
The 2.0 platform is more than personas. It ships a complete element model, a local portfolio, a public collection, MCP-AQL for structured operations, and a safety-aware execution layer for real agent workflows.
The server ships with starter personas, skills, templates, agents, memories, and ensembles, including the Dollhouse expert suite and a session monitor agent.
MCP-AQL groups operations into Create, Read, Update, Delete, and Execute, with introspection built in so an LLM can discover what the server supports at runtime.
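To illustrate the five-verb grouping, here is a hypothetical TypeScript sketch. The type names, fields, and wire format are invented for this example; they are not MCP-AQL's actual syntax.

```typescript
// Hypothetical sketch of MCP-AQL's five operation groups; not the real wire format.
type AqlVerb = "create" | "read" | "update" | "delete" | "execute";

interface AqlRequest {
  verb: AqlVerb; // one of the five semantic operation groups
  elementType: "persona" | "skill" | "template" | "agent" | "memory" | "ensemble";
  name?: string; // target element, when the verb needs one
}

// Introspection: the server can describe what it supports at runtime,
// so an LLM can discover available operations instead of guessing.
function describeCapabilities(supported: AqlVerb[]): string {
  return `This server supports: ${supported.join(", ")}`;
}

console.log(describeCapabilities(["create", "read", "execute"]));
```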
Personas, skills, and ensembles can act as security principals, changing what the AI can do, what needs approval, and what gets denied outright.
Your elements live in a readable local portfolio that works offline, can sync to GitHub, and can submit community-ready content back into the collection.
2.0 includes collection install and submission, portfolio sync, browser opening, enhanced search, skill conversion, execution lifecycle control, and handoff/resume flows.
Agents receive continue, pause, or escalate guidance at every step, while high-risk operations such as destructive commands and external calls can be hard-blocked.
The software is open source under AGPL-3.0-or-later, with commercial licensing available for teams that need proprietary, hosted, or enterprise procurement terms.
Create or edit a Dollhouse element in natural language. Store it in your local portfolio. Activate it when you need it. Combine it with other elements in ensembles. Install community-built elements from the Collection when you want a faster starting point.
Build elements from chat, install them from the Collection, or convert compatible skills into first-class Dollhouse elements.
The portfolio lives locally, works offline, and can be backed up or synced to GitHub without giving up file-level ownership.
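Because the portfolio is readable files on disk, ordinary tools can inspect and back it up. The path below assumes a default location of ~/.dollhouse/portfolio, which may differ on your install; this is a sketch, not the official backup procedure.

```shell
# Assumed default portfolio path; adjust to match your install.
PORTFOLIO="$HOME/.dollhouse/portfolio"
mkdir -p "$PORTFOLIO"   # ensure the path exists for this demo

# List the portfolio, then snapshot it into a tarball with standard tools.
ls "$PORTFOLIO"
tar -czf /tmp/dollhouse-portfolio-backup.tar.gz \
  -C "$(dirname "$PORTFOLIO")" "$(basename "$PORTFOLIO")"
echo "backup written to /tmp/dollhouse-portfolio-backup.tar.gz"
```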
Mix personas, skills, templates, memories, and ensembles so your AI has reusable behavior instead of re-explaining preferences every session.
Execution flows back through Gatekeeper, autonomy evaluation, and danger-zone checks so higher agency does not mean less control.
Personas shape behavior, skills add capabilities, templates structure outputs, agents pursue goals, memories persist context, and ensembles package multiple elements together.
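Since elements are stored as readable files, a persona might look roughly like this: metadata up front, instructions below. The field names here are illustrative, not a schema reference.

```markdown
---
name: code-reviewer
description: Reviews diffs for correctness and style before merge
version: 1.0.0
---

You are a meticulous code reviewer. Focus on correctness first,
then readability, and flag any destructive operation for human approval.
```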
DollhouseMCP works from a local portfolio, so the things you build stay readable, portable, and under your control before you ever connect GitHub or share publicly.
The Dollhouse Collection is the public browse-and-install path for community elements, with install and submission flows built into the platform instead of bolted on later.
Runtime execution stays inside an approval-aware loop with server-side checks, step recording, and a clear path for human input when autonomy should pause.
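To make the approval-aware loop concrete, here is a hypothetical TypeScript sketch. The names and logic are invented for illustration and do not reflect DollhouseMCP's actual Gatekeeper implementation.

```typescript
// Hypothetical sketch of an approval-aware step loop. Names are invented.
type Guidance = "continue" | "pause" | "escalate";

interface Step {
  action: string;
  dangerous: boolean; // e.g. a destructive command or an external call
}

const stepLog: string[] = []; // step recording: every decision is kept for review

function evaluate(step: Step): Guidance {
  // Dangerous steps are routed to a human instead of executing silently;
  // "pause" would cover cases where the agent needs more context.
  const guidance: Guidance = step.dangerous ? "escalate" : "continue";
  stepLog.push(`${step.action} -> ${guidance}`);
  return guidance;
}

evaluate({ action: "format report", dangerous: false }); // continue
evaluate({ action: "rm -rf ./build", dangerous: true }); // escalate
```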
Five semantic endpoints reduce tool sprawl, improve discoverability, and let the server describe itself at runtime through introspection.
Permissions are enforced server-side, so active Dollhouse elements can shape what the AI can do, not just how it sounds.
Elements stay local-first with backup, sync, portfolio browser, and collection workflows instead of disappearing into one-off chat history.
Agents, memories, and ensembles turn isolated prompts into reusable workflows that can be inspected, refined, handed off, resumed, and run again.
The open-source core stays under AGPL-3.0-or-later, while commercial licensing is available for proprietary or hosted deployment models.
Start with the GitHub repository or the NPM package, then explore the Collection for reusable elements and the blog for deeper technical context.