# MCP

Papermark's Model Context Protocol server, installable into any MCP client.
Model Context Protocol lets an AI agent discover and call tools running outside its sandbox. The Papermark MCP server exposes 18 tools that wrap the full Papermark API — datarooms, documents, links, visitors, analytics — so an agent can build and run a dataroom, share documents, and read view analytics on your behalf.
Datarooms get more attention here than in the CLI: an agent can create and populate a full dataroom with five tools (`create_dataroom`, `upload_document`, `list_dataroom_documents`, `create_link`, `list_link_views`), all of which are wired up today.
The server comes in two transports:
| Transport | Package | Use when |
|---|---|---|
| Stdio | @papermark/mcp-server on npm | The MCP client runs on the user's machine (Claude Desktop, Claude Code) |
| HTTP | Hosted at https://mcp.papermark.com | The MCP client runs in a browser or remote server (claude.ai Connectors, ChatGPT Apps) |
Both transports expose the same 18 tools with the same arguments and the same return shapes. The difference is purely how the client connects.
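For the HTTP transport, a client that supports remote servers only needs the URL. A minimal sketch, assuming a Claude Code-style `.mcp.json` (the exact keys vary by client, so check its documentation):

```json
{
  "mcpServers": {
    "papermark": {
      "type": "http",
      "url": "https://mcp.papermark.com"
    }
  }
}
```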
## Quick start (stdio)
The fastest path: Claude Desktop, npx, no install.
```json
{
  "mcpServers": {
    "papermark": {
      "command": "npx",
      "args": ["-y", "@papermark/mcp-server"],
      "env": {
        "PAPERMARK_TOKEN": "pm_live_…"
      }
    }
  }
}
```

Drop that into `~/Library/Application Support/Claude/claude_desktop_config.json`, restart Claude Desktop, and ask "list my papermark documents." Per-client walkthroughs are on the Install page.
## What the agent can do
Once wired up, the agent has 18 tools to choose from. The full catalogue is on Tools; the headline workflows:
### Datarooms
- Spin up a dataroom from scratch. "Create a dataroom called 'Acme — Series B', upload every PDF in ~/acme/, give me one password-protected link." → `create_dataroom` → `upload_document` (per file) → `create_link` with `dataroomId`.
- Audit a dataroom. "What's in the Acme dataroom and who has viewed which document this week?" → `list_dataroom_documents` → `list_links` filtered by `dataroomId` → `list_link_views` per link, filtered by date.
- Per-recipient access. "Make a separate link for each Acme contact pointing at the same dataroom." → `create_link` per email, all referencing the same `dataroomId`. Per-link analytics then tell you who looked at what.
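The first workflow above can be sketched as the ordered tool-call sequence an MCP client would issue. Tool names come from this page; argument keys other than `dataroomId` (`name`, `file`, `password`) are illustrative guesses, not the server's actual input schema:

```python
def dataroom_setup_calls(name, pdf_paths, password):
    """Build the MCP tool-call sequence for 'spin up a dataroom from scratch'.

    Returns (tool_name, arguments) pairs. In a real run, the dataroom ID
    returned by create_dataroom would be threaded into the later calls.
    """
    calls = [("create_dataroom", {"name": name})]
    # upload_document once per file, targeting the new dataroom
    for path in pdf_paths:
        calls.append(("upload_document", {"file": path, "dataroomId": "<from create_dataroom>"}))
    # one password-protected link covering the whole dataroom
    calls.append(("create_link", {"dataroomId": "<from create_dataroom>", "password": password}))
    return calls

for tool, args in dataroom_setup_calls("Acme — Series B", ["deck.pdf", "financials.pdf"], "s3cret"):
    print(tool, args)
```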
### Documents
- Find a document. "Find the Q4 pitch deck" → `search_documents` → returns IDs and metadata.
- Share a document. "Create a password-protected link to the Q4 deck that expires in two weeks" → `create_link` with `documentId`.
- Read analytics. "How long did people spend on the pitch deck?" → `get_document_analytics`.
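The "expires in two weeks" part of the share request just means computing a timestamp for the agent to pass to `create_link`. A sketch, assuming illustrative field names (`documentId`, `password`, `expiresAt`) rather than the tool's actual schema:

```python
from datetime import datetime, timedelta, timezone

def share_args(document_id, password, days=14):
    """Build hypothetical create_link arguments for a link expiring in `days` days."""
    expires = datetime.now(timezone.utc) + timedelta(days=days)
    return {
        "documentId": document_id,
        "password": password,
        "expiresAt": expires.isoformat(),  # ISO 8601 timestamp
    }

print(share_args("doc_123", "s3cret"))
```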
### Visitors and views
- Who viewed what. "Who from Acme has viewed our materials this month?" → `list_visitors` → `list_visitor_views` → filter by email domain.
- Per-page engagement. "Where did people drop off in the deck?" → `get_view_analytics` → page-by-page durations.
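The "filter by email domain" step happens client-side, on the records the tools return. A sketch of that filtering, assuming illustrative view records with `email` and `viewedAt` fields (not the tools' exact return shape):

```python
from datetime import datetime

def views_from_domain(views, domain, year, month):
    """Keep only views by visitors at `domain` during the given month."""
    out = []
    for v in views:
        ts = datetime.fromisoformat(v["viewedAt"])
        if v["email"].endswith("@" + domain) and (ts.year, ts.month) == (year, month):
            out.append(v)
    return out

views = [
    {"email": "ana@acme.com", "viewedAt": "2025-01-14T09:30:00"},
    {"email": "bob@other.io", "viewedAt": "2025-01-15T10:00:00"},
    {"email": "cy@acme.com", "viewedAt": "2024-12-02T08:00:00"},
]
print(views_from_domain(views, "acme.com", 2025, 1))
```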
Destructive operations (delete tools) are gated by the API's scope system — your `PAPERMARK_TOKEN` only lets the agent do what its scopes allow. Use a token limited to read scopes (e.g. `documents.read`) if you want a read-only agent.
## Auth
Whichever transport you pick, the agent acts as you — there's no separate "MCP user" identity. Auth is just a Papermark API token:
- Stdio: `PAPERMARK_TOKEN` env var in the MCP server's environment (typically the `env` block of your client's MCP config).
- HTTP: OAuth 2.1 device flow, identical to the API. The client walks the user through the auth dance, then attaches the resulting bearer token to every request.
See Install for transport-specific setup or Authentication for token mechanics.
## When to use MCP vs. the CLI's Skill
Both are paths to letting an agent drive Papermark. How to pick:
- MCP if you want the same surface in a browser-hosted client (claude.ai, ChatGPT) or want every tool, including writes.
- CLI Skills if you're already running the CLI and want a curated, no-deletes-by-default subset that an agent can autodiscover.
Most users end up with both.
## Production-ready, alpha-versioned
The MCP server is at version 0.1.0 — every tool makes real,
authenticated API calls (no mocks, no stubs). The version reflects
"new package, expect breaking changes between minor versions during
alpha", not "incomplete implementation."