# datarooms
List, create, configure, and read analytics for datarooms.
A dataroom groups documents under one access boundary. The CLI now covers the full lifecycle: create, configure, audit, read analytics, and inspect contents.
```shell
$ papermark datarooms create --name "Acme — Series B" --description "Q2 2026 raise"
ID:   dr_K8mN2pQr
NAME: Acme — Series B
```

## list

```shell
papermark datarooms list [--limit <n>] [--cursor <id>] [--query <substring>]
```

| Flag | Default | Effect |
|---|---|---|
| `-l, --limit <n>` | 25 | Page size, 1–100 |
| `-c, --cursor <id>` | — | Continuation cursor from a previous response's `meta.next_cursor` |
| `-q, --query <substring>` | — | Filter by name substring |
| `--json` | auto when piped | Machine-readable output |
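The cursor flags compose into a walk-all-pages loop. A sketch, assuming `jq` is installed and that the `--json` envelope looks like `{ "data": [...], "meta": { "next_cursor": ... } }` (as the `--cursor` description implies):

```shell
# Sketch: follow meta.next_cursor until it runs out, printing every dataroom ID.
# The envelope shape is an assumption, not documented output.
list_all_datarooms() {
  cursor=""
  while :; do
    if [ -n "$cursor" ]; then
      page=$(papermark datarooms list --json --cursor "$cursor")
    else
      page=$(papermark datarooms list --json)
    fi
    printf '%s\n' "$page" | jq -r '.data[].id'
    cursor=$(printf '%s\n' "$page" | jq -r '.meta.next_cursor // empty')
    [ -n "$cursor" ] || break
  done
}
```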
```shell
$ papermark datarooms list
ID           NAME             UPDATED
dr_K8mN2pQr  Acme — Series B  2 days ago
dr_aBc456    Brand-X DD       1 month ago
```

## get

```shell
papermark datarooms get <id>
```

Fetches one dataroom by ID. Includes settings (`conversations`, `agents`, `bulkDownload`) plus document and link counts.
## create

```shell
papermark datarooms create \
  --name <name> \
  [--internal-name <slug>] \
  [--description <text>]
```

| Flag | Default | Effect |
|---|---|---|
| `-n, --name <name>` | required | Display name shown to viewers |
| `--internal-name <slug>` | auto | Internal slug for the dashboard URL; lowercase, hyphenated |
| `-d, --description <text>` | — | Free-text description shown on the dataroom landing |
```shell
papermark datarooms create \
  --name "Acme — Series B" \
  --internal-name "acme-series-b" \
  --description "Q2 2026 fundraise materials"
```

The dataroom starts empty. Add documents next — see Adding documents.

## update
```shell
papermark datarooms update <id> \
  [--name <name>] \
  [--internal-name <slug>] \
  [--description <text>] \
  [--conversations on|off] \
  [--agents on|off] \
  [--bulk-download on|off]
```

Patches dataroom settings. Pass only the flags you want to change — everything else stays as it was.
| Flag | Effect |
|---|---|
| `--conversations on\|off` | Enable in-dataroom conversations between you and viewers |
| `--agents on\|off` | Enable AI agents that can answer viewer questions about the documents |
| `--bulk-download on\|off` | Allow viewers to download every document as a zip |
```shell
# Open up bulk download for the partner round
papermark datarooms update dr_K8mN2pQr --bulk-download on
```

## links

```shell
papermark datarooms links <id> [--limit <n>] [--cursor <id>]
```

Lists every share link pointing at this dataroom. Equivalent to `papermark links list --dataroom <id>` — kept under `datarooms` for ergonomic grouping.
```shell
papermark datarooms links dr_K8mN2pQr --json | jq '.data[] | {name, viewCount}'
```

Mint a new dataroom-bound link with `papermark links create --dataroom <id>`.

## viewers
```shell
papermark datarooms viewers <id> [--limit <n>] [--cursor <id>] [--email <email>]
```

Lists persistent visitors (one row per email) tied to this dataroom. Different from per-link views — a viewer who hits two links in the same dataroom shows up once here, but twice in `papermark views list`.
| Flag | Effect |
|---|---|
| `--email <email>` | Filter to one email address |
```shell
$ papermark datarooms viewers dr_K8mN2pQr --email alice@acme.com
ID             EMAIL           FIRST SEEN  LAST SEEN
viewer_qRsT01  alice@acme.com  3 days ago  2 hours ago
```

## stats

Aliases: `analytics`.

```shell
papermark datarooms stats <id> [--since <unix-ms>] [--until <unix-ms>]
```

Aggregate analytics for the dataroom — total views, unique visitors, time spent, and page-level engagement across every document inside.
| Flag | Default | Effect |
|---|---|---|
| `--since <unix-ms>` | dataroom creation | Lower bound, Unix timestamp in milliseconds |
| `--until <unix-ms>` | now | Upper bound, Unix timestamp in milliseconds |
The unusual unix-ms format exists because `stats` is backed by a Tinybird query and uses the wire format the analytics service prefers; the rest of the API takes ISO 8601.

`stats` is rate-limited more strictly than the rest of the surface (its per-token cap is lower than the default 60 RPM). Cache the response if you're polling.
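A unix-ms bound can also be computed without node, with plain `date` arithmetic (a sketch; assumes a `date` that supports `+%s`, which GNU and BSD both do):

```shell
# Unix-ms timestamp for 7 days ago: seconds since epoch, minus 7 days, times 1000
SINCE=$(( ( $(date +%s) - 7 * 24 * 60 * 60 ) * 1000 ))
echo "$SINCE"
```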
```shell
# Last 7 days
SINCE=$(node -e "console.log(Date.now() - 7*24*60*60*1000)")
papermark datarooms stats dr_K8mN2pQr --since "$SINCE"
```

## documents

```shell
papermark datarooms documents <id> [--limit <n>] [--cursor <id>] [--folder <id>]
```

Lists documents inside the dataroom, with an optional folder filter.
| Flag | Effect |
|---|---|
| `--folder <id>` | Limit to one folder (datarooms have an internal folder tree) |
```shell
papermark datarooms documents dr_K8mN2pQr --json \
  | jq '.data[] | {name, folder: .folderId}'
```

## Adding documents

There's no `papermark datarooms add-document` yet. Today's pattern is two steps: upload, then attach via the API.
```shell
# Upload to your team
DOC_ID=$(papermark documents upload ./term-sheet.pdf --json | jq -r '.data.id')

# Attach to the dataroom
curl -sX POST "https://api.papermark.com/v1/datarooms/$DR_ID/documents" \
  -H "Authorization: Bearer $PAPERMARK_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"documentId\":\"$DOC_ID\"}"
```

Bulk variant:
```shell
for f in ~/acme/*.pdf; do
  DOC_ID=$(papermark documents upload "$f" --json | jq -r '.data.id')
  curl -sX POST "https://api.papermark.com/v1/datarooms/$DR_ID/documents" \
    -H "Authorization: Bearer $PAPERMARK_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"documentId\":\"$DOC_ID\"}"
done
```

A `papermark datarooms add-document` (and bulk-add) wrapper is on the roadmap.
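Until then, the two-step pattern can live in a small shell function. This is a sketch, not an official command: `dataroom_add_document` is a hypothetical name, and it assumes `$PAPERMARK_TOKEN` is set and `jq` is installed, same as the examples above.

```shell
# Hypothetical wrapper for the upload-then-attach pattern above.
# Assumes $PAPERMARK_TOKEN and jq; curl -f makes HTTP errors fail the call.
dataroom_add_document() {
  dr_id="$1"
  file="$2"
  doc_id=$(papermark documents upload "$file" --json | jq -r '.data.id') || return 1
  curl -fsX POST "https://api.papermark.com/v1/datarooms/$dr_id/documents" \
    -H "Authorization: Bearer $PAPERMARK_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"documentId\":\"$doc_id\"}"
}
```

Usage: `dataroom_add_document dr_K8mN2pQr ./term-sheet.pdf`.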
Deleting
There's no `papermark datarooms delete` either — by design, since deletion cascades to every link and is unrecoverable. Use the API directly when you really mean it:
```shell
curl -X DELETE "https://api.papermark.com/v1/datarooms/$DR_ID" \
  -H "Authorization: Bearer $PAPERMARK_TOKEN"
```

## Required scopes
| Operation | Scope |
|---|---|
| `list`, `get`, `documents`, `viewers` | `datarooms.read` |
| `create`, `update` | `datarooms.write` |
| `links` | `datarooms.read` + `links.read` |
| `stats` | `datarooms.read` + `analytics.read` |
| Attach a document (API) | `datarooms.write` + `documents.read` |
## Related

- Guide: Create a dataroom with documents — the worked end-to-end recipe
- `links create --dataroom` — mint a dataroom-bound share link
- MCP: Datarooms tools — the same surface for agent-driven workflows