Data API
The Data API is how external tools push data into Querri and read it back out. Use it to ingest from Zapier, run nightly syncs from your warehouse, power read-only BI dashboards, or build custom pipelines in any language.
This page is the single reference an engineer needs to build any Data API integration. There are three ways to call it — pick whichever fits your stack.
Three ways to call the API
| Path | When to use it | What you get |
|---|---|---|
| Python SDK | Anything written in Python | Typed clients (sync + async), auto-pagination, retries, env-var auth |
| querri CLI | Cron jobs, CI scripts, ad-hoc operator commands, shell pipelines | One command per operation, JSON output, OAuth or env-var auth |
| Raw HTTP | Zapier, Node.js, Go, Ruby, anything non-Python | Full control, no extra dependency |
The SDK and CLI ship in the same querri package on PyPI. If you’re in Python, use the SDK — every example below has an SDK form. If you’re not, the curl examples are exactly what to translate.
Quick start
Python SDK
```shell
pip install querri
export QUERRI_API_KEY=qk_your_key_here
export QUERRI_ORG_ID=your_org_id
```

```python
from querri import Querri

client = Querri()  # reads QUERRI_API_KEY and QUERRI_ORG_ID

source = client.sources.create_data_source(
    name="Zapier Leads",
    rows=[
        {"name": "Alice", "email": "alice@example.com", "score": 85},
        {"name": "Bob", "email": "bob@example.com", "score": 92},
    ],
)
print(source.id, source.row_count)
```

For async, swap Querri for AsyncQuerri and await the calls.
querri CLI

```shell
pip install querri
querri auth login   # OAuth, or set QUERRI_API_KEY + QUERRI_ORG_ID

echo '[{"name":"Alice","email":"a@example.com"}]' \
  | querri source new --name "Zapier Leads"

querri source list
querri source data <source_id> --page-size 100
```

Raw HTTP
```shell
curl -X POST https://app.querri.com/api/v1/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Zapier Leads",
    "rows": [{"name": "Alice", "email": "alice@example.com", "score": 85}]
  }'
```

API basics
Base URL
Section titled “Base URL”All public API endpoints live under:
```
https://app.querri.com/api/v1
```

Authentication
The public API accepts three authentication methods. For server-to-server integrations, which covers most uses of this API, use API keys.
1. API keys (qk_) — the right choice for integrations
Send two headers on every request:
```
Authorization: Bearer qk_your_secret_here
X-Tenant-ID: your_org_id
Content-Type: application/json   # for POST/PUT
```

API keys carry scopes (e.g. data:read, data:write), an optional bound user (for row-level security), and an optional source scope that restricts which sources the key can touch.
The Python SDK and CLI handle both headers for you when QUERRI_API_KEY and QUERRI_ORG_ID are set, or when you pass them to Querri(api_key=..., org_id=...).
Create keys at Settings → API Keys (/settings/api) — see API Keys for the full lifecycle.
2. JWT bearer (user-context calls)
If your backend is forwarding a signed-in user’s JWT (issued by Querri’s auth/SSO):
```
Authorization: Bearer eyJhbGc...
```

The org_id is read from the token, so X-Tenant-ID is not needed. Permissions follow the user’s role rather than scopes. Use this when an action needs to run as a specific user rather than as a service account.
3. Cookie + CSRF (browser apps)
Browser clients already logged into Querri can authenticate via the access_token cookie; mutations require an X-CSRF-Token header. Most integrations don’t need this — see Authentication for the cases where it applies.
Scopes used by the Data API
| Scope | What it allows |
|---|---|
| data:read | List sources, get schema, read paginated data, run SQL queries |
| data:write | Create, append, replace, and delete data sources |
A key with only data:read cannot mutate sources. A key with only data:write cannot read data back — give it both if you want a Zap to verify what it wrote.
Other parts of the public API (projects, dashboards, files, etc.) have their own scopes — see API Reference.
Rate limits
60 requests/minute per key by default; the limit is configurable per key at creation. Requests that exceed the limit return 429.
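A client that respects the 429 signal can be sketched with the standard library alone. This is a minimal sketch: get_with_retry and backoff_delay are hypothetical helper names, not part of the querri package, and it assumes the server may send a Retry-After header.

```python
import json
import time
import urllib.error
import urllib.request

BASE = "https://app.querri.com/api/v1"

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff schedule: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))

def get_with_retry(path: str, headers: dict, max_attempts: int = 5) -> dict:
    """GET a Data API path, retrying on HTTP 429 with exponential backoff."""
    for attempt in range(max_attempts):
        req = urllib.request.Request(f"{BASE}{path}", headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            # Honor Retry-After if present, otherwise back off exponentially.
            delay = float(err.headers.get("Retry-After") or backoff_delay(attempt))
            time.sleep(delay)
    raise RuntimeError(f"gave up after {max_attempts} attempts (rate limited)")
```

The SDK already retries for you; this pattern is only needed for raw-HTTP callers.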
Endpoint reference
All paths are relative to https://app.querri.com/api/v1.
| Operation | Method | Path | Scope |
|---|---|---|---|
| List sources | GET | /sources | data:read |
| Get source schema | GET | /sources/{id} | data:read |
| Read paginated data | GET | /sources/{id}/data | data:read |
| Run SQL query | POST | /sources/{id}/query | data:read |
| Create source | POST | /sources | data:write |
| Append rows | POST | /sources/{id}/rows | data:write |
| Replace data | PUT | /sources/{id}/data | data:write |
| Delete source | DELETE | /sources/{id} | data:write |
Reading data
List sources
```python
# Python SDK
sources = client.sources.list()
for s in sources:
    print(s["id"], s["name"], s.get("row_count"))
```

```shell
# CLI
querri source list
querri source list --json   # machine-readable
```

```shell
# HTTP
curl https://app.querri.com/api/v1/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

The HTTP response wraps the list in a data field:
```json
{
  "data": [
    { "id": "src_a1b2c3d4", "name": "Zapier Leads", "row_count": 1000 }
  ]
}
```

Get source schema
```python
schema = client.sources.get(source_id)
```

```shell
querri source describe <source_id>
```

```shell
curl https://app.querri.com/api/v1/sources/{source_id} \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Returns column names, inferred types, and row count.
Read paginated data
```python
page = client.sources.source_data(source_id, page=1, page_size=1000)
print(page.total_rows, len(page.data))
```

```shell
querri source data <source_id> --page 1 --page-size 1000
```

```shell
curl "https://app.querri.com/api/v1/sources/{source_id}/data?page=1&page_size=100" \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Response:

```json
{
  "data": [
    {"name": "Alice", "email": "alice@example.com", "score": 85},
    {"name": "Bob", "email": "bob@example.com", "score": 92}
  ],
  "total_rows": 2,
  "page": 1,
  "page_size": 100
}
```

Maximum page_size is 10,000.
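Sources larger than one page have to be walked page by page. A minimal sketch, assuming only what the examples above show (a page object exposing data and total_rows); iter_all_rows is a hypothetical helper, not an SDK method:

```python
def iter_all_rows(client, source_id: str, page_size: int = 10_000):
    """Yield every row of a source by walking the paginated read endpoint.

    Assumes the page object exposes `data` and `total_rows`, as shown above.
    """
    page_num = 1
    fetched = 0
    while True:
        page = client.sources.source_data(source_id, page=page_num, page_size=page_size)
        yield from page.data
        fetched += len(page.data)
        if not page.data or fetched >= page.total_rows:
            break
        page_num += 1
```

Using a generator keeps memory flat even for sources with millions of rows.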
Run a SQL query
The source_id goes in the URL path, not the request body. The body holds the query and pagination only.

```python
result = client.sources.query(
    sql="SELECT name, score FROM data WHERE score > 80",
    source_id="src_a1b2c3d4",
    page=1,
    page_size=100,
)
```

```shell
querri source query <source_id> --sql "SELECT name, score FROM data WHERE score > 80"
```

```shell
curl -X POST https://app.querri.com/api/v1/sources/{source_id}/query \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "sql": "SELECT name, score FROM data WHERE score > 80",
    "page": 1,
    "page_size": 100
  }'
```

Queries run in DuckDB. Row-level security is applied automatically based on the API key’s bound user or access policies. Only SELECT statements are allowed.
Writing data
Create a source
The SDK has two create* methods on client.sources — use create_data_source for inline JSON rows (this is the Data API path). The plain create method is for connector-based sources (Snowflake, BigQuery, etc.) and lives in the broader API.

```python
source = client.sources.create_data_source(
    name="Zapier Leads",
    rows=[
        {"name": "Alice", "email": "alice@example.com", "score": 85},
        {"name": "Bob", "email": "bob@example.com", "score": 92},
    ],
)
# source.id, source.name, source.columns, source.row_count, source.updated_at
```

```shell
echo '[{"name":"Alice","email":"a@example.com","score":85}]' \
  | querri source new --name "Zapier Leads"
```

```shell
curl -X POST https://app.querri.com/api/v1/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Zapier Leads",
    "rows": [
      {"name": "Alice", "email": "alice@example.com", "score": 85},
      {"name": "Bob", "email": "bob@example.com", "score": 92}
    ]
  }'
```

Response (201 Created):

```json
{
  "id": "src_a1b2c3d4",
  "name": "Zapier Leads",
  "columns": ["name", "email", "score"],
  "row_count": 2,
  "updated_at": "2026-03-08T15:30:00.000000"
}
```

Column types are inferred automatically — strings, numbers, dates, and booleans are all detected.
Append rows
Add rows to an existing source. Columns are matched by name: new columns get added, missing columns are filled with null. Use this for Zapier-style ingestion where rows arrive over time.

```python
result = client.sources.append_rows(
    source_id,
    rows=[
        {"name": "Charlie", "email": "charlie@example.com", "score": 78},
        {"name": "Diana", "email": "diana@example.com", "score": 95},
    ],
)
```

```shell
curl -X POST https://app.querri.com/api/v1/sources/{source_id}/rows \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"name": "Charlie", "email": "charlie@example.com", "score": 78}
    ]
  }'
```

Replace all data
Atomically swap a source’s contents for a new dataset. Use this for nightly full syncs from a system of record.

```python
result = client.sources.replace_data(source_id, rows=fresh_export)
```

```shell
curl -X PUT https://app.querri.com/api/v1/sources/{source_id}/data \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"name": "Eve", "email": "eve@example.com", "score": 100}
    ]
  }'
```

Delete a source
```python
client.sources.delete(source_id)
```

```shell
querri source delete <source_id>
```

```shell
curl -X DELETE https://app.querri.com/api/v1/sources/{source_id} \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Removes the source metadata, QDF metadata, and the underlying data file. The HTTP response is:

```json
{ "id": "src_a1b2c3d4", "deleted": true }
```

Limits
| Limit | Value |
|---|---|
| Rows per request | 100,000 |
| Payload size | 50 MB |
| Page size (reads) | 10,000 |
| SQL query length | 10,000 characters |
| Rate limit | 60/min per key (configurable) |
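Exports larger than the 100,000-row cap have to be split across multiple requests. A sketch: chunked is a hypothetical helper, and the commented usage assumes the append_rows SDK method shown earlier.

```python
from typing import Iterator

MAX_ROWS_PER_REQUEST = 100_000  # from the table above

def chunked(rows: list, size: int = MAX_ROWS_PER_REQUEST) -> Iterator[list]:
    """Yield successive slices of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# Usage sketch with the SDK:
# for batch in chunked(huge_export, 50_000):   # stay well under the row cap
#     client.sources.append_rows(source_id, rows=batch)
```

Keep batches under the 50 MB payload limit too — wide rows may need a smaller chunk size than the row cap alone suggests.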
Row-level security
Read operations (data:read) enforce the API key’s row-level security automatically:

- If the key has a bound_user_id, RLS is evaluated as that user.
- If the key has access_policy_ids, those policies are applied.
- If neither is set, the key sees all rows.
Write operations (data:write) do not evaluate RLS — the key either has write access to the source or it doesn’t.
Source scope
API keys can be scoped to all sources or to an explicit list of source IDs.
If your key uses an explicit source scope, newly created sources are not automatically added to it. Either use a key with "mode": "all" source scope for integrations that create sources, or update the key’s scope after each create.
Setting up API keys for integrations
Create keys at Settings → API Keys (/settings/api) → Create Key.
Recommended configurations
| Integration | Name | Scopes | Source scope | Notes |
|---|---|---|---|---|
| Zapier (create + append) | Zapier - CRM Sync | data:read, data:write | All sources | Read used to verify writes |
| Zapier (append only) | Zapier - Lead Capture | data:write | All sources | Write-only is fine if you never read back |
| Nightly batch sync | Nightly Sync Bot | data:read, data:write | All sources | Read for verify, write for replace |
| BI tool / dashboard | Tableau Read-Only | data:read | All sources or explicit | Lock to specific sources where possible |
| Restricted integration | Partner API | data:read | Explicit: [src_abc, src_def] | Sees only the listed sources |
Key properties
- Bound user — sets the identity that RLS evaluates as on reads.
- Source scope — "All sources" (any source the key sees) or "Explicit" (a specific list of IDs).
- Expiration — default 90 days, max 1 year. Rotate before expiry.
- Rate limit — default 60/min per key; bump for high-volume integrations.
Security best practices
- Use the minimum scopes needed. Webhook-only ingestion needs data:write, not data:read.
- One key per integration. Don’t reuse a Zapier key in your BI tool.
- Use IP allowlists for server-to-server keys when source IPs are stable.
- Store the secret in a vault. The qk_ secret is shown once at creation.
Integration patterns
Python integration (SDK — recommended)
A full create → append → read → replace → delete lifecycle:

```python
from querri import Querri

client = Querri()  # QUERRI_API_KEY + QUERRI_ORG_ID from env

# 1. Create
source = client.sources.create_data_source(
    name="CRM Contacts",
    rows=[{"name": "Alice", "email": "alice@corp.com", "deal_stage": "qualified"}],
)

# 2. Append
client.sources.append_rows(source.id, rows=[
    {"name": "Bob", "email": "bob@corp.com", "deal_stage": "proposal"},
    {"name": "Carol", "email": "carol@corp.com", "deal_stage": "closed_won"},
])

# 3. Read back
page = client.sources.source_data(source.id, page=1, page_size=100)
print(f"{page.total_rows} contacts loaded")

# 4. Nightly replace with the full export
client.sources.replace_data(source.id, rows=fresh_crm_export)

# 5. Tear down when decommissioned
client.sources.delete(source.id)
```

For async (good for FastAPI/asyncio backends), use AsyncQuerri and await each call.
Cron / CI sync (CLI)
The CLI is the shortest path for scheduled jobs and operator scripts:

```shell
# 1. Authenticate once (OAuth, persists to ~/.querri/tokens.json)
querri auth login

# Or in CI, export env vars
export QUERRI_API_KEY=qk_...
export QUERRI_ORG_ID=org_...

# 2. Create a dated snapshot source from a JSON dump
cat fresh_export.json \
  | querri source new --name "Nightly CRM Snapshot $(date +%F)"

# 3. Or query an existing source
querri source query <source_id> \
  --sql "SELECT region, COUNT(*) FROM data GROUP BY region" \
  --json
```

The CLI exposes list, get, describe, data, query, ask, new, update, delete, sync, and connectors. For append/replace, drop into the SDK or hit HTTP directly.
Zapier — append on new record
The most common Zapier pattern: when a new record appears in another app, append it to a Querri source.
One-time setup: create the source via the SDK or curl and save the returned id.
The Zap:

1. Trigger — your source app (HubSpot, Salesforce, Sheets, etc.).
2. Action — Webhooks by Zapier → Custom Request.
3. Configure:
   - Method: POST
   - URL: https://app.querri.com/api/v1/sources/{source_id}/rows
   - Headers:
     - Authorization: Bearer qk_your_key_here
     - X-Tenant-ID: your_org_id
     - Content-Type: application/json
   - Body:

     ```json
     {
       "rows": [{
         "name": "{{name}}",
         "email": "{{email}}",
         "company": "{{company}}",
         "created_at": "{{created_date}}"
       }]
     }
     ```

4. Test with one record before turning the Zap on.
Node.js (raw HTTP)
```javascript
const BASE = "https://app.querri.com/api/v1";
const headers = {
  Authorization: `Bearer ${process.env.QUERRI_API_KEY}`,
  "X-Tenant-ID": process.env.QUERRI_ORG_ID,
  "Content-Type": "application/json",
};

// Create a source
const create = await fetch(`${BASE}/sources`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "CRM Contacts",
    rows: [{ name: "Alice", email: "alice@corp.com" }],
  }),
});
const { id } = await create.json();

// Append rows
await fetch(`${BASE}/sources/${id}/rows`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    rows: [{ name: "Bob", email: "bob@corp.com" }],
  }),
});

// Read back
const read = await fetch(
  `${BASE}/sources/${id}/data?page=1&page_size=100`,
  { headers },
);
const { data, total_rows } = await read.json();
```

The same shape works in Go, Ruby, Java, etc. — just replace fetch with the language’s HTTP client.
Read-only BI integration
Create a key with only data:read scope and an optional explicit source scope, then iterate sources:

```python
from querri import Querri

client = Querri()  # qk_readonly_key

for s in client.sources.list():
    print(f"{s['name']} ({s['id']})")
    page = client.sources.source_data(s["id"], page=1, page_size=1000)

    rows = list(page.data)
    while len(rows) < page.total_rows:
        page = client.sources.source_data(
            s["id"],
            page=page.page + 1,
            page_size=1000,
        )
        rows.extend(page.data)

    print(f"  Loaded {len(rows)} / {page.total_rows} rows")
```

Error codes
| Code | HTTP | When it fires |
|---|---|---|
| source_not_found | 404 | Source ID does not exist |
| no_data | 400/404 | Source has no data (append requires existing data) |
| source_not_in_scope | 403 | The key’s source scope does not include this source |
| insufficient_scope | 403 | The key is missing the required scope |
| too_many_rows | 400 | Request exceeds the 100,000-row limit |
| empty_data | 400 | rows array is empty or has no columns |
| payload_too_large | 413 | Request body exceeds 50 MB |
| invalid_sql | 400 | SQL contains a non-SELECT statement |
| query_failed | 400 | SQL parsed but failed to execute |
The Python SDK maps these to typed exceptions — NotFoundError, PermissionError, ValidationError, RateLimitError, etc. See querri._exceptions for the full hierarchy.
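For raw-HTTP callers without the SDK’s typed exceptions, the codes above can drive a simple retry/split/fail decision. A sketch only: classify_error is a hypothetical helper, and it assumes the error body carries the code in a code field.

```python
import json

SPLITTABLE = {"too_many_rows", "payload_too_large"}  # shrink the batch, retry
RETRYABLE = {"query_failed"}                          # possibly transient

def classify_error(status: int, body: str) -> str:
    """Map an error response to an action: 'retry', 'split', or 'fail'."""
    if status == 429:                  # rate limited: back off and retry
        return "retry"
    try:
        code = json.loads(body).get("code", "")
    except ValueError:
        return "fail"
    if code in SPLITTABLE:
        return "split"
    if code in RETRYABLE:
        return "retry"
    return "fail"                      # scope/404/validation: fix the request
```

Scope and not-found errors are deliberately non-retryable — retrying them only burns rate limit.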
Next steps
- Python SDK — pip install querri — full sync + async clients with typed responses.
- Querri CLI — same package; querri auth login and you’re scripting in seconds.
- Authentication — full details on API keys, JWT, and cookie auth.
- API Keys — creating, scoping, rotating, and revoking keys.
- API Reference — complete endpoint listing across the public API (projects, dashboards, files, policies, etc.).