
Data API

The Data API is how external tools push data into Querri and read it back out. Use it to ingest from Zapier, run nightly syncs from your warehouse, power read-only BI dashboards, or build custom pipelines in any language.

This page is the single reference an engineer needs to build any Data API integration. There are three ways to call it — pick whichever fits your stack.

| Path | When to use it | What you get |
| --- | --- | --- |
| Python SDK | Anything written in Python | Typed clients (sync + async), auto-pagination, retries, env-var auth |
| `querri` CLI | Cron jobs, CI scripts, ad-hoc operator commands, shell pipelines | One command per operation, JSON output, OAuth or env-var auth |
| Raw HTTP | Zapier, Node.js, Go, Ruby, anything non-Python | Full control, no extra dependency |

The SDK and CLI ship in the same querri package on PyPI. If you’re in Python, use the SDK — every example below has an SDK form. If you’re not, the curl examples are exactly what to translate.

```sh
pip install querri
export QUERRI_API_KEY=qk_your_key_here
export QUERRI_ORG_ID=your_org_id
```

```python
from querri import Querri

client = Querri()  # reads QUERRI_API_KEY and QUERRI_ORG_ID

source = client.sources.create_data_source(
    name="Zapier Leads",
    rows=[
        {"name": "Alice", "email": "alice@example.com", "score": 85},
        {"name": "Bob", "email": "bob@example.com", "score": 92},
    ],
)
print(source.id, source.row_count)
```

For async, swap Querri for AsyncQuerri and await the calls.

```sh
pip install querri
querri auth login   # OAuth, or set QUERRI_API_KEY + QUERRI_ORG_ID

echo '[{"name":"Alice","email":"a@example.com"}]' \
  | querri source new --name "Zapier Leads"

querri source list
querri source data <source_id> --page-size 100
```
```sh
curl -X POST https://app.querri.com/api/v1/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Zapier Leads",
    "rows": [{"name": "Alice", "email": "alice@example.com", "score": 85}]
  }'
```

All public API endpoints live under:

https://app.querri.com/api/v1

The public API accepts three authentication methods. For server-to-server integrations (which is most of what you want this API for), use API keys.

1. API keys (qk_) — the right choice for integrations


Send two headers on every request:

```
Authorization: Bearer qk_your_secret_here
X-Tenant-ID: your_org_id
Content-Type: application/json   # for POST/PUT
```

API keys carry scopes (e.g. data:read, data:write), an optional bound user (for row-level security), and an optional source scope that restricts which sources the key can touch.

The Python SDK and CLI handle both headers for you when QUERRI_API_KEY and QUERRI_ORG_ID are set, or when you pass them to Querri(api_key=..., org_id=...).

Create keys at Settings → API Keys (/settings/api) — see API Keys for the full lifecycle.

If your backend is forwarding a signed-in user’s JWT (issued by Querri’s auth/SSO):

Authorization: Bearer eyJhbGc...

The org_id is read from the token, so X-Tenant-ID is not needed. Permissions follow the user’s role rather than scopes. Use this when an action needs to run as a specific user rather than as a service account.

Browser clients already logged into Querri can authenticate via the access_token cookie; mutations require an X-CSRF-Token header. Most integrations don’t need this — see Authentication for the cases where it applies.

| Scope | What it allows |
| --- | --- |
| `data:read` | List sources, get schema, read paginated data, run SQL queries |
| `data:write` | Create, append, replace, and delete data sources |

A key with only data:read cannot mutate sources. A key with only data:write cannot read data back — give it both if you want a Zap to verify what it wrote.

Other parts of the public API (projects, dashboards, files, etc.) have their own scopes — see API Reference.

60 requests/minute per key by default; the limit is configurable per-key at creation. Bursts above the limit return 429.
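If you call the raw HTTP API, it's worth handling those 429s with exponential backoff. The helper below is an illustrative sketch, not part of the `querri` package (the Python SDK already retries for you); `call` stands in for any function that performs one request and returns a `(status, body)` pair:

```python
import time

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` while it returns HTTP 429, doubling the wait each time."""
    status, body = call()
    for attempt in range(max_retries):
        if status != 429:
            break
        sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
        status, body = call()
    return status, body
```

The injected `sleep` makes the backoff schedule easy to test without actually waiting.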

All paths are relative to https://app.querri.com/api/v1.

| Operation | Method | Path | Scope |
| --- | --- | --- | --- |
| List sources | GET | `/sources` | `data:read` |
| Get source schema | GET | `/sources/{id}` | `data:read` |
| Read paginated data | GET | `/sources/{id}/data` | `data:read` |
| Run SQL query | POST | `/sources/{id}/query` | `data:read` |
| Create source | POST | `/sources` | `data:write` |
| Append rows | POST | `/sources/{id}/rows` | `data:write` |
| Replace data | PUT | `/sources/{id}/data` | `data:write` |
| Delete source | DELETE | `/sources/{id}` | `data:write` |
```python
# Python SDK
sources = client.sources.list()
for s in sources:
    print(s["id"], s["name"], s.get("row_count"))
```

```sh
# CLI
querri source list
querri source list --json   # machine-readable
```

```sh
# HTTP
curl https://app.querri.com/api/v1/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

The HTTP response wraps the list in a data field:

```json
{ "data": [ { "id": "src_a1b2c3d4", "name": "Zapier Leads", "row_count": 1000 } ] }
```

```python
schema = client.sources.get(source_id)
```

```sh
querri source describe <source_id>
```

```sh
curl https://app.querri.com/api/v1/sources/{source_id} \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Returns column names, inferred types, and row count.

```python
page = client.sources.source_data(source_id, page=1, page_size=1000)
print(page.total_rows, len(page.data))
```

```sh
querri source data <source_id> --page 1 --page-size 1000
```

```sh
curl "https://app.querri.com/api/v1/sources/{source_id}/data?page=1&page_size=100" \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Response:

```json
{
  "data": [
    {"name": "Alice", "email": "alice@example.com", "score": 85},
    {"name": "Bob", "email": "bob@example.com", "score": 92}
  ],
  "total_rows": 2,
  "page": 1,
  "page_size": 100
}
```

Maximum page_size is 10,000.
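To drain a source without hard-coding page math, a small generator can walk the response shape shown above until `total_rows` is exhausted. This is an illustrative sketch, not SDK code; `fetch_page(page, page_size)` is a stand-in for whatever client call you use:

```python
def iter_rows(fetch_page, page_size=1000):
    """Yield every row from a /sources/{id}/data-style paginated endpoint.

    `fetch_page(page, page_size)` must return a dict shaped like the
    response above: {"data": [...], "total_rows": N, "page": ..., ...}.
    """
    page, seen = 1, 0
    while True:
        resp = fetch_page(page, page_size)
        rows = resp["data"]
        if not rows:
            return
        yield from rows
        seen += len(rows)
        if seen >= resp["total_rows"]:
            return
        page += 1
```

The SDK's auto-pagination covers the same ground in Python; this pattern is mainly useful when translating to other languages.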

The source_id goes in the URL path, not the request body. The body holds the query and pagination only.

```python
result = client.sources.query(
    sql="SELECT name, score FROM data WHERE score > 80",
    source_id="src_a1b2c3d4",
    page=1,
    page_size=100,
)
```

```sh
querri source query <source_id> --sql "SELECT name, score FROM data WHERE score > 80"
```

```sh
curl -X POST https://app.querri.com/api/v1/sources/{source_id}/query \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "sql": "SELECT name, score FROM data WHERE score > 80",
    "page": 1,
    "page_size": 100
  }'
```

Queries run in DuckDB. Row-level security is applied automatically based on the API key’s bound user or access policies. Only SELECT statements are allowed.
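Because non-SELECT statements are rejected, a cheap client-side pre-flight check can fail fast before spending a request. This sketch is an approximation for convenience only; the server applies the authoritative check:

```python
def looks_like_select(sql: str) -> bool:
    """Rough guard: True only when the first SQL keyword is SELECT.

    Approximate by design (the server's parser is authoritative).
    """
    stripped = sql.strip()
    return bool(stripped) and stripped.split(None, 1)[0].upper() == "SELECT"
```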

The SDK has two create* methods on client.sources — use create_data_source for inline JSON rows (this is the Data API path). The plain create method is for connector-based sources (Snowflake, BigQuery, etc.) and lives in the broader API.

```python
source = client.sources.create_data_source(
    name="Zapier Leads",
    rows=[
        {"name": "Alice", "email": "alice@example.com", "score": 85},
        {"name": "Bob", "email": "bob@example.com", "score": 92},
    ],
)
# source.id, source.name, source.columns, source.row_count, source.updated_at
```

```sh
echo '[{"name":"Alice","email":"a@example.com","score":85}]' \
  | querri source new --name "Zapier Leads"
```

```sh
curl -X POST https://app.querri.com/api/v1/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Zapier Leads",
    "rows": [
      {"name": "Alice", "email": "alice@example.com", "score": 85},
      {"name": "Bob", "email": "bob@example.com", "score": 92}
    ]
  }'
```

Response (201 Created):

```json
{
  "id": "src_a1b2c3d4",
  "name": "Zapier Leads",
  "columns": ["name", "email", "score"],
  "row_count": 2,
  "updated_at": "2026-03-08T15:30:00.000000"
}
```

Column types are inferred automatically — strings, numbers, dates, and booleans are all detected.
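Inference happens server-side, but as a rough mental model it behaves along these lines. This sketch is illustrative only and skips date detection entirely:

```python
def rough_column_type(values):
    """Simplified mental model of column type inference.

    Illustrative only: the API's own server-side rules govern, and they
    also detect dates, which this sketch does not.
    """
    non_null = [v for v in values if v is not None]
    if not non_null:
        return "string"
    if all(isinstance(v, bool) for v in non_null):
        return "boolean"
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in non_null):
        return "number"
    return "string"
```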

Add rows to an existing source. Columns are matched by name: new columns get added, missing columns are filled with null. Use this for Zapier-style ingestion where rows arrive over time.

```python
result = client.sources.append_rows(
    source_id,
    rows=[
        {"name": "Charlie", "email": "charlie@example.com", "score": 78},
        {"name": "Diana", "email": "diana@example.com", "score": 95},
    ],
)
```

```sh
curl -X POST https://app.querri.com/api/v1/sources/{source_id}/rows \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"name": "Charlie", "email": "charlie@example.com", "score": 78}
    ]
  }'
```
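The column-matching rule can be previewed locally before you send anything. This sketch (not SDK code) mimics how a new column widens the schema and how missing values become null:

```python
def preview_append(existing_columns, new_rows):
    """Mimic append's column matching: union of names, gaps filled with None."""
    columns = list(existing_columns)
    for row in new_rows:
        for key in row:
            if key not in columns:
                columns.append(key)
    normalized = [{c: row.get(c) for c in columns} for row in new_rows]
    return columns, normalized
```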

Atomically swap a source’s contents for a new dataset. Use this for nightly full syncs from a system of record.

```python
result = client.sources.replace_data(source_id, rows=fresh_export)
```

```sh
curl -X PUT https://app.querri.com/api/v1/sources/{source_id}/data \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"name": "Eve", "email": "eve@example.com", "score": 100}
    ]
  }'
```
```python
client.sources.delete(source_id)
```

```sh
querri source delete <source_id>
```

```sh
curl -X DELETE https://app.querri.com/api/v1/sources/{source_id} \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Removes the source metadata, QDF metadata, and the underlying data file. The HTTP response is:

```json
{ "id": "src_a1b2c3d4", "deleted": true }
```
| Limit | Value |
| --- | --- |
| Rows per request | 100,000 |
| Payload size | 50 MB |
| Page size (reads) | 10,000 |
| SQL query length | 10,000 characters |
| Rate limit | 60/min per key (configurable) |
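Datasets larger than the per-request row limit have to be split across multiple append calls. A minimal batching sketch (row count only; checking the serialized 50 MB payload size is left out):

```python
def chunk_rows(rows, max_rows=100_000):
    """Yield row batches that each stay within the per-request row limit."""
    for start in range(0, len(rows), max_rows):
        yield rows[start:start + max_rows]
```

Each batch can then go through the append endpoint in turn.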

Read operations (data:read) enforce the API key’s row-level security automatically:

  • If the key has a bound_user_id, RLS is evaluated as that user.
  • If the key has access_policy_ids, those policies are applied.
  • If neither is set, the key sees all rows.

Write operations (data:write) do not evaluate RLS — the key either has write access to the source or it doesn’t.

API keys can be scoped to all sources or to an explicit list of source IDs.

If your key uses explicit scope, newly created sources are not automatically added to it. Either use a key with "mode": "all" source scope for integrations that create sources, or update the key’s scope after each create.

Create keys at Settings → API Keys (/settings/api) → Create Key.

| Integration | Name | Scopes | Source scope | Notes |
| --- | --- | --- | --- | --- |
| Zapier (create + append) | Zapier - CRM Sync | `data:read`, `data:write` | All sources | Read used to verify writes |
| Zapier (append only) | Zapier - Lead Capture | `data:write` | All sources | Write-only is fine if you never read back |
| Nightly batch sync | Nightly Sync Bot | `data:read`, `data:write` | All sources | Read for verify, write for replace |
| BI tool / dashboard | Tableau Read-Only | `data:read` | All sources or explicit | Lock to specific sources where possible |
| Restricted integration | Partner API | `data:read` | Explicit: `[src_abc, src_def]` | Sees only the listed sources |
  • Bound user — sets the identity that RLS evaluates as on reads.
  • Source scope — "All sources" (any source the key sees) or "Explicit" (a specific list of IDs).
  • Expiration — default 90 days, max 1 year. Rotate before expiry.
  • Rate limit — default 60/min per key; bump for high-volume integrations.
  1. Use the minimum scopes needed. Webhook-only ingestion needs data:write, not data:read.
  2. One key per integration. Don’t reuse a Zapier key in your BI tool.
  3. IP allowlists for server-to-server keys when source IPs are stable.
  4. Store the secret in a vault. The qk_ secret is shown once at creation.

A full create → append → read → replace → delete lifecycle:

```python
from querri import Querri

client = Querri()  # QUERRI_API_KEY + QUERRI_ORG_ID from env

# 1. Create
source = client.sources.create_data_source(
    name="CRM Contacts",
    rows=[{"name": "Alice", "email": "alice@corp.com", "deal_stage": "qualified"}],
)

# 2. Append
client.sources.append_rows(source.id, rows=[
    {"name": "Bob", "email": "bob@corp.com", "deal_stage": "proposal"},
    {"name": "Carol", "email": "carol@corp.com", "deal_stage": "closed_won"},
])

# 3. Read back
page = client.sources.source_data(source.id, page=1, page_size=100)
print(f"{page.total_rows} contacts loaded")

# 4. Nightly replace with the full export
client.sources.replace_data(source.id, rows=fresh_crm_export)

# 5. Tear down when decommissioned
client.sources.delete(source.id)
```

For async (good for FastAPI/asyncio backends), use AsyncQuerri and await each call.

The CLI is the shortest path for scheduled jobs and operator scripts:

```sh
# 1. Authenticate once (OAuth, persists to ~/.querri/tokens.json)
querri auth login

# Or in CI, export env vars
export QUERRI_API_KEY=qk_...
export QUERRI_ORG_ID=org_...

# 2. Create a dated snapshot source from a JSON dump
cat fresh_export.json \
  | querri source new --name "Nightly CRM Snapshot $(date +%F)"

# 3. Or query an existing source
querri source query <source_id> \
  --sql "SELECT region, COUNT(*) FROM data GROUP BY region" \
  --json
```

The CLI exposes list, get, describe, data, query, ask, new, update, delete, sync, and connectors. For append/replace, drop into the SDK or hit HTTP directly.

Most common Zapier pattern: when a new record appears in another app, append it to a Querri source.

One-time setup: create the source via the SDK or curl and save the returned id.

The Zap:

  1. Trigger — your source app (HubSpot, Salesforce, Sheets, etc.).

  2. Action — Webhooks by Zapier → Custom Request.

  3. Configure:

    • Method: POST
    • URL: https://app.querri.com/api/v1/sources/{source_id}/rows
    • Headers:
      Authorization: Bearer qk_your_key_here
      X-Tenant-ID: your_org_id
      Content-Type: application/json
    • Body:
      {
        "rows": [
          {
            "name": "{{name}}",
            "email": "{{email}}",
            "company": "{{company}}",
            "created_at": "{{created_date}}"
          }
        ]
      }
  4. Test with one record before turning the Zap on.

```javascript
const BASE = "https://app.querri.com/api/v1";

const headers = {
  Authorization: `Bearer ${process.env.QUERRI_API_KEY}`,
  "X-Tenant-ID": process.env.QUERRI_ORG_ID,
  "Content-Type": "application/json",
};

// Create a source
const create = await fetch(`${BASE}/sources`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "CRM Contacts",
    rows: [{ name: "Alice", email: "alice@corp.com" }],
  }),
});
const { id } = await create.json();

// Append rows
await fetch(`${BASE}/sources/${id}/rows`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    rows: [{ name: "Bob", email: "bob@corp.com" }],
  }),
});

// Read back
const read = await fetch(
  `${BASE}/sources/${id}/data?page=1&page_size=100`,
  { headers },
);
const { data, total_rows } = await read.json();
```

The same shape works in Go, Ruby, Java, etc. — just replace fetch with the language’s HTTP client.
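Even without the SDK, the two-header contract can be wrapped once in plain Python. A sketch using only the standard library; the placeholder key and org values are the same ones used throughout this page, and the final send step is shown as a comment:

```python
import json
import urllib.request

BASE = "https://app.querri.com/api/v1"

def build_request(path, method="GET", body=None,
                  api_key="qk_your_key_here", org_id="your_org_id"):
    """Build a stdlib Request carrying Authorization and X-Tenant-ID."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("X-Tenant-ID", org_id)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req  # send with urllib.request.urlopen(req)
```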

Create a key with only data:read scope and an optional explicit source scope, then iterate sources:

```python
from querri import Querri

client = Querri()  # qk_readonly_key

for s in client.sources.list():
    print(f"{s['name']} ({s['id']})")
    page = client.sources.source_data(s["id"], page=1, page_size=1000)
    rows = list(page.data)
    while len(rows) < page.total_rows:
        page = client.sources.source_data(
            s["id"], page=page.page + 1, page_size=1000,
        )
        rows.extend(page.data)
    print(f"  Loaded {len(rows)} / {page.total_rows} rows")
```
| Code | HTTP | When it fires |
| --- | --- | --- |
| `source_not_found` | 404 | Source ID does not exist |
| `no_data` | 400/404 | Source has no data (append requires existing data) |
| `source_not_in_scope` | 403 | The key's source scope does not include this source |
| `insufficient_scope` | 403 | The key is missing the required scope |
| `too_many_rows` | 400 | Request exceeds the 100,000-row limit |
| `empty_data` | 400 | `rows` array is empty or has no columns |
| `payload_too_large` | 413 | Request body exceeds 50 MB |
| `invalid_sql` | 400 | SQL contains a non-SELECT statement |
| `query_failed` | 400 | SQL parsed but failed to execute |

The Python SDK maps these to typed exceptions — NotFoundError, PermissionError, ValidationError, RateLimitError, etc. See querri._exceptions for the full hierarchy.

  • Python SDK — pip install querri — full sync + async clients with typed responses.
  • Querri CLI — same package; querri auth login and you’re scripting in seconds.
  • Authentication — full details on API keys, JWT, and cookie auth.
  • API Keys — creating, scoping, rotating, and revoking keys.
  • API Reference — complete endpoint listing across the public API (projects, dashboards, files, policies, etc.).