
Data API

The Data API lets external tools create and manage data sources in Querri programmatically. Use it to push data from Zapier, sync records from internal systems, or build custom integrations that keep Querri up to date automatically.

The Data API provides full CRUD operations on data sources:

| Operation | Method | Endpoint | Scope |
| --- | --- | --- | --- |
| List sources | GET | /data/sources | data:read |
| Get schema | GET | /data/sources/{id} | data:read |
| Read data | GET | /data/sources/{id}/data | data:read |
| Query data | POST | /data/query | data:read |
| Create source | POST | /data/sources | data:write |
| Append rows | POST | /data/sources/{id}/rows | data:write |
| Replace data | PUT | /data/sources/{id}/data | data:write |
| Delete source | DELETE | /data/sources/{id} | data:write |

All requests require a qk_ API key with the appropriate scope. See Authentication for details.

```sh
curl -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  https://app.querri.com/api/v1/data/sources
```

| Use case | Scopes needed |
| --- | --- |
| Read-only integration (BI tool, dashboard) | data:read |
| Write-only ingestion (Zapier, webhook) | data:write |
| Full integration (read + write) | data:read, data:write |

Keys with only data:read cannot create, append, replace, or delete sources. Keys with only data:write cannot read data or list sources.

Create a new data source by providing a name and an array of row objects:

```sh
curl -X POST https://app.querri.com/api/v1/data/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Zapier Leads",
    "rows": [
      {"name": "Alice", "email": "alice@example.com", "score": 85},
      {"name": "Bob", "email": "bob@example.com", "score": 92}
    ]
  }'
```

Response (201 Created):

```json
{
  "id": "src_a1b2c3d4",
  "name": "Zapier Leads",
  "columns": ["name", "email", "score"],
  "row_count": 2,
  "updated_at": "2026-03-08T15:30:00.000000"
}
```

Column types are automatically inferred from the data — strings, numbers, dates, and booleans are all detected.
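
Because the column list is derived from row keys, a row may omit keys that other rows carry (per the append semantics, missing values become null). A minimal sketch for previewing the resulting column set locally before uploading — columns_from_rows is a hypothetical helper name, not part of the API:

```python
def columns_from_rows(rows):
    """Union of keys across all rows, preserving first-seen order.

    Mirrors the documented behavior: a key missing from some rows
    still becomes a column, null-filled for the rows that lack it.
    """
    columns = []
    seen = set()
    for row in rows:
        for key in row:
            if key not in seen:
                seen.add(key)
                columns.append(key)
    return columns


rows = [
    {"name": "Alice", "email": "alice@example.com", "score": 85},
    {"name": "Bob", "score": 92, "active": True},
]
print(columns_from_rows(rows))  # ['name', 'email', 'score', 'active']
```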

List all sources accessible to the key:

```sh
curl https://app.querri.com/api/v1/data/sources \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Get a single source's schema:

```sh
curl https://app.querri.com/api/v1/data/sources/{source_id} \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Returns column names, types, and row count.

Read rows from a source, with pagination:

```sh
curl "https://app.querri.com/api/v1/data/sources/{source_id}/data?page=1&page_size=100" \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Response:

```json
{
  "data": [
    {"name": "Alice", "email": "alice@example.com", "score": 85},
    {"name": "Bob", "email": "bob@example.com", "score": 92}
  ],
  "total_rows": 2,
  "page": 1,
  "page_size": 100
}
```

Maximum page_size is 10,000.
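
To read more rows than one page allows, loop until total_rows is reached. A sketch using the requests library (fetch_all_rows and page_count are our names, not part of any SDK; the endpoint and response fields come from the example above):

```python
import math

import requests


def page_count(total_rows, page_size):
    """Number of pages needed to cover total_rows."""
    return math.ceil(total_rows / page_size) if total_rows else 0


def fetch_all_rows(base_url, headers, source_id, page_size=1000):
    """Read every row of a source, one page at a time (page_size <= 10,000)."""
    rows, page, total = [], 1, None
    while total is None or len(rows) < total:
        resp = requests.get(
            f"{base_url}/data/sources/{source_id}/data",
            headers=headers,
            params={"page": page, "page_size": page_size},
        )
        resp.raise_for_status()
        body = resp.json()
        rows.extend(body["data"])
        total = body["total_rows"]
        page += 1
    return rows
```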

Run a SQL query against a source:

```sh
curl -X POST https://app.querri.com/api/v1/data/query \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "sql": "SELECT name, score FROM data WHERE score > 80",
    "source_id": "src_a1b2c3d4",
    "page": 1,
    "page_size": 100
  }'
```

The query runs in DuckDB with row-level security (RLS) applied automatically. Only SELECT queries are allowed.
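
The same query can be issued from a script. A sketch with requests (run_query and looks_like_select are hypothetical names; the server-side SELECT-only rule is authoritative, the client-side check just fails fast before a round trip):

```python
import requests


def looks_like_select(sql):
    """Cheap client-side guard: the API rejects anything but SELECT."""
    return sql.lstrip().lower().startswith("select")


def run_query(base_url, headers, source_id, sql, page=1, page_size=100):
    """POST a SELECT query to /data/query and return the parsed response."""
    if not looks_like_select(sql):
        raise ValueError("Only SELECT queries are allowed")
    resp = requests.post(
        f"{base_url}/data/query",
        headers=headers,
        json={"sql": sql, "source_id": source_id,
              "page": page, "page_size": page_size},
    )
    resp.raise_for_status()
    return resp.json()
```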

Add rows to an existing source. Columns are matched by name — new columns are added, missing columns are filled with null values.

```sh
curl -X POST https://app.querri.com/api/v1/data/sources/{source_id}/rows \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"name": "Charlie", "email": "charlie@example.com", "score": 78},
      {"name": "Diana", "email": "diana@example.com", "score": 95}
    ]
  }'
```

Response:

```json
{
  "id": "src_a1b2c3d4",
  "name": "Zapier Leads",
  "columns": ["name", "email", "score"],
  "row_count": 4,
  "updated_at": "2026-03-08T16:00:00.000000"
}
```

This is ideal for Zapier triggers or webhook-based ingestion where new records arrive over time.

Replace all data in a source. The existing data is removed and the new rows take its place.

```sh
curl -X PUT https://app.querri.com/api/v1/data/sources/{source_id}/data \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"name": "Eve", "email": "eve@example.com", "score": 100}
    ]
  }'
```

Use this when your integration produces a complete dataset on each sync (e.g., a nightly export from another system).

Remove a source and all its data permanently.

```sh
curl -X DELETE https://app.querri.com/api/v1/data/sources/{source_id} \
  -H "Authorization: Bearer qk_your_key_here" \
  -H "X-Tenant-ID: your_org_id"
```

Response:

```json
{
  "id": "src_a1b2c3d4",
  "deleted": true
}
```

This deletes the source metadata, QDF metadata, and the underlying data file.

| Limit | Value |
| --- | --- |
| Rows per request | 100,000 |
| Payload size | 50 MB |
| Page size (reads) | 10,000 |
| SQL query length | 10,000 characters |
| Rate limit | Per-key setting (default 60/min) |
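
Large syncs need to stay under the 100,000-row and 50 MB per-request limits; one approach is to split the dataset and send it as a series of append requests. A minimal sketch (chunk_rows is a hypothetical helper; the byte estimate is approximate, based on JSON-serializing each row):

```python
import json


def chunk_rows(rows, max_rows=100_000, max_bytes=50 * 1024 * 1024):
    """Yield slices of rows that respect the per-request limits."""
    chunk, size = [], 0
    for row in rows:
        row_bytes = len(json.dumps(row))
        # Start a new chunk once either limit would be exceeded
        if chunk and (len(chunk) >= max_rows or size + row_bytes > max_bytes):
            yield chunk
            chunk, size = [], 0
        chunk.append(row)
        size += row_bytes
    if chunk:
        yield chunk
```

Each yielded chunk can then be sent to POST /data/sources/{id}/rows in turn, respecting the key's rate limit between requests.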

If your API key uses explicit source scope (only specific sources are accessible), new sources created via the API are not automatically added to the key’s scope. Use an API key with "mode": "all" source scope for integrations that create sources, or update the key’s scope after creation.

Read operations (data:read) enforce the API key’s row-level security (RLS) automatically. If the key has a bound_user_id, RLS is evaluated as that user. If the key has access_policy_ids, those policies are applied.

Write operations (data:write) do not evaluate RLS — the key either has write access or it doesn’t.

Before building an integration, create the right API key in Querri. Go to Settings → API Keys (/settings/api) and click Create Key.

| Integration type | Key name | Scopes | Source scope | Notes |
| --- | --- | --- | --- | --- |
| Zapier (create + append) | Zapier - CRM Sync | data:read, data:write | All sources | Needs both scopes to create sources and verify data |
| Zapier (append only) | Zapier - Lead Capture | data:write | All sources | Write-only if you never need to read back |
| Nightly batch sync | Nightly Sync Bot | data:read, data:write | All sources | Needs read to verify, write to replace |
| BI tool / dashboard | Tableau Read-Only | data:read | All sources (or explicit) | Read-only, optionally locked to specific sources |
| Restricted integration | Partner API | data:read | Explicit: [src_abc, src_def] | Can only see listed sources |
Key settings:

  • Bound user: If set, read operations evaluate row-level security (RLS) as that user. Leave unset for full data access.
  • Source scope: "All sources" lets the key access any source. "Explicit" restricts to a specific list of source IDs.
  • Expiration: Keys expire after the set period (default 90 days, max 1 year). Rotate keys before they expire.
  • Rate limit: Default 60 requests/minute per key. Increase for high-volume integrations.
Best practices:

  1. Use the minimum scopes needed. A Zapier webhook that only appends rows needs data:write — not data:read.
  2. Create separate keys per integration. Don’t share a key between Zapier and your BI tool.
  3. Set IP allowlists for server-to-server integrations where the source IPs are known.
  4. Store the secret securely. The qk_ secret is shown once at creation — save it in your integration’s secrets manager (Zapier’s vault, AWS Secrets Manager, etc.).

This is the most common pattern — a Zap fires whenever a new record appears in another app and appends it to a Querri source.

Prerequisites:

  • A qk_ key with data:write scope (add data:read if you want the Zap to verify data)
  • An existing source in Querri (create one manually or via the API first)

Step-by-step Zapier setup:

  1. Create the source first (one-time setup via curl or the API):

    ```sh
    curl -X POST https://app.querri.com/api/v1/data/sources \
      -H "Authorization: Bearer qk_your_key_here" \
      -H "X-Tenant-ID: your_org_id" \
      -H "Content-Type: application/json" \
      -d '{
        "name": "HubSpot Contacts",
        "rows": [{"name": "Seed Record", "email": "seed@example.com", "company": "Example Inc"}]
      }'
    ```

    Save the returned id — you’ll use it in the Zap.

  2. Create a new Zap in Zapier:

    • Trigger: Choose your source app (HubSpot, Salesforce, Google Sheets, etc.) → “New Contact” / “New Row” / etc.
    • Action: Choose “Webhooks by Zapier” → “Custom Request”
  3. Configure the webhook action:

    • Method: POST
    • URL: https://app.querri.com/api/v1/data/sources/{source_id}/rows
    • Headers:
      Authorization: Bearer qk_your_key_here
      X-Tenant-ID: your_org_id
      Content-Type: application/json
    • Body:

      {
        "rows": [
          {
            "name": "{{name}}",
            "email": "{{email}}",
            "company": "{{company}}",
            "created_at": "{{created_date}}"
          }
        ]
      }
      Replace {{name}}, {{email}}, etc. with Zapier’s dynamic field mapping from your trigger.
  4. Test the Zap — send a test record and verify it appears in Querri.

For integrations that produce entire datasets (e.g., a weekly report), create a new source each time:

  • Method: POST
  • URL: https://app.querri.com/api/v1/data/sources
  • Headers:
    Authorization: Bearer qk_your_key_here
    X-Tenant-ID: your_org_id
    Content-Type: application/json
  • Body:

    {
      "name": "Weekly Report - {{zap_meta_human_now}}",
      "rows": {{report_data_as_json_array}}
    }

Replace all data on a schedule (e.g., nightly cron job or Zapier Schedule trigger):

```python
import requests

API_KEY = "qk_your_key_here"
ORG_ID = "your_org_id"
SOURCE_ID = "src_a1b2c3d4"
BASE_URL = "https://app.querri.com/api/v1"

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Tenant-ID": ORG_ID,
    "Content-Type": "application/json",
}

# Step 1: Fetch fresh data from your system
rows = fetch_all_records()  # Returns a list of dicts

# Step 2: Replace all data in Querri
resp = requests.put(
    f"{BASE_URL}/data/sources/{SOURCE_ID}/data",
    headers=HEADERS,
    json={"rows": rows},
)
resp.raise_for_status()
result = resp.json()
print(f"Synced {result['row_count']} rows, columns: {result['columns']}")

# Step 3 (optional): Verify by reading back
verify = requests.get(f"{BASE_URL}/data/sources/{SOURCE_ID}", headers=HEADERS)
verify.raise_for_status()
schema = verify.json()
assert schema["row_count"] == len(rows), "Row count mismatch!"
```

Create a key with only data:read scope for external BI tools:

```python
import requests

API_KEY = "qk_readonly_key"
ORG_ID = "your_org_id"
BASE_URL = "https://app.querri.com/api/v1"

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Tenant-ID": ORG_ID,
}

# List all accessible sources
sources = requests.get(f"{BASE_URL}/data/sources", headers=HEADERS).json()

for source in sources["data"]:
    print(f"\n{source['name']} ({source['id']}): {source['row_count']} rows")
    print(f"  Columns: {source['columns']}")

    # Read first page of data
    data = requests.get(
        f"{BASE_URL}/data/sources/{source['id']}/data?page=1&page_size=1000",
        headers=HEADERS,
    ).json()

    # Paginate through all data if needed
    all_rows = data["data"]
    total = data["total_rows"]
    page = 2
    while len(all_rows) < total:
        next_page = requests.get(
            f"{BASE_URL}/data/sources/{source['id']}/data?page={page}&page_size=1000",
            headers=HEADERS,
        ).json()
        all_rows.extend(next_page["data"])
        page += 1
    print(f"  Loaded {len(all_rows)} / {total} rows")
```

A full create → append → read → replace → delete flow:

```javascript
const BASE_URL = "https://app.querri.com/api/v1";
const API_KEY = "qk_your_key_here";
const ORG_ID = "your_org_id";

const headers = {
  Authorization: `Bearer ${API_KEY}`,
  "X-Tenant-ID": ORG_ID,
  "Content-Type": "application/json",
};

// 1. Create a source
const createResp = await fetch(`${BASE_URL}/data/sources`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "CRM Contacts",
    rows: [
      { name: "Alice", email: "alice@corp.com", deal_stage: "qualified" },
    ],
  }),
});
const source = await createResp.json();
const sourceId = source.id;
console.log(`Created source ${sourceId}`);

// 2. Append rows (as new records arrive)
await fetch(`${BASE_URL}/data/sources/${sourceId}/rows`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    rows: [
      { name: "Bob", email: "bob@corp.com", deal_stage: "proposal" },
      { name: "Carol", email: "carol@corp.com", deal_stage: "closed_won" },
    ],
  }),
});

// 3. Read data back
const readResp = await fetch(
  `${BASE_URL}/data/sources/${sourceId}/data?page=1&page_size=100`,
  { headers }
);
const data = await readResp.json();
console.log(`${data.total_rows} contacts loaded`);

// 4. Replace with fresh export (nightly sync)
await fetch(`${BASE_URL}/data/sources/${sourceId}/data`, {
  method: "PUT",
  headers,
  body: JSON.stringify({
    rows: freshCRMExport, // Your complete dataset
  }),
});

// 5. Delete when integration is decommissioned
await fetch(`${BASE_URL}/data/sources/${sourceId}`, {
  method: "DELETE",
  headers,
});
```

| Code | HTTP | Description |
| --- | --- | --- |
| source_not_found | 404 | Source ID does not exist |
| no_data | 400/404 | Source has no data (append requires existing data) |
| source_not_in_scope | 403 | API key's source scope does not include this source |
| insufficient_scope | 403 | API key does not have the required scope |
| too_many_rows | 400 | Request exceeds the 100,000 row limit |
| empty_data | 400 | Rows array contains no columns |
| payload_too_large | 413 | Request body exceeds 50 MB |
| invalid_sql | 400 | SQL query contains disallowed statements |
| query_failed | 400 | SQL query execution failed |
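
These codes map naturally onto remediation in client code. A minimal sketch (explain_error is a hypothetical helper; it assumes your client can extract the error code from the response body, whose exact shape is not shown here — inspect a real error response first):

```python
SCOPE_ERRORS = {"insufficient_scope", "source_not_in_scope"}
SIZE_ERRORS = {"too_many_rows", "payload_too_large"}
REQUEST_ERRORS = {"invalid_sql", "query_failed", "empty_data", "no_data"}


def explain_error(code):
    """Map an API error code (from the table above) to a likely fix."""
    if code in SCOPE_ERRORS:
        return "fix the API key's scopes or source scope"
    if code in SIZE_ERRORS:
        return "split the request into smaller chunks"
    if code == "source_not_found":
        return "check the source ID (it may have been deleted)"
    if code in REQUEST_ERRORS:
        return "fix the request payload or SQL"
    return "unexpected error code"
```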