
Chat API

The Chat API provides an interactive conversational interface for data analysis. Users can ask questions in natural language, and Querri generates and executes data queries, returning results in a streaming format.

Base URL: https://api.querri.com/api

All Chat API endpoints require JWT authentication. See Authentication for details.

Send a message to the chat interface for a specific project. The API responds with streaming Server-Sent Events (SSE) containing steps, execution updates, and results.

POST /api/projects/{uuid}/chat

Headers:

Authorization: Bearer {JWT_TOKEN}
Content-Type: application/json
Accept: text/event-stream

Path Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| uuid | string | Project UUID to provide context for the chat |

Request Body:

{
  "message": "Show me the top 10 customers by revenue in Q4",
  "model": "gpt-4o",
  "context": {
    "previous_message_id": "msg_01ABCDEF",
    "include_history": true
  },
  "parameters": {
    "temperature": 0.7,
    "max_tokens": 2000
  }
}

Request Parameters:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| message | string | Yes | User's natural language query |
| model | string | No | LLM model to use (default: gpt-4o) |
| context.previous_message_id | string | No | ID of previous message for conversation continuity |
| context.include_history | boolean | No | Include full conversation history (default: true) |
| parameters.temperature | float | No | LLM temperature 0.0-2.0 (default: 0.7) |
| parameters.max_tokens | integer | No | Maximum response tokens (default: 2000) |
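As a sketch, the defaults and validation rules above can be encoded in a small client-side helper. This function is hypothetical (not part of any official SDK) and simply assembles a request body:

```python
def build_chat_request(message, model="gpt-4o", previous_message_id=None,
                       include_history=True, temperature=0.7, max_tokens=2000):
    """Assemble a chat request body using the documented defaults."""
    if not message or not message.strip():
        raise ValueError("message is required and cannot be empty")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    body = {
        "message": message,
        "model": model,
        "context": {"include_history": include_history},
        "parameters": {"temperature": temperature, "max_tokens": max_tokens},
    }
    # Only include previous_message_id when continuing a conversation
    if previous_message_id is not None:
        body["context"]["previous_message_id"] = previous_message_id
    return body
```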

The Chat API uses Server-Sent Events (SSE) to stream responses in real-time. Each event contains a JSON payload with the current state.

First event confirming message received:

event: message_received
data: {"message_id": "msg_01ABCDEF", "timestamp": "2025-01-15T10:00:00Z"}

AI is analyzing the request:

event: thinking
data: {"status": "analyzing", "message": "Analyzing your request..."}

A new execution step was created:

event: step_created
data: {
  "step_id": "step_01ABCDEF",
  "type": "sql_query",
  "name": "Extract top customers by revenue",
  "order": 1
}

Step execution started:

event: step_executing
data: {
  "step_id": "step_01ABCDEF",
  "status": "running",
  "started_at": "2025-01-15T10:00:05Z"
}

Progress updates during execution:

event: step_progress
data: {
  "step_id": "step_01ABCDEF",
  "progress": {
    "percentage": 45,
    "rows_processed": 4500,
    "estimated_total": 10000
  }
}

Step finished successfully:

event: step_completed
data: {
  "step_id": "step_01ABCDEF",
  "status": "completed",
  "completed_at": "2025-01-15T10:00:15Z",
  "duration_seconds": 10,
  "result_summary": {
    "rows": 10,
    "columns": 3,
    "data_url": "/api/projects/proj_01ABCDEF/steps/step_01ABCDEF/data"
  }
}

Step execution failed:

event: step_failed
data: {
  "step_id": "step_01ABCDEF",
  "status": "failed",
  "error": {
    "code": "sql_execution_error",
    "message": "Table 'customers' does not exist",
    "details": "Check your database schema"
  }
}

LLM generating response text (streamed token by token):

event: response_token
data: {"token": "Based", "position": 0}
event: response_token
data: {"token": " on", "position": 1}
event: response_token
data: {"token": " the", "position": 2}

Full response finished:

event: response_complete
data: {
  "message_id": "msg_01GHIJKL",
  "response": "Based on the data, here are the top 10 customers by revenue in Q4:\n\n1. Acme Corp - $125,000\n2. TechStart Inc - $98,500\n...",
  "steps": ["step_01ABCDEF"],
  "execution_time_seconds": 12
}

Error during processing:

event: error
data: {
  "error": "execution_failed",
  "message": "Failed to execute query",
  "step_id": "step_01ABCDEF",
  "recoverable": true
}

Stream complete:

event: done
data: {"timestamp": "2025-01-15T10:00:20Z"}

curl -N -X POST "https://api.querri.com/api/projects/proj_01ABCDEF/chat" \
-H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9..." \
-H "Content-Type: application/json" \
-H "Accept: text/event-stream" \
-d '{
  "message": "Show me the top 10 customers by revenue in Q4",
  "model": "gpt-4o",
  "context": {
    "include_history": true
  }
}'

Example response stream:

event: message_received
data: {"message_id": "msg_01ABCDEF", "timestamp": "2025-01-15T10:00:00Z"}
event: thinking
data: {"status": "analyzing", "message": "Analyzing your request..."}
event: step_created
data: {"step_id": "step_01ABCDEF", "type": "sql_query", "name": "Extract top customers", "order": 1}
event: step_executing
data: {"step_id": "step_01ABCDEF", "status": "running", "started_at": "2025-01-15T10:00:05Z"}
event: step_progress
data: {"step_id": "step_01ABCDEF", "progress": {"percentage": 50, "rows_processed": 5000}}
event: step_completed
data: {
  "step_id": "step_01ABCDEF",
  "status": "completed",
  "completed_at": "2025-01-15T10:00:15Z",
  "duration_seconds": 10,
  "result_summary": {
    "rows": 10,
    "columns": 3,
    "data_url": "/api/projects/proj_01ABCDEF/steps/step_01ABCDEF/data"
  }
}
event: response_token
data: {"token": "Based on the Q4 data, here are the top 10 customers:\n\n"}
event: response_token
data: {"token": "1. **Acme Corp** - $125,000\n"}
event: response_token
data: {"token": "2. **TechStart Inc** - $98,500\n"}
event: response_complete
data: {
  "message_id": "msg_01GHIJKL",
  "response": "Based on the Q4 data, here are the top 10 customers:\n\n1. **Acme Corp** - $125,000\n2. **TechStart Inc** - $98,500\n...",
  "steps": ["step_01ABCDEF"],
  "execution_time_seconds": 12
}
event: done
data: {"timestamp": "2025-01-15T10:00:20Z"}

Each SSE event follows this structure:

event: {event_type}
data: {json_payload}

Events are separated by blank lines. The data field always contains valid JSON.
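This framing is straightforward to parse. The sketch below (illustrative Python, not an official client) splits a raw SSE buffer into (event, payload) pairs:

```python
import json

def parse_sse(raw):
    """Split a raw SSE buffer into (event_type, payload) pairs.

    Events are separated by blank lines; each event carries an
    `event:` line followed by a `data:` line with a JSON payload.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        event_type, payload = None, None
        for line in block.splitlines():
            if line.startswith("event:"):
                event_type = line.split(":", 1)[1].strip()
            elif line.startswith("data:"):
                payload = json.loads(line.split(":", 1)[1].strip())
        if event_type is not None:
            events.append((event_type, payload))
    return events
```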


The Chat API maintains conversation context across multiple messages.

# First message
curl -X POST "https://api.querri.com/api/projects/proj_01ABCDEF/chat" \
-H "Authorization: Bearer ${JWT_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"message": "Show me total revenue by month"}'
# Follow-up message (using previous message_id)
curl -X POST "https://api.querri.com/api/projects/proj_01ABCDEF/chat" \
-H "Authorization: Bearer ${JWT_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
  "message": "Now break that down by product category",
  "context": {
    "previous_message_id": "msg_01ABCDEF",
    "include_history": true
  }
}'
  • Chat history is automatically managed
  • Last 10 messages are included by default
  • Context includes previous queries, results, and generated steps
  • Long contexts are automatically summarized
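
One way to track continuity on the client is a small wrapper that remembers the last message_id from each response_complete event. This class is a hypothetical helper, not part of the API or any SDK:

```python
class Conversation:
    """Track message IDs across turns (illustrative sketch)."""

    def __init__(self):
        self.last_message_id = None

    def next_request(self, message):
        # Include previous_message_id only once a prior turn exists
        body = {"message": message, "context": {"include_history": True}}
        if self.last_message_id:
            body["context"]["previous_message_id"] = self.last_message_id
        return body

    def record_response(self, response_complete_payload):
        # response_complete events carry the new message_id
        self.last_message_id = response_complete_payload["message_id"]
```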

| Model | Description | Best For |
| --- | --- | --- |
| gpt-4o | GPT-4 Omni (default) | Complex analysis, multi-step queries |
| gpt-4-turbo | Fast GPT-4 | Quick queries, simple aggregations |
| claude-3-5-sonnet | Claude Sonnet | Code generation, detailed explanations |
| claude-3-5-haiku | Claude Haiku | Simple queries, fast responses |

Advanced generation parameters can also be supplied in the request body:

{
  "message": "Analyze sales trends",
  "model": "gpt-4o",
  "parameters": {
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0
  }
}

{
  "error": "not_found",
  "message": "Project not found or access denied",
  "project_uuid": "proj_01INVALID"
}

HTTP Status: 404 Not Found

{
  "error": "validation_error",
  "message": "Message is required and cannot be empty",
  "field": "message"
}

HTTP Status: 400 Bad Request

event: error
data: {
  "error": "connector_unavailable",
  "message": "Database connector is not accessible",
  "connector_uuid": "conn_01ABCDEF",
  "recoverable": false
}

event: step_failed
data: {
  "step_id": "step_01ABCDEF",
  "error": {
    "code": "sql_execution_error",
    "message": "Syntax error in SQL query",
    "query": "SELECT * FORM customers",
    "details": "Did you mean 'FROM' instead of 'FORM'?"
  }
}

Note: the browser's EventSource API only supports GET requests and cannot set an Authorization header, so use fetch with a streaming reader instead:

const response = await fetch(
  'https://api.querri.com/api/projects/proj_01ABCDEF/chat',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json',
      'Accept': 'text/event-stream'
    },
    body: JSON.stringify({
      message: 'Show me top customers',
      model: 'gpt-4o'
    })
  }
);

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let eventType = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // Keep any partial line in the buffer until the next chunk arrives
  const lines = buffer.split('\n');
  buffer = lines.pop();

  for (const line of lines) {
    if (line.startsWith('event:')) {
      eventType = line.slice(6).trim();
    } else if (line.startsWith('data:')) {
      const data = JSON.parse(line.slice(5).trim());
      if (eventType === 'step_created') {
        console.log('Step created:', data.step_id);
      } else if (eventType === 'response_token') {
        document.getElementById('response').textContent += data.token;
      } else if (eventType === 'done') {
        console.log('Stream complete');
      }
    }
  }
}
import requests
import json

url = "https://api.querri.com/api/projects/proj_01ABCDEF/chat"
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
    "Accept": "text/event-stream",
}
payload = {
    "message": "Show me top customers",
    "model": "gpt-4o",
}

response = requests.post(url, headers=headers, json=payload, stream=True)
event_type = None
for line in response.iter_lines():
    if not line:
        continue  # blank lines separate SSE events
    line = line.decode("utf-8")
    if line.startswith("event:"):
        event_type = line.split(":", 1)[1].strip()
    elif line.startswith("data:"):
        data = json.loads(line.split(":", 1)[1].strip())
        print(f"{event_type}: {data}")
curl -N -X POST "https://api.querri.com/api/projects/proj_01ABCDEF/chat" \
-H "Authorization: Bearer ${JWT_TOKEN}" \
-H "Content-Type: application/json" \
-H "Accept: text/event-stream" \
-d '{
  "message": "Show me top customers",
  "model": "gpt-4o"
}'

The Chat API enforces its own rate limits:

| Limit Type | Value | Window |
| --- | --- | --- |
| Messages per user | 100 | 1 hour |
| Concurrent streams | 3 | Per user |
| Token generation rate | 10,000 | Per minute |

When rate limited:

event: error
data: {
  "error": "rate_limit_exceeded",
  "message": "Too many concurrent chat streams",
  "retry_after": 60
}

HTTP Status: 429 Too Many Requests
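
A client can honor the retry_after hint before retrying. This sketch shows one approach; the 60-second fallback is an assumption, not documented behavior:

```python
import time

def wait_for_retry(error_payload, sleep=time.sleep):
    """Sleep for the retry_after hint from a rate-limit error payload.

    Returns the delay used, or 0 for non-rate-limit errors.
    """
    if error_payload.get("error") != "rate_limit_exceeded":
        return 0
    # Fall back to 60 seconds when no hint is present (assumed default)
    delay = error_payload.get("retry_after", 60)
    sleep(delay)
    return delay
```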


  1. Handle Reconnection - Implement exponential backoff for connection failures
  2. Process Events Incrementally - Update UI as events arrive, don’t wait for completion
  3. Store Message IDs - Track conversation history using message IDs
  4. Handle Errors Gracefully - Display user-friendly error messages
  5. Implement Timeouts - Close connections after reasonable timeout (e.g., 5 minutes)
  6. Buffer Response Tokens - Batch token updates to avoid excessive DOM updates
  7. Cancel Long-Running Queries - Provide user option to cancel execution
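
For best practice #1, the reconnection delay can be computed with exponential backoff plus jitter. The base and cap values below are illustrative assumptions, not API requirements:

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0, rng=random.random):
    """Exponential backoff with jitter for SSE reconnection attempts.

    Doubles the delay each attempt, caps it, and randomizes the upper
    half ("equal jitter") so clients don't reconnect in lockstep.
    """
    exp = min(cap, base * (2 ** attempt))
    return exp / 2 + rng() * exp / 2
```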

User: "Show me revenue by month"
→ step_created: SQL query to aggregate revenue
→ step_executing: Running query
→ step_completed: 12 rows returned
→ response: "Here's the monthly revenue..." [with data]
User: "Now show only Q4"
→ [Using context from previous message]
→ step_created: SQL query with WHERE clause for Q4
→ step_executing: Running filtered query
→ step_completed: 3 rows returned
→ response: "Q4 revenue breakdown..." [with filtered data]
User: "Create a chart"
→ step_created: Visualization step
→ step_executing: Generating chart
→ step_completed: Chart generated
→ response: "Here's the chart..." [with image URL]

| Code | Description |
| --- | --- |
| 200 | Success - stream started |
| 400 | Bad Request - invalid message or parameters |
| 401 | Unauthorized - missing/invalid token |
| 403 | Forbidden - no access to project |
| 404 | Not Found - project doesn't exist |
| 429 | Too Many Requests - rate limited |
| 500 | Internal Server Error |