
File Uploads

Upload data files directly to Querri for instant analysis with AI. No connector setup required—just drag and drop your files and start querying.

Querri supports common data file formats:

CSV Files

File extensions: .csv, .txt

Best for:

  • Tabular data exports
  • Database dumps
  • Log files
  • Large datasets (CSV is efficient for big files)

Example use cases:

  • Customer lists
  • Sales transactions
  • Survey responses
  • Time-series data

Excel Files

File extensions: .xls, .xlsx, .xlsm

Best for:

  • Spreadsheets from Microsoft Excel
  • Data with multiple sheets
  • Formatted tables
  • Business reports

Features supported:

  • Multiple worksheets (you can query specific sheets)
  • Named ranges
  • Formulas (values are extracted, not formulas)
  • Date and number formatting

Limitations:

  • Charts and images are ignored
  • Macros are not executed
  • Maximum file size: 100MB

JSON Files

File extensions: .json, .jsonl, .ndjson

Best for:

  • API responses
  • NoSQL database exports
  • Nested/hierarchical data
  • Application logs

Formats supported:

  • Standard JSON arrays: [{...}, {...}]
  • Line-delimited JSON (JSONL): one object per line
  • Nested structures (automatically flattened)

Example:

[
  {"id": 1, "name": "Alice", "orders": [{"total": 100}]},
  {"id": 2, "name": "Bob", "orders": [{"total": 250}]}
]
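The same two records in line-delimited form (JSONL/NDJSON) put one complete object per line, which is easier to stream and split for very large files:

```jsonl
{"id": 1, "name": "Alice", "orders": [{"total": 100}]}
{"id": 2, "name": "Bob", "orders": [{"total": 250}]}
```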

Parquet Files

File extensions: .parquet

Best for:

  • Large analytical datasets
  • Data warehouse exports
  • Columnar data storage
  • Big data pipelines

Benefits:

  • Highly compressed (smaller file sizes)
  • Fast query performance
  • Preserves data types perfectly
  • Efficient for wide tables (many columns)

Note: Parquet files are binary and can’t be opened in text editors, but Querri reads them natively.

The Library is your central hub for uploaded files and datasets.

Steps to upload:

  1. Navigate to Library

    • Click Library in the main navigation
    • You’ll see all your previously uploaded files
  2. Initiate Upload

    • Click the Upload File button
    • Or drag and drop files directly onto the Library page
  3. Select Files

    • Choose one or more files from your computer
    • You can upload multiple files at once
    • Files are queued for processing
  4. Upload Progress

    • A progress bar shows upload status
    • Large files may take a few moments
    • You can continue working while files upload
  5. File Processing

    • Querri analyzes the file structure
    • Column names and data types are detected
    • A preview is generated automatically
  6. Ready to Query

    • Once processing completes, the file appears in your Library
    • You can immediately start querying it in chat

The fastest way to upload files:

  1. Open the Library page
  2. Drag a file from your computer
  3. Drop it onto the Library interface
  4. The file uploads and processes automatically

Tip: You can drag multiple files at once for batch uploads.

Maximum file sizes:

  • CSV: 500MB
  • Excel: 100MB
  • JSON: 250MB
  • Parquet: 1GB

For larger files:

  • Split into multiple files
  • Use Parquet format (better compression)
  • Filter data before exporting
  • Connect directly to your database instead of uploading
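Splitting can be scripted with Python's standard csv module. The helper below is a sketch (the name split_csv and the 100,000-row default are illustrative, not part of Querri); it repeats the header row in every part so each file uploads as a valid standalone CSV:

```python
import csv

def split_csv(path, rows_per_file=100_000):
    """Split a large CSV into numbered parts, repeating the header in each."""
    with open(path, newline='', encoding='utf-8') as src:
        reader = csv.reader(src)
        header = next(reader)
        part, out, writer, count = 0, None, None, 0
        paths = []
        for row in reader:
            # Start a new part file when the current one is full (or first row).
            if writer is None or count >= rows_per_file:
                if out:
                    out.close()
                part += 1
                part_path = f"{path.rsplit('.', 1)[0]}_part{part}.csv"
                paths.append(part_path)
                out = open(part_path, 'w', newline='', encoding='utf-8')
                writer = csv.writer(out)
                writer.writerow(header)
                count = 0
            writer.writerow(row)
            count += 1
        if out:
            out.close()
    return paths
```

Each resulting part can then be uploaded separately and queried across files.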

After upload, Querri shows a data preview:

Preview includes:

  • First 10 rows of data
  • Column names
  • Detected data types
  • Row count
  • File size

Data type detection:

  • Text/String
  • Integer
  • Decimal/Float
  • Date/DateTime
  • Boolean
  • JSON (for nested fields)

Reviewing the preview:

  • Verify column names are correct
  • Check if data types are accurate
  • Look for parsing issues
  • Confirm the data looks as expected

If the preview looks wrong:

  • Check file encoding (should be UTF-8)
  • Verify CSV delimiter (comma, semicolon, tab)
  • Ensure Excel sheet has data in the first sheet
  • For JSON, confirm valid formatting

Querri automatically detects the structure of your data:

From headers:

  • CSV: First row is treated as column names
  • Excel: First row of each sheet
  • JSON: Object keys become column names

If no headers:

  • CSV files without headers get generic names: column_1, column_2, etc.
  • You can specify if your CSV lacks headers
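To preview locally how a headerless file will be named, you can assign generic names yourself. This sketch uses Python's standard csv module with illustrative data; the column_1-style names mirror the convention described above:

```python
import csv
from io import StringIO

# Sample headerless CSV (illustrative data).
raw = "1001,John Smith,125.50\n1002,Jane Doe,89.99\n"

rows = list(csv.reader(StringIO(raw)))
# Generate generic names matching the width of the first row.
names = [f"column_{i}" for i in range(1, len(rows[0]) + 1)]
records = [dict(zip(names, row)) for row in rows]
# records[0] == {'column_1': '1001', 'column_2': 'John Smith', 'column_3': '125.50'}
```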

Special characters:

  • Column names with spaces or special characters are preserved
  • Use quotes in queries: "Order Total" instead of Order Total

Querri infers data types by sampling the data:

Numeric detection:

"123" → Integer
"123.45" → Decimal
"$1,234.56" → Decimal (currency symbols stripped)

Date detection:

"2024-01-15" → Date
"2024-01-15 14:30:00" → DateTime
"1/15/2024" → Date

Boolean detection:

"true", "false" → Boolean
"yes", "no" → Boolean
"1", "0" → May be Boolean or Integer (context-dependent)

Text default:

  • If type is ambiguous, defaults to text
  • You can cast types in queries if needed
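The inference rules above can be sketched as a simple per-value classifier. This is a hypothetical helper for illustration, not Querri's actual implementation:

```python
import re
from datetime import datetime

def infer_type(value: str) -> str:
    """Guess a column type from one string value (simplified sketch)."""
    v = value.strip()
    if v.lower() in ("true", "false", "yes", "no"):
        return "Boolean"
    # Strip currency symbols and thousands separators before numeric checks.
    cleaned = re.sub(r"[$,]", "", v)
    if re.fullmatch(r"-?\d+", cleaned):
        return "Integer"  # note: "1"/"0" land here unless context says Boolean
    if re.fullmatch(r"-?\d+\.\d+", cleaned):
        return "Decimal"
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%m/%d/%Y"):
        try:
            datetime.strptime(v, fmt)
            return "DateTime" if "%H" in fmt else "Date"
        except ValueError:
            pass
    return "Text"  # ambiguous values default to text
```

A real sampler would apply this across many rows per column and pick the type the majority of values agree on.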

For nested JSON, Querri flattens the structure:

Input:

{
  "user": {
    "name": "Alice",
    "address": {
      "city": "Boston"
    }
  }
}

Flattened columns:

  • user.name → “Alice”
  • user.address.city → “Boston”

Arrays in JSON:

  • Arrays are expanded into multiple rows
  • Or stored as JSON text if very nested
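Dot-path flattening of this kind can be sketched with a small recursive helper (illustrative only, not Querri's internals):

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-separated keys, e.g. user.address.city."""
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{path}."))
        else:
            flat[path] = value
    return flat

record = {"user": {"name": "Alice", "address": {"city": "Boston"}}}
# flatten(record) == {"user.name": "Alice", "user.address.city": "Boston"}
```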

UTF-8 is the preferred encoding for all text files:

  • Supports all languages and special characters
  • No conversion needed
  • Fastest processing

Save as UTF-8:

  • Excel: Save As → CSV UTF-8
  • Notepad++: Encoding → UTF-8
  • Google Sheets: Download → CSV

If your file uses a different encoding:

Common encodings:

  • Latin-1 (ISO-8859-1): Western European
  • Windows-1252: Windows default
  • UTF-16: Some database exports

Querri auto-detects:

  • Most common encodings are detected automatically
  • If characters look garbled, the encoding may not be detected

Fix encoding issues:

  1. Open file in a text editor
  2. Re-save as UTF-8
  3. Upload the new file

Example (using Python):

import pandas as pd

# Read with the file's original encoding, then write it back out as UTF-8.
df = pd.read_csv('file.csv', encoding='latin-1')
df.to_csv('file_utf8.csv', encoding='utf-8', index=False)

Structure:

  • Include a header row with column names
  • Use comma as delimiter (or specify custom delimiter)
  • One record per row
  • Avoid merged cells

Formatting:

  • Remove extra empty rows/columns
  • Don’t use special characters in column names (or quote them)
  • Use consistent date formats (YYYY-MM-DD is best)
  • Escape commas in text fields with quotes

Example good CSV:

order_id,customer_name,order_date,total
1001,John Smith,2024-01-15,125.50
1002,Jane Doe,2024-01-16,89.99
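If you generate CSVs programmatically, Python's standard csv module handles comma escaping and quoting for you. A sketch with illustrative data:

```python
import csv
from io import StringIO

buf = StringIO()
writer = csv.writer(buf)
writer.writerow(["order_id", "customer_name", "order_date", "total"])
# The embedded comma in the name is quoted automatically.
writer.writerow([1003, "Smith, John", "2024-01-17", "42.00"])
print(buf.getvalue())
```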

Organization:

  • Put data in the first worksheet (or specify which sheet to use)
  • Start data in cell A1
  • Include column headers in row 1
  • Avoid complex formatting (colored cells, borders, etc.)

Data entry:

  • Use Excel’s date format, not text that looks like dates
  • Enter numbers as numbers, not text
  • Avoid formulas that reference other sheets
  • Remove hidden rows/columns

Multiple sheets:

  • You can upload workbooks with multiple sheets
  • Reference specific sheets in queries: “from Sheet2 show…”

Structure:

  • Use an array of objects for tabular data: [{...}, {...}]
  • Keep consistent keys across objects
  • Avoid deeply nested structures (3+ levels)
  • Use line-delimited JSON (JSONL) for very large files

Example good JSON:

[
  {
    "id": 1,
    "name": "Product A",
    "price": 29.99,
    "in_stock": true
  },
  {
    "id": 2,
    "name": "Product B",
    "price": 49.99,
    "in_stock": false
  }
]

Best practices:

  • Use Parquet for files over 50MB
  • Ideal for columnar analytics
  • Preserves precise data types
  • No special preparation needed

Creating Parquet files:

Using Python/Pandas:

import pandas as pd

# Convert CSV to Parquet (pandas needs pyarrow or fastparquet installed).
df = pd.read_csv('large_file.csv')
df.to_parquet('large_file.parquet')

Using Apache Spark:

# Read the CSV with header detection and schema inference, then write Parquet.
df = spark.read.csv('data.csv', header=True, inferSchema=True)
df.write.parquet('data.parquet')

Problem: Upload fails or gets stuck.

Check:

  • File size is under the limit
  • File isn’t corrupted (can you open it locally?)
  • Internet connection is stable
  • Browser isn’t blocking the upload

Solution:

  • Try a different browser
  • Check file size and compress if needed
  • Split large files into smaller chunks

Problem: Numbers show as text, dates are incorrect, or gibberish characters appear.

Check:

  • File encoding (should be UTF-8)
  • CSV delimiter (comma vs. semicolon vs. tab)
  • Date format consistency
  • Decimal separator (period vs. comma)

Solution:

  • Re-save file with UTF-8 encoding
  • Specify custom delimiter if not comma
  • Standardize date formats before upload
  • Clean data in Excel/Pandas before upload

Problem: Columns are missing or extra blank columns appear.

Check:

  • CSV: Ensure all rows have the same number of fields
  • Excel: Check for hidden columns
  • JSON: All objects should have the same keys

Solution:

  • Clean up the source file
  • Remove empty columns in Excel
  • Ensure CSV rows are aligned
  • Normalize JSON object keys
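Normalizing keys across JSON objects can be done with a quick pass that fills missing fields with a default (hypothetical helper for local cleanup before upload):

```python
def normalize_keys(records, fill=None):
    """Give every object the same keys, filling gaps with a default value."""
    all_keys = []
    for rec in records:
        for key in rec:
            if key not in all_keys:
                all_keys.append(key)  # preserve first-seen key order
    return [{k: rec.get(k, fill) for k in all_keys} for rec in records]

data = [{"id": 1, "name": "A"}, {"id": 2, "price": 9.99}]
# normalize_keys(data) ==
# [{"id": 1, "name": "A", "price": None},
#  {"id": 2, "name": None, "price": 9.99}]
```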

Problem: Formula results don’t appear correctly.

Understanding:

  • Querri reads the calculated values, not formulas
  • If formulas reference external workbooks, values may be missing

Solution:

  • Copy and paste values over formulas (Paste Special → Values)
  • Ensure all formulas have calculated before saving
  • Save as CSV to convert formulas to values

Once uploaded, query your files naturally:

"Show me the first 10 rows of customers.csv"
"What columns are in the sales_data file?"
"How many rows are in the orders spreadsheet?"
"What's the average order value in sales.csv?"
"Show top 10 products by revenue"
"Calculate monthly totals from transactions.xlsx"
"Show customers from California"
"Find orders over $500"
"Sort products by price descending"
"Join customers.csv with orders.csv on customer_id"
"Combine sales data from Sheet1 and Sheet2"
"Match products.json with inventory.csv"
"Show data from Sheet2 in the budget workbook"
"Compare values in Summary tab vs Details tab"
"Use the Customers sheet from my Excel file"

All uploaded files appear in the Library:

  • File name
  • Upload date
  • File size
  • Row count
  • Quick preview

Give files descriptive names:

  1. Click on the file in Library
  2. Click the edit/rename icon
  3. Enter a new name
  4. Save

Good naming:

  • customer_orders_2024_q1.csv
  • sales_forecast_final.xlsx
  • product_catalog_jan.json

Poor naming:

  • data.csv
  • export (1).xlsx
  • untitled.json

Remove files you no longer need:

  1. Select the file in Library
  2. Click Delete/Remove
  3. Confirm deletion

Note: Deleted files cannot be recovered. Re-upload if needed later.

Uploading a file with the same name:

  • Creates a new version
  • Previous version is replaced
  • Keep version numbers in filenames if you need to track versions

Optimize before uploading:

  • Remove unnecessary columns
  • Filter to relevant date ranges
  • Aggregate detailed data if possible
  • Use Parquet for large analytical datasets

Split large files:

  • Break into logical chunks (by month, region, etc.)
  • Upload separately and query across files
  • Consider using a database connection instead

Compress when possible:

  • ZIP compression before upload (for CSV/JSON)
  • Parquet has built-in compression
  • Remove duplicate data
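Zipping a CSV before upload takes a few lines with Python's standard zipfile module (the file name sales.csv is illustrative; this sketch creates a sample file so it runs standalone):

```python
import zipfile

# Illustrative: write a sample CSV, then compress it with DEFLATE.
csv_path = 'sales.csv'  # hypothetical file name
with open(csv_path, 'w', encoding='utf-8') as f:
    f.write('order_id,total\n' + '1001,125.50\n' * 1000)

with zipfile.ZipFile('sales.zip', 'w', compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(csv_path)
```

Repetitive tabular text compresses well, so the resulting archive is usually a small fraction of the original size.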

Performance expectations:

  • Small files (<10MB): Process in seconds
  • Medium files (10-100MB): May take 1-2 minutes
  • Large files (100MB+): Several minutes to process
  • Very large files (500MB+): Consider database connection instead