System Requirements

This reference outlines the minimum and recommended system requirements for deploying and running Querri. Requirements vary based on deployment size, usage patterns, and expected user load.

Minimum:

  • CPU: 4 cores
  • RAM: 8 GB
  • Storage: 50 GB
  • OS: Linux (Ubuntu 20.04+, Debian 11+, or RHEL 8+)
  • Docker: 24.0+
  • Docker Compose: 2.20+

Recommended:

  • CPU: 8+ cores
  • RAM: 16+ GB
  • Storage: 100+ GB SSD
  • OS: Linux (Ubuntu 22.04 LTS recommended)
  • Docker: Latest stable
  • Network: Dedicated server with static IP

CPU

Minimum:

  • Cores: 4 physical cores or 8 vCPUs
  • Architecture: x86_64 (AMD64)
  • Clock Speed: 2.0 GHz+

Recommended:

  • Cores: 8-16 physical cores or 16-32 vCPUs
  • Architecture: x86_64 (AMD64)
  • Clock Speed: 2.5 GHz+
  • Type: Modern Intel Xeon or AMD EPYC processors

Scaling Guidelines:

  • Small deployment (1-10 users): 4-8 cores
  • Medium deployment (10-50 users): 8-16 cores
  • Large deployment (50-200 users): 16-32 cores
  • Enterprise (200+ users): 32+ cores with horizontal scaling

CPU-Intensive Operations:

  • Data processing and transformations
  • AI model inference
  • Chart rendering
  • Background job processing
  • Database queries
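
To confirm a host meets the core count and clock speed targets above, a quick check with standard Linux tools:

```bash
# Report the number of available CPU cores
nproc

# Show architecture, model, and clock speed details
lscpu | grep -E 'Architecture|^CPU\(s\)|Model name|MHz'
```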

Memory (RAM)

Minimum:

  • Size: 8 GB
  • Type: DDR4 or newer
  • Use Case: Development, testing, very light production

Recommended:

  • Size: 16-32 GB
  • Type: DDR4 or DDR5
  • ECC: Recommended for production reliability

Scaling Guidelines:

  • Small deployment (1-10 users): 8-16 GB
  • Medium deployment (10-50 users): 16-32 GB
  • Large deployment (50-200 users): 32-64 GB
  • Enterprise (200+ users): 64+ GB

Memory Breakdown (Typical Medium Deployment):

  • MongoDB: 4-8 GB
  • Redis: 2-4 GB
  • Server API (4 replicas): 4-8 GB
  • Hub Service: 1-2 GB
  • Web App: 1-2 GB
  • Supporting Services: 1-2 GB
  • Operating System: 2-4 GB
  • Buffer: 2-4 GB

Memory-Intensive Operations:

  • Large dataset processing
  • In-memory caching
  • Concurrent user sessions
  • AI model operations
  • Database indices
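
A quick way to see how much memory is available, and how it is being spent per container once the stack is running:

```bash
# Show total, used, and available system memory
free -h

# Show live memory usage for each running container
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
```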

Storage

Minimum:

  • Size: 50 GB
  • Type: SSD preferred
  • IOPS: 1,000+

Recommended:

  • Size: 100+ GB SSD (NVMe preferred)
  • IOPS: 3,000+ sustained
  • Throughput: 100+ MB/s
  • Redundancy: RAID 1 or RAID 10
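
To verify a volume can sustain the IOPS targets above, a benchmark sketch using fio (the test directory and file size are illustrative):

```bash
# Random-read IOPS test; run against the disk that will hold Docker data
fio --name=iops-test --directory=/var/lib/docker \
    --rw=randread --bs=4k --size=1G --iodepth=32 \
    --runtime=60 --time_based --ioengine=libaio --group_reporting

# Clean up the test file afterwards
rm -f /var/lib/docker/iops-test.0.0
```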

Storage Breakdown

System and Application (10-20 GB):

  • Docker images
  • Application code
  • Operating system
  • Logs

Database (Variable):

  • MongoDB data files
  • Database indices
  • Growth rate depends on usage

File Storage (Variable):

  • Uploaded files (if FILE_STORAGE=LOCAL)
  • Temporary files
  • Exports and reports

Logs and Backups (10-20 GB):

  • Application logs
  • Database backups
  • Archive data

Expected Data Growth:

  • Light usage: 5-10 GB/month
  • Moderate usage: 20-50 GB/month
  • Heavy usage: 100+ GB/month

Storage Recommendations by User Count:

  • 1-10 users: 50-100 GB
  • 10-50 users: 100-500 GB
  • 50-200 users: 500 GB - 2 TB
  • 200+ users: 2+ TB

Storage Type Recommendations:

  • Development: Standard SSD
  • Production: NVMe SSD
  • High-performance: NVMe RAID 10
  • Archive: Standard HDD (secondary storage)

External Storage: For production deployments, consider Amazon S3 or equivalent:

  • Unlimited scalability
  • Reduced local storage requirements
  • Better disaster recovery
  • Cost-effective for large datasets

Supported Linux Distributions

Ubuntu:

  • Minimum: Ubuntu 20.04 LTS
  • Recommended: Ubuntu 22.04 LTS
  • Notes: Best tested, most documentation available

Debian:

  • Minimum: Debian 11 (Bullseye)
  • Recommended: Debian 12 (Bookworm)

Red Hat Enterprise Linux / CentOS:

  • Minimum: RHEL 8 / Rocky Linux 8
  • Recommended: RHEL 9 / Rocky Linux 9

Amazon Linux:

  • Supported: Amazon Linux 2023
  • Notes: Optimized for AWS deployments

macOS

Supported:

  • macOS 12 (Monterey) or newer
  • Docker Desktop for Mac required

Notes:

  • Suitable for development and testing
  • Not recommended for production
  • Performance may vary

Windows

Supported:

  • Windows 10/11 Pro or Enterprise
  • Windows Server 2019/2022
  • WSL 2 (Windows Subsystem for Linux) required

Notes:

  • Requires Docker Desktop for Windows
  • WSL 2 backend required
  • Development and testing only
  • Not recommended for production

Kernel Version:

  • Linux kernel 5.4+ (5.15+ recommended)

File System:

  • ext4 (recommended)
  • XFS (for large databases)
  • Avoid NFS for database storage

System Limits:

Increase the file descriptor limit for the current session:

```bash
ulimit -n 65536
```

To make the limit persistent, add the following to /etc/security/limits.conf:

```
* soft nofile 65536
* hard nofile 65536
```

Swap:

  • Minimum: 2 GB
  • Recommended: Equal to RAM for systems with <16GB RAM
  • Recommended: 8-16 GB for systems with >16GB RAM
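
A minimal sketch for adding a swap file on Linux (the 8 GB size is illustrative; size it per the guidance above):

```bash
# Create, secure, and enable an 8 GB swap file
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```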

Docker Engine

Minimum Version: 24.0.0
Recommended Version: Latest stable (25.0+)

Installation:

```bash
# Ubuntu/Debian
curl -fsSL https://get.docker.com | sh

# Verify installation
docker --version
```

Configuration (/etc/docker/daemon.json):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2"
}
```
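
After editing daemon.json, restart Docker so the settings take effect:

```bash
sudo systemctl restart docker
```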

Docker Compose

Minimum Version: 2.20.0
Recommended Version: Latest stable (2.24+)

Installation:

```bash
# Usually included with Docker Desktop
# For Linux servers:
sudo apt-get install docker-compose-plugin

# Verify installation
docker compose version
```

Docker Resource Allocation

Minimum:

  • CPU: 4 cores allocated to Docker
  • Memory: 8 GB allocated to Docker
  • Disk: 50 GB

Recommended:

  • CPU: 8+ cores allocated
  • Memory: 16+ GB allocated
  • Disk: 100+ GB

Docker Desktop Settings (macOS/Windows):

  • Settings → Resources
  • Allocate at least 8 GB RAM
  • Allocate at least 4 CPU cores
  • Allocate at least 50 GB disk

Required Ports (must be open):

| Port | Service           | Purpose                | Exposure      |
|------|-------------------|------------------------|---------------|
| 80   | HTTP              | Web application        | Public        |
| 443  | HTTPS             | Secure web application | Public        |
| 8080 | Traefik Dashboard | Reverse proxy admin    | Internal only |

Internal Ports (Docker network only):

  • 27017: MongoDB
  • 6379: Redis
  • 8000: FastAPI server
  • 5173: SvelteKit dev server (development)
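
Before deploying, confirm that nothing else on the host is already listening on the public ports:

```bash
# List any processes bound to ports 80 or 443
sudo ss -tlnp | grep -E ':80 |:443 '
```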

Inbound Rules:

```bash
# Allow HTTP
sudo ufw allow 80/tcp

# Allow HTTPS
sudo ufw allow 443/tcp

# Allow SSH (for management)
sudo ufw allow 22/tcp

# Enable firewall
sudo ufw enable
```

Outbound Rules:

  • Allow all outbound (default)
  • Required for:
    • Package updates
    • AI API calls (OpenAI/Azure)
    • External integrations
    • Email sending
    • License validation

Bandwidth

Minimum:

  • Download: 10 Mbps
  • Upload: 5 Mbps

Recommended:

  • Download: 100 Mbps
  • Upload: 50 Mbps

Enterprise:

  • Download: 1 Gbps
  • Upload: 500 Mbps

Bandwidth Usage Estimates:

  • Per user session: 1-5 Mbps
  • File uploads: Variable (depends on file size)
  • AI requests: 0.1-1 Mbps per request
  • Background sync: 1-10 Mbps

Domain and DNS

Required:

  • Fully qualified domain name (FQDN)
  • DNS A record pointing to server IP
  • SSL/TLS certificate (Let’s Encrypt recommended)

Examples:

  • app.yourcompany.com
  • querri.yourcompany.com
  • analytics.yourcompany.com

Wildcard Support (optional):

  • *.querri.yourcompany.com for multi-tenant
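
Once the A record is in place, resolution can be verified with dig, using one of the example hostnames above:

```bash
# Should print the server's public IP address
dig +short app.yourcompany.com
```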

WorkOS

  • Purpose: User authentication and SSO
  • Requirement: Active WorkOS account
  • Pricing: Varies by plan
  • Setup: https://workos.com

Network Requirements:

  • Outbound HTTPS to api.workos.com
  • OAuth callback URL accessible
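
A quick way to confirm the server can reach WorkOS over HTTPS:

```bash
# Print the HTTP status code returned by the WorkOS API host
curl -sS -o /dev/null -w '%{http_code}\n' https://api.workos.com
```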

AI Provider

Purpose: AI-powered data analysis and chat

Option 1: OpenAI

  • Requirement: OpenAI account and API key
  • Setup: https://platform.openai.com

Option 2: Azure OpenAI

  • Requirement: Azure subscription with OpenAI access
  • Pricing: Varies by region
  • Setup: Azure Portal

Network Requirements:

  • Outbound HTTPS to api.openai.com (OpenAI)
  • Outbound HTTPS to your Azure endpoint (Azure OpenAI)
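
A connectivity check against the OpenAI API (assumes your key is in the OPENAI_API_KEY environment variable; the Azure check is analogous against your own endpoint):

```bash
# Expect HTTP 200 and a JSON list of available models
curl -sS https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```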

API Rate Limits:

  • Consider your usage tier
  • Monitor quota and rate limits
  • Plan for peak usage

Stripe

  • Purpose: Subscription billing and payments
  • Requirement: Stripe account (if using billing)
  • Setup: https://stripe.com


SendGrid

  • Purpose: Transactional emails and notifications
  • Requirement: SendGrid account (if using email)
  • Alternative: SMTP server
  • Setup: https://sendgrid.com


Amazon S3

  • Purpose: Scalable file storage
  • Requirement: AWS account (if using S3 storage)
  • Alternative: Local storage
  • Setup: AWS Console


Optional Integrations:

  • Sentry: Error tracking
  • Segment: Analytics
  • Userflow: User onboarding

Supported Browsers

Recommended:

  • Google Chrome 120+
  • Microsoft Edge 120+
  • Safari 17+
  • Firefox 121+

Minimum:

  • Chrome 100+
  • Edge 100+
  • Safari 15+
  • Firefox 100+

Mobile Browsers:

  • Chrome Mobile (Android)
  • Safari Mobile (iOS 15+)

Not Supported:

  • Internet Explorer (any version)
  • Chrome <100
  • Safari <15

Required Browser Features:

  • JavaScript enabled
  • Cookies enabled
  • WebSockets support
  • Local storage support
  • Modern CSS (Grid, Flexbox)

Database Sizing (MongoDB)

Working Set:

  • Should fit in RAM for optimal performance
  • Estimate: (Number of projects × 10 MB) + (Number of files × average file size)

Index Size:

  • Approximately 10-20% of data size
  • Must fit in RAM for best performance

Example Sizing:

  • Small (100 projects, 1,000 files): 2-4 GB database
  • Medium (1,000 projects, 10,000 files): 20-40 GB database
  • Large (10,000 projects, 100,000 files): 200-400 GB database
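
Actual data and index sizes can be read from MongoDB itself; a sketch using mongosh (the mongodb service name is an assumption, substitute your Compose service name):

```bash
# Report data and index sizes in MB (scale factor of 1024*1024)
docker compose exec mongodb mongosh --quiet --eval '
  const s = db.stats(1024 * 1024);
  print(`dataSize: ${s.dataSize} MB, indexSize: ${s.indexSize} MB`);
'
```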

Recommendations:

  • 8 GB RAM for small deployments
  • 16 GB RAM for medium deployments
  • 32+ GB RAM for large deployments

When to scale vertically:

  • Single server not fully utilized
  • Simpler management
  • Cost-effective for small-medium deployments

Upgrade path:

  1. Add more RAM (easiest)
  2. Add faster storage (SSD → NVMe)
  3. Add more CPU cores
  4. Upgrade to faster CPUs

When to scale horizontally:

  • Single server at capacity
  • High availability required
  • Geographic distribution needed
  • Very large deployments (200+ users)

Architecture:

  • Multiple server-api replicas (already supported)
  • Load balancer (Traefik or external)
  • Separate database server
  • Shared storage (S3)
  • Redis cluster (optional)

Deployment patterns:

  • Database on dedicated server
  • API servers on multiple instances
  • Load balancer distributing traffic
  • Shared file storage (S3 required)

Performance Expectations

Small Deployment (4 cores, 8 GB RAM):

  • Concurrent users: 10-20
  • Projects: Up to 500
  • Query response: <2 seconds
  • File upload: 10 MB/s

Medium Deployment (8 cores, 16 GB RAM):

  • Concurrent users: 50-100
  • Projects: Up to 5,000
  • Query response: <1 second
  • File upload: 50 MB/s

Large Deployment (16 cores, 32 GB RAM):

  • Concurrent users: 200-500
  • Projects: Up to 50,000
  • Query response: <500ms
  • File upload: 100 MB/s

High Availability Requirements

Database:

  • MongoDB replica set (3+ nodes)
  • Automated failover
  • Geographic distribution

Application:

  • Multiple server instances
  • Load balancing
  • Health checks
  • Automatic restart

Storage:

  • RAID for local storage
  • S3 for high availability
  • Regular backups

Network:

  • Redundant internet connections
  • DDoS protection
  • CDN for static assets

Monitoring:

  • Uptime monitoring
  • Performance monitoring
  • Alerting
  • Log aggregation

Backup Requirements

Database Backups:

  • Frequency: Daily minimum
  • Retention: 30 days
  • Storage: Off-site
  • Testing: Monthly restore tests
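
A minimal daily backup sketch with mongodump (the connection URI and backup path are placeholders):

```bash
# Dump the database to a compressed, date-stamped archive
mongodump --uri="mongodb://localhost:27017" \
  --archive=/backups/querri-$(date +%F).archive --gzip

# Prune archives older than 30 days to match the retention policy
find /backups -name 'querri-*.archive' -mtime +30 -delete
```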

File Backups:

  • Frequency: Daily (if LOCAL storage)
  • Retention: 30 days
  • Note: S3 has built-in redundancy

Configuration Backups:

  • Version control for .env-prod
  • Secure storage
  • Document all customizations

Disaster Recovery

RTO (Recovery Time Objective):

  • Small deployment: 4 hours
  • Medium deployment: 2 hours
  • Large deployment: 1 hour
  • Enterprise: 15 minutes

RPO (Recovery Point Objective):

  • Standard: 24 hours (daily backups)
  • Enhanced: 1 hour (continuous replication)
  • Critical: Near-zero (synchronous replication)

Compliance Considerations

Data Residency:

  • Choose appropriate AWS region (if using S3)
  • Configure database location
  • Ensure services comply with regulations
  • Document data flows

Regulatory Standards:

  • SOC 2 compliance (if required)
  • GDPR compliance (for EU users)
  • HIPAA compliance (for healthcare data)
  • PCI DSS (if processing payments)

Audit and Logging:

  • Enable comprehensive logging
  • Log retention policies
  • Access control logging
  • Regular security audits

Pre-Deployment Checklist

  • CPU meets minimum requirements (4+ cores)
  • RAM meets minimum requirements (8+ GB)
  • Storage meets minimum requirements (50+ GB SSD)
  • Network bandwidth adequate (10+ Mbps)
  • Operating system installed and updated
  • Docker Engine 24.0+ installed
  • Docker Compose 2.20+ installed
  • Required ports open (80, 443)
  • Firewall configured
  • Domain name registered and configured
  • SSL certificate obtained (Let’s Encrypt recommended)
  • WorkOS account created and configured
  • AI provider configured (OpenAI or Azure)
  • Optional services configured (Stripe, SendGrid, etc.)
  • .env-prod file created with all required variables
  • Credentials generated and secured
  • File permissions set correctly
  • Docker volumes configured
  • Test deployment in staging environment
  • Verify all services start correctly
  • Test authentication flow
  • Test AI functionality
  • Test file uploads
  • Load testing completed (if production)