Deployment Guide

Getting Started

Deploy VyXlo CSP from zero to production. The full stack runs as 7 Docker containers with a single command. No passwords are stored in VyXlo — all identity is delegated to ZITADEL via OAuth 2.0 PKCE.

Prerequisites

- Docker Engine v24.0+
- PostgreSQL 16 with the pgvector extension
- ZITADEL (cloud or self-hosted IdP)
- AI API key (OpenAI / Anthropic / Gemini / Ollama)

Container Stack

docker compose up -d starts all 7 containers. No separate orchestration needed for development or small production deployments.

| Container | Image | Port | Role |
| --- | --- | --- | --- |
| api | Custom Python 3.12 | 8000 | FastAPI — REST, WebSocket, SSE |
| celery_worker | Same image | | Async task execution (AI, email, cleanup) |
| celery_beat | Same image | | Scheduled tasks (expiry, digests, retention) |
| flower | Same image | 5555 | Celery task monitoring dashboard |
| postgres | pgvector/pgvector:pg16 | 5432 | Primary DB — full-text + vector search |
| redis | redis:7-alpine | 6379 | Cache, Celery queue & result backend |
| minio | minio/minio | 9000 / 9001 | S3-compatible object storage |
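The stack above corresponds to a compose file along these lines. This is an illustrative sketch only: service names and images follow the table, but the build context, Celery module path (`app.worker`), volumes, and healthchecks are placeholders; the repository's own `docker-compose.yml` is authoritative.

```yaml
# Illustrative excerpt; volumes, healthchecks, and most env vars omitted.
services:
  api:
    build: .
    ports: ["8000:8000"]
    env_file: .env
    depends_on: [postgres, redis, minio]
  celery_worker:
    build: .
    command: celery -A app.worker worker   # module path is a placeholder
    env_file: .env
  postgres:
    image: pgvector/pgvector:pg16
    ports: ["5432:5432"]
  redis:
    image: redis:7-alpine
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports: ["9000:9000", "9001:9001"]
```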

Quick Start

1. Clone and Configure

```shell
git clone https://github.com/vyxlo/vyxlo-dms
cd vyxlo-dms
cp .env.example .env
```
2. Launch All Services

```shell
docker compose up -d
# Verify containers are healthy:
docker compose ps
```
3. Run Database Migrations

```shell
docker compose exec api alembic upgrade head
```
4. Verify Health Endpoints

```shell
# Liveness probe
curl http://localhost:8000/api/v1/health
# → {"status": "ok"}

# Readiness probe (checks DB, Redis, MinIO)
curl http://localhost:8000/api/v1/health/ready
```

Environment Variables

Full reference available in .env.example. All required variables must be set before launching containers.

| Variable | Description |
| --- | --- |
| SECRET_KEY | 64-character hex secret for JWT signing |
| DATABASE_URL | PostgreSQL async connection string |
| REDIS_URL | Redis connection string |
| MINIO_ENDPOINT | MinIO host:port |
| MINIO_ACCESS_KEY | MinIO access key |
| MINIO_SECRET_KEY | MinIO secret key |
| MINIO_BUCKET_NAME | Target bucket name |
| ZITADEL_ISSUER | ZITADEL instance URL |
| ZITADEL_AUDIENCE | Expected JWT audience claim |
| ZITADEL_CLIENT_ID | OIDC client application ID |
| ALLOWED_ORIGINS | Comma-separated CORS origins |
| ENABLE_AI_FEATURES | Toggle AI pipeline (true / false) |
| AI_PROCESS_ON_UPLOAD | Auto-process on upload (true / false) |
| OPENAI_API_KEY | OpenAI key (if using OpenAI provider) |
| ANTHROPIC_API_KEY | Anthropic key (if using Anthropic provider) |
| ENABLE_EMAIL_NOTIFICATIONS | Toggle email delivery (true / false) |
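For orientation, a filled-in `.env` might look like the sketch below. Every value here is a placeholder (hostnames assume the compose service names; the `postgresql+asyncpg` scheme is an assumption based on the async connection string noted above); `.env.example` remains the authoritative reference.

```
SECRET_KEY=<64-char-hex>                  # e.g. generate with: openssl rand -hex 32
DATABASE_URL=postgresql+asyncpg://vyxlo:changeme@postgres:5432/vyxlo
REDIS_URL=redis://redis:6379/0
MINIO_ENDPOINT=minio:9000
MINIO_ACCESS_KEY=<access-key>
MINIO_SECRET_KEY=<secret-key>
MINIO_BUCKET_NAME=vyxlo-documents
ZITADEL_ISSUER=https://your-instance.zitadel.cloud
ZITADEL_AUDIENCE=<expected-audience>
ZITADEL_CLIENT_ID=<oidc-client-id>
ALLOWED_ORIGINS=http://localhost:3000
ENABLE_AI_FEATURES=true
AI_PROCESS_ON_UPLOAD=true
OPENAI_API_KEY=<openai-key>
ENABLE_EMAIL_NOTIFICATIONS=false
```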

Authentication — ZITADEL + PKCE

VyXlo delegates all identity management to ZITADEL using the OAuth 2.0 Authorization Code flow with PKCE. No passwords are stored inside VyXlo. All API calls require an Authorization: Bearer <access_token> header.
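For a custom client driving the PKCE flow, the verifier/challenge pair can be generated as follows. This is a standalone sketch per RFC 7636 (S256 method); it uses no VyXlo- or ZITADEL-specific API.

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Return a (code_verifier, code_challenge) pair for the S256 method."""
    # 32 random bytes -> 43-char base64url verifier (RFC 7636 allows 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # Challenge is the base64url-encoded SHA-256 digest of the verifier, unpadded
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The client sends `code_challenge` (with `code_challenge_method=S256`) on the authorize request, then the plain `code_verifier` on the token exchange, so the secret never crosses the front channel.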

Option A: SSO Federation

Configure ZITADEL to federate with your existing IdP (SAML 2.0, OIDC, or LDAP/AD). Users log in once — VyXlo accepts federated tokens.

Option B: Embedded Auth

Use ZITADEL's hosted login UI in a redirect flow. VyXlo receives and validates the access token against the ZITADEL JWKS endpoint.

Option C: Service Account

For backend-to-backend integrations: provision a ZITADEL service account with a JSON key. Exchange the key for a JWT and call VyXlo APIs on behalf of a machine user.

Document Ingestion Pipeline

File upload is a two-step process: first create the metadata record (returns a document ID), then upload the file bytes. This pattern allows metadata to be created and queued before large files are transferred.

1. Create document record

```http
POST /api/v1/documents
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "title": "Q4 Financial Report",
  "document_type": "FINANCIAL",
  "folder_id": 17
}

# → { "id": 1042, "status": "DRAFT", ... }
```

2. Upload file content

```http
POST /api/v1/documents/1042/upload
Authorization: Bearer <access_token>
Content-Type: multipart/form-data

file=@q4-financials.pdf

# Creates version 1; triggers AI processing pipeline
```

3. Poll for AI completion

```http
GET /api/v1/documents/1042

# ai_processed: false → queued for async processing
# ai_processed: true  → classification + summary ready
```

4. Retrieve AI fields

```http
GET /api/v1/ai/documents/1042

# → {
#   "ai_classification": "FINANCIAL_REPORT",
#   "ai_confidence": 0.97,
#   "ai_summary": "Quarterly results showing 18% revenue growth...",
#   "ai_keywords": ["revenue", "Q4", "board", "growth"],
#   "chunk_index_status": "INDEXED"
# }
```
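The poll step above can be wrapped in a small client-side helper. This is a sketch: `fetch_doc` is a placeholder for any callable that performs the authenticated `GET /api/v1/documents/{id}` and returns the parsed JSON record.

```python
import time


def wait_for_ai(fetch_doc, doc_id, timeout=120.0, interval=2.0):
    """Poll the document record until its ai_processed flag flips to true.

    fetch_doc: callable taking a document ID and returning the parsed JSON dict.
    Raises TimeoutError if processing does not finish within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        doc = fetch_doc(doc_id)
        if doc.get("ai_processed"):
            return doc
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"document {doc_id} not AI-processed within {timeout}s")
```

For large batches, a webhook or queue-based notification would scale better than polling, but a bounded poll like this is adequate for interactive uploads.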

Kubernetes Deployment

Each component maps to a standard Kubernetes workload. The API and Celery worker tiers are fully stateless and horizontally scalable.

| Component | Workload | Notes |
| --- | --- | --- |
| FastAPI API | Deployment | Stateless — scale horizontally |
| Celery Worker | Deployment | Scale by concurrency requirements |
| Celery Beat | Deployment (replicas: 1) | Single scheduler instance only |
| PostgreSQL | StatefulSet | Persistent volume required |
| Redis | StatefulSet or managed | Can use ElastiCache, Upstash, etc. |
| MinIO | StatefulSet or replace | Can replace with AWS S3 or GCS |
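As an orientation, the API tier's Deployment might look like the sketch below. The image name, Secret name, and replica count are placeholders; the probe paths reuse the documented health endpoints.

```yaml
# Illustrative sketch; image, secret wiring, and resources are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vyxlo-api
spec:
  replicas: 3
  selector:
    matchLabels: { app: vyxlo-api }
  template:
    metadata:
      labels: { app: vyxlo-api }
    spec:
      containers:
        - name: api
          image: registry.example.com/vyxlo/api:latest
          ports:
            - containerPort: 8000
          envFrom:
            - secretRef: { name: vyxlo-env }   # mirrors .env contents
          livenessProbe:
            httpGet: { path: /api/v1/health, port: 8000 }
          readinessProbe:
            httpGet: { path: /api/v1/health/ready, port: 8000 }
```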