Core Engine v4.0

The AI reads it so your team doesn't have to.

Engineered for high-velocity document intelligence. VyXlo parses, categorizes, and indexes your entire institutional knowledge base, processing each document in milliseconds.

vyxlo_engine_v4.py

# 1. Upload document → triggers AI pipeline
POST /api/v1/documents/1042/upload

# 2. Pipeline runs async (Celery + LangChain)
{
  "ai_classification": "FINANCIAL_REPORT",
  "ai_confidence": 0.97,
  "ai_summary": "Q4 results: 18% growth...",
  "ai_keywords": ["revenue", "Q4"],
  "chunk_index_status": "INDEXED"
}

# 3. Ask the document anything (RAG + SSE)
POST /api/v1/chat/document/1042
// Streams: token → token → done + citations
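The three calls above can be driven from any HTTP client; the only non-obvious part is consuming the step-3 stream. Here is a minimal sketch of a parser for that stream, assuming a conventional SSE shape where each `token` event carries a `text` payload and the final `done` event carries `citations` (the event and field names are assumptions based on the snippet above, not a documented wire format):

```python
import json

def parse_sse_stream(lines):
    """Accumulate an answer from SSE lines such as:
       event: token / data: {"text": "..."}
    ending with:
       event: done / data: {"citations": [...]}"""
    answer, citations = [], []
    event = None
    for raw in lines:
        line = raw.strip()
        if line.startswith("event:"):
            # Remember which event the next data line belongs to.
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            payload = json.loads(line.split(":", 1)[1].strip())
            if event == "token":
                answer.append(payload.get("text", ""))
            elif event == "done":
                citations = payload.get("citations", [])
    return "".join(answer), citations
```

Feeding it a captured stream yields the full answer plus its citations in one pass, which is usually all a chat UI needs.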

Live Pulse
Autonomous Sorting

Precision Classification

Neural Success Rate: 99.82%
Legal: Contracts, NDAs
Financial: Invoices, P&L
Medical: Records, Prescriptions
Logistics: BOL, Manifests
Engineering: Specs, CAD Notes
HR: Offers, Appraisals
Procurement: POs, RFQs
Real Estate: Deeds, Surveys
Compliance: Audits, Reports
R&D: Patents, Research
// Subsystem Architecture

Intelligence Stack

Module 01

Entity Extraction

Identifies people, organizations, dates, and locations across any document type. Extracted as structured JSON and stored on the document object for downstream filtering and search.

# ai_entities field on Document object
{
  "people": ["Sarah K.", "J. Reinholt"],
  "organizations": ["Kanz Holdings"],
  "dates": ["2026-03-15", "2026-12-31"],
  "locations": ["Dubai, UAE"]
}
Module 02

Multi-Provider LLMs

Pluggable LLM backends via LangChain. Configure the provider per deployment with environment variables, and toggle AI features per organization with a feature flag.

OpenAI (GPT-4, Embeddings): Classification + Embeddings
Anthropic (Claude): Summarization + Q&A
Google Gemini: Classification
Ollama (local models): Air-gapped / on-prem
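Environment-driven provider selection can be sketched as a small factory. The variable names `VYXLO_LLM_PROVIDER` / `VYXLO_LLM_MODEL` and the registry contents below are illustrative assumptions, not documented configuration; in the real stack each entry would construct the corresponding LangChain chat model:

```python
import os

# Hypothetical registry: provider name -> assumed default model id.
PROVIDERS = {
    "openai": "gpt-4",
    "anthropic": "claude-3-5-sonnet",
    "gemini": "gemini-1.5-pro",
    "ollama": "llama3",
}

def select_provider(env=None):
    """Pick the LLM backend from the deployment environment,
    falling back to OpenAI when nothing is configured."""
    env = os.environ if env is None else env
    name = env.get("VYXLO_LLM_PROVIDER", "openai").lower()
    if name not in PROVIDERS:
        raise ValueError(f"unknown LLM provider: {name}")
    # Allow an explicit model override, else use the registry default.
    return name, env.get("VYXLO_LLM_MODEL", PROVIDERS[name])
```

Keeping the registry in one place means an air-gapped deployment only has to set `VYXLO_LLM_PROVIDER=ollama`; no code changes per environment.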
Module 03

Semantic Search

1,536-dimension pgvector embeddings find conceptually related documents without requiring exact keyword matches. Chat against any document with RAG over SSE streaming.

Index Type: HNSW (pgvector)
Q&A Protocol: RAG + SSE Stream
Endpoint: POST /chat/document/{id}
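The HNSW + pgvector combination above maps to two SQL statements. A sketch with assumed table and column names (`document_chunks`, `embedding`, `document_id`); the `<=>` operator is pgvector's cosine-distance operator, which pairs with an HNSW index built over `vector_cosine_ops`:

```python
# Hypothetical schema names; only the pgvector syntax is standard.
CREATE_INDEX = """
CREATE INDEX ON document_chunks
USING hnsw (embedding vector_cosine_ops);
"""

# Top-k retrieval for RAG: nearest chunks of one document to the
# query embedding, scored as cosine similarity (1 - distance).
TOP_K_QUERY = """
SELECT chunk_id, text,
       1 - (embedding <=> %(query_vec)s) AS cosine_similarity
FROM document_chunks
WHERE document_id = %(doc_id)s
ORDER BY embedding <=> %(query_vec)s
LIMIT %(k)s;
"""
```

Ordering by the same `<=>` expression used in the SELECT lets the planner serve the query straight from the HNSW index.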

Ready to ignite your AI pipeline?

Deploy in minutes. Process thousands of documents per hour on day one.