Architecture Overview

Overview

Ibex is a multi-service data platform deployed on AWS EC2 behind Traefik, with two React UIs served via S3 + CloudFront. It provides federated SQL analytics, BI reporting, AI-powered data chat, and pipeline management across heterogeneous data sources.

Deployment Topology

                   Internet

          ┌───────────┴───────────┐
          │                       │
   CloudFront (CDN)         api.triviz.cloud
   S3-hosted SPAs             EC2 Instance
          │                       │
   ┌──────┴──────┐         ┌──────┴──────┐
   │  ibex.      │         │   Traefik   │ :80/:443
   │  triviz.    │         │  TLS proxy  │ Let's Encrypt
   │  cloud      │         └──────┬──────┘
   │  (Config UI)│                │
   └─────────────┘         Routes by PathPrefix / Host
   ┌─────────────┐                │
   │  bi.triviz  │         (see Routing table below)
   │  .cloud     │
   │  (BI UI)    │
   └─────────────┘

DNS / Domains

| Domain | Purpose |
| --- | --- |
| api.triviz.cloud | All backend APIs (Traefik entry point) |
| ibex.triviz.cloud | Config UI — pipelines, data sources, monitoring (CloudFront → ibex-platform-ui) |
| bi.triviz.cloud | BI UI — dashboards, charts, reports, AI chat, RAG (CloudFront → ajna-data-platform-ui-lib) |
| listmonk.triviz.cloud | Email marketing (Listmonk, Traefik) |

Services

Application Services

| Service | Language | Port | Image |
| --- | --- | --- | --- |
| ibex-data-platform | Python/FastAPI | 8080 | ghcr.io/ajnacloud-ksj/ibex-data-platform |
| ibex-identity-service | Python/FastAPI | 8090 | ghcr.io/ajnacloud-ksj/ibex-identity-service |
| ibex-analytics-service | Python/FastAPI + DuckDB | 8000 | ghcr.io/ajnacloud-ksj/ibex-analytics-service |
| ibex-bi-backend | Go | 8085 | ghcr.io/ajnacloud-ksj/ibex-bi-backend |
| ibex-ai-service | Python/FastAPI | 8010 | ghcr.io/ajnacloud-ksj/ibex-ai-service |
| ibex-agent-engine | Node.js | 3000 | ghcr.io/ajnacloud-ksj/ibex-agent-engine |
| ibex-listmonk | Go (Listmonk) | 9000 | ghcr.io/ajnacloud-ksj/ibex-listmonk |

Infrastructure Services

| Service | Purpose | Port |
| --- | --- | --- |
| traefik | TLS reverse proxy + routing | 80, 443 |
| postgres-metadata | Central metadata PostgreSQL DB | 5432 (internal), 5433 (host) |
| minio | S3-compatible object storage (Iceberg warehouse, file uploads) | 9000 |
| minio-setup | One-shot: creates warehouse + data-ingestion buckets | — |
| iceberg-rest | Apache Iceberg REST catalog (backed by MinIO) | 8181 |
| vault-secrets | HashiCorp Vault — credential store | 8200 |
| vault-init | One-shot: unseals Vault, seeds credentials | — |
| redpanda | Kafka-compatible event streaming | 9092 |
| watchtower | Auto-redeploys containers when new images are pushed | — |

Container Reference

Startup Order

Containers start in dependency order. The chain ensures each service has its dependencies healthy before it launches:
postgres-metadata  ──► vault-secrets ──► vault-init (one-shot)
                   ──► redpanda
                   ──► listmonk
minio              ──► minio-setup (one-shot) ──► iceberg-rest

postgres + redpanda + minio + vault-init + iceberg-rest + listmonk
  └──► ibex-data-platform
         └──► ibex-analytics-service
                └──► ibex-agent-engine
                       └──► ibex-ai-service
ibex-identity-service + ibex-analytics-service
  └──► ibex-bi-backend
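The dependency chains above form a directed acyclic graph, so a valid launch order can be computed with Python's stdlib `graphlib` — a sketch, with the edges transcribed from the diagram:

```python
from graphlib import TopologicalSorter

# depends_on edges, transcribed from the startup-order diagram above
deps = {
    "vault-secrets": {"postgres-metadata"},
    "vault-init": {"vault-secrets"},
    "redpanda": {"postgres-metadata"},
    "listmonk": {"postgres-metadata"},
    "minio-setup": {"minio"},
    "iceberg-rest": {"minio-setup"},
    "ibex-data-platform": {"postgres-metadata", "redpanda", "minio",
                           "vault-init", "iceberg-rest", "listmonk"},
    "ibex-analytics-service": {"ibex-data-platform"},
    "ibex-agent-engine": {"ibex-analytics-service"},
    "ibex-ai-service": {"ibex-agent-engine"},
    "ibex-bi-backend": {"ibex-identity-service", "ibex-analytics-service"},
}

def startup_order(graph):
    """Return one valid launch order (raises CycleError if the graph has a cycle)."""
    return list(TopologicalSorter(graph).static_order())

order = startup_order(deps)
```

In production the ordering is enforced by docker-compose `depends_on` healthchecks, not by a script; this only illustrates the graph.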

traefik (ibex-traefik)

Image: traefik:v3.1
Ports: 80 (HTTP redirect), 443 (HTTPS)
Purpose: TLS-terminating reverse proxy. All external traffic enters here.
How it works:
  • Reads Docker labels from other containers at runtime via the Docker socket (/var/run/docker.sock)
  • Each container self-registers its own routing rules via traefik.http.routers.* labels — no central config file needed for app routes
  • Automatically provisions and renews Let’s Encrypt TLS certificates via ACME (stored in traefik_letsencrypt volume)
  • Routes requests by Host + PathPrefix to the correct backend container
  • Config template (traefik.yml.tpl) is rendered at startup by render-config.sh into /tmp/traefik.yml

postgres-metadata (postgres-metadata)

Image: pgvector/pgvector:pg15
Port: 5433 (host) → 5432 (container)
Purpose: Central PostgreSQL database, shared by multiple services.
Databases it hosts:
  • metadata_db — main application database (pipelines, data sources, file uploads, business rules, demo data, Vault KV store)
  • listmonk — email marketing database (lists, subscribers, campaigns)
Init scripts run on first boot (via docker-entrypoint-initdb.d/):
  1. 01-init-business-data.sql — business tables, demo MySQL/orders/products/users data
  2. 02-init-vault.sql — creates vault_kv_store table used by Vault as its storage backend
  3. 03-init-bi-metadata.sql — BI metadata schema (reports, connections)
  4. 04-enable-pgvector.sql — enables the pgvector extension for vector similarity search
Uses the pgvector image (PostgreSQL 15 + pgvector extension) to support embedding storage for RAG.
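pgvector's `<=>` operator computes cosine distance between embedding vectors. A stdlib-only sketch of the same math (illustrative only — in production this runs inside PostgreSQL with an index, not in Python):

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as computed by pgvector's `<=>` operator: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def top_k(query, rows, k=3):
    """Nearest neighbours, mirroring `ORDER BY embedding <=> :query LIMIT :k`."""
    return sorted(rows, key=lambda r: cosine_distance(r["embedding"], query))[:k]
```

The equivalent SQL against a pgvector column would be `SELECT id FROM docs ORDER BY embedding <=> '[1,0]' LIMIT 3`.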

redpanda (redpanda)

Image: redpandadata/redpanda:v25.1.7
Ports: 9092 (external Kafka), 29092 (internal Kafka)
Purpose: Kafka-compatible event streaming broker for CDC pipelines.
How it works:
  • ibex-data-platform publishes pipeline events and CDC change records to Redpanda topics
  • Auto-creates topics on first publish (auto_create_topics_enabled=true)
  • Single-node, single-partition setup (sufficient for current load)
  • Data persisted in redpanda_data volume
  • Two listeners: PLAINTEXT://redpanda:29092 for internal container-to-container, OUTSIDE://localhost:9092 for host access
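A CDC change record published to a Redpanda topic is just a JSON envelope with before/after images. The field names below follow the common Debezium-style shape and are an assumption — the exact schema used by ibex-data-platform may differ:

```python
import json
import time

def cdc_record(table, op, before, after):
    """Build a Debezium-style CDC change record (field names are illustrative)."""
    return {
        "source": {"table": table},
        "op": op,            # "c" = create, "u" = update, "d" = delete
        "before": before,    # row image before the change (None for inserts)
        "after": after,      # row image after the change (None for deletes)
        "ts_ms": int(time.time() * 1000),
    }

# Serialized payload as it would be published to a Redpanda topic
payload = json.dumps(cdc_record(
    "orders", "u",
    {"id": 1, "status": "pending"},
    {"id": 1, "status": "shipped"},
))
```

Publishing it would use any Kafka client pointed at `redpanda:29092` (inside the network) or `localhost:9092` (from the host).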

minio (minio)

Image: minio/minio:latest
Ports: 9010 (S3 API), 9011 (console UI)
Purpose: S3-compatible object storage. Stores Iceberg table data (Parquet files) and pipeline ingestion staging files.
Two buckets (created by minio-setup):
  • warehouse/ — Iceberg table data, managed by iceberg-rest. Files are Parquet-format column data for analytics tables
  • data-ingestion/ — CDC and pipeline staging area where ibex-data-platform writes ingested records before they’re committed to Iceberg
Credentials stored in Vault and referenced by iceberg-rest and ibex-data-platform.
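Staging files in data-ingestion/ are typically laid out under date-partitioned keys. A small helper sketching one such layout — the exact key scheme is an assumption, not the platform's documented format:

```python
from datetime import datetime, timezone

def staging_key(pipeline, table, ts=None):
    """Build an object key under the data-ingestion bucket.
    The date-partitioned layout here is illustrative, not the actual scheme."""
    ts = ts or datetime.now(timezone.utc)
    return f"data-ingestion/{pipeline}/{table}/{ts:%Y/%m/%d}/batch-{ts:%H%M%S}.parquet"
```

Date-partitioned prefixes keep S3 `ListObjects` calls cheap when replaying or compacting a single day's batches.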

minio-setup (one-shot)

Image: minio/mc:latest
Purpose: Runs once at startup to create the warehouse and data-ingestion buckets in MinIO. Exits after completion; never restarts.

iceberg-rest (iceberg-rest)

Image: tabulario/iceberg-rest:0.9.0
Port: 8181
Purpose: Apache Iceberg REST catalog. Tracks table metadata (schemas, partition specs, snapshots) for Iceberg tables stored in MinIO.
How it works:
  • ibex-analytics-service (DuckDB) connects to this catalog via the Iceberg REST API to discover and read Iceberg tables
  • The catalog stores table metadata; actual data files live in MinIO warehouse/ as Parquet
  • When ibex-data-platform writes a CDC pipeline record to an Iceberg table, it registers the new snapshot here
  • DuckDB then reads the latest snapshot via the catalog and executes queries directly against the Parquet files in MinIO
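"Reads the latest snapshot via the catalog" means resolving `current-snapshot-id` in the table's metadata JSON (key names per the Iceberg table-metadata spec; the sample values below are made up):

```python
def current_snapshot(table_metadata):
    """Resolve the current snapshot from Iceberg table metadata.
    Key names follow the Iceberg table-metadata JSON spec."""
    sid = table_metadata["current-snapshot-id"]
    return next(s for s in table_metadata["snapshots"] if s["snapshot-id"] == sid)

# Illustrative metadata — real files live under warehouse/ in MinIO
meta = {
    "current-snapshot-id": 2,
    "snapshots": [
        {"snapshot-id": 1, "manifest-list": "s3://warehouse/db/t/metadata/snap-1.avro"},
        {"snapshot-id": 2, "manifest-list": "s3://warehouse/db/t/metadata/snap-2.avro"},
    ],
}
snap = current_snapshot(meta)
```

The snapshot's manifest list then points at the Parquet data files DuckDB actually scans.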

vault-secrets (vault-secrets)

Image: hashicorp/vault:1.15
Port: 8200
Purpose: HashiCorp Vault — secrets manager. Stores all database passwords, MinIO credentials, and API keys at runtime so they’re never hardcoded in config files.
How it works:
  • Uses PostgreSQL (vault_kv_store table in metadata_db) as its storage backend — no separate volume for secret data
  • Starts sealed; vault-init unseals it and seeds initial secrets
  • ibex-data-platform reads credentials from Vault at startup via a root token file mounted from the vault_keys shared volume
  • Vault config is rendered from a template (render-vault-config.sh) on startup
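Reading a credential at startup amounts to an HTTP GET with the root token in the `X-Vault-Token` header. A sketch that builds (but does not send) the request — the KV v2 path layout (`/v1/secret/data/<path>`) and the secret name are assumptions about the actual mount:

```python
import urllib.request

def vault_kv_request(vault_addr, token, secret_path):
    """Build a Vault KV v2 read request (not sent here).
    The mount name 'secret' is an assumption."""
    req = urllib.request.Request(f"{vault_addr}/v1/secret/data/{secret_path}")
    req.add_header("X-Vault-Token", token)
    return req

# In production the token is read from the mounted /vault/keys/root_token file
token = "s.example-root-token"
req = vault_kv_request("http://vault-secrets:8200", token, "postgres")
```

The response body would carry the secret under `data.data`, per the KV v2 API.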

vault-init (one-shot)

Image: hashicorp/vault:1.15
Purpose: Runs once to initialize, unseal, and seed Vault with initial credentials. Writes the root token to the vault_keys volume so ibex-data-platform can read it. Exits after completion.
Secrets it seeds: PostgreSQL password, MySQL password, MinIO credentials, Listmonk API credentials, config manager admin token.

ibex-listmonk (ibex-listmonk)

Image: ghcr.io/ajnacloud-ksj/ibex-listmonk:latest
Port: 9000
Purpose: Email marketing and transactional email platform (Listmonk). Manages mailing lists, subscribers, campaigns, and SMTP delivery.
How it works:
  • Stores all data in the listmonk database on postgres-metadata
  • SMTP configured for Gmail (configurable via env vars)
  • Exposed externally at listmonk.triviz.cloud via Traefik (full app, not just API)
  • File uploads stored in listmonk_uploads volume

ibex-identity-service (ibex-identity-service)

Image: ghcr.io/ajnacloud-ksj/ibex-identity-service:latest
Port: 8090
Purpose: Authentication and user management. Issues and validates JWTs.
How it works:
  • Supports two auth modes (set via IDENTITY_AUTH_MODE):
    • local — users stored in SQLite (/data/ajna-identity.db), passwords hashed locally
    • cognito — delegates auth to AWS Cognito (User Pool ID + Client ID from env)
  • Issues JWT access tokens (default 24h) and refresh tokens (default 7d)
  • Downstream services (ibex-bi-backend, ibex-ai-service) call GET /validate to verify tokens on every request
  • Exposed via Traefik at /auth/* and /users/*
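A JWT is three base64url segments (`header.payload.signature`); the 24h expiry lives in the payload's `exp` claim. A stdlib sketch of the token shape — it builds an unsigned demo token and decodes the payload without verifying anything, which real validation in ibex-identity-service of course must do:

```python
import base64
import json
import time

def decode_jwt_payload(token):
    """Decode a JWT's payload WITHOUT verifying its signature — structure demo only."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def make_demo_jwt(claims):
    """Build an unsigned demo token with the standard three-part shape."""
    seg = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).decode().rstrip("=")
    return f"{seg({'alg': 'none'})}.{seg(claims)}."

# 24h access-token expiry, matching the default above
claims = {"sub": "user-1", "exp": int(time.time()) + 24 * 3600}
token = make_demo_jwt(claims)
```

Services receiving this token don't decode it themselves; they forward it to GET /validate.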

ibex-data-platform (ibex-data-platform)

Image: ghcr.io/ajnacloud-ksj/ibex-data-platform:latest
Port: 8080
Purpose: Config/platform manager. The control plane for data sources, CDC pipelines, business rules, and configurations.
How it works:
  • Registers and manages data sources: MySQL, PostgreSQL, Iceberg, S3, REST APIs
  • Configures CDC (Change Data Capture) pipelines using Redpanda as the message bus
  • Syncs registered sources to ibex-analytics-service so DuckDB can attach them as catalogs
  • Reads DB credentials from Vault at startup via the root token file mounted at /vault/keys/root_token
  • Stores all state in postgres-metadata (metadata_db)
  • Exposed via Traefik at /api/*, /configs/*, /business-configs/*, /health, /metrics (priority 10 — catch-all)
  • UI: ibex.triviz.cloud (Config UI — ibex-platform-ui)

ibex-analytics-service (ibex-analytics-service)

Image: ghcr.io/ajnacloud-ksj/ibex-analytics-service:latest
Port: 8000
Purpose: Federated SQL execution engine. Embeds DuckDB and attaches heterogeneous data sources as virtual catalogs.
How it works:
  • On startup, fetches the data source registry from ibex-data-platform (GET /api/data-sources)
  • Attaches each registered source to DuckDB:
    • MySQL → DuckDB MySQL scanner extension
    • PostgreSQL → DuckDB Postgres scanner extension
    • Iceberg → Iceberg REST catalog → MinIO Parquet files (via S3FileIO)
    • S3/CSV/Parquet → direct file scan
  • Queries can reference multiple sources in one SQL statement: SELECT * FROM mysql_business.orders JOIN postgres_metadata.users
  • Internal only — no Traefik label, only reachable inside the ajna Docker network
  • Called by: ibex-bi-backend (report/analytics queries), ibex-ai-service (direct SQL), ibex-agent-engine (agent SQL execution)
  • Protected by an API key (ANALYTICS_SERVICE_API_KEY)
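A caller such as ibex-bi-backend reaches this service with a POST carrying the API key. A sketch that builds (but does not send) such a request — the `/analytics/query` path matches the data-flow diagrams below, while the JSON body shape (`{"sql": ...}`) and the `X-API-Key` header name are assumptions:

```python
import json
import urllib.request

def analytics_query_request(base_url, api_key, sql):
    """Build a federated-SQL request to ibex-analytics-service (not sent here).
    Body shape and header name are illustrative assumptions."""
    body = json.dumps({"sql": sql}).encode()
    req = urllib.request.Request(
        f"{base_url}/analytics/query", data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("X-API-Key", api_key)  # value of ANALYTICS_SERVICE_API_KEY
    return req

# One statement spanning two attached catalogs
sql = ("SELECT o.id, u.name FROM mysql_business.orders o "
       "JOIN postgres_metadata.users u ON o.user_id = u.id")
req = analytics_query_request("http://ibex-analytics-service:8000", "secret", sql)
```

Note the container hostname: the service is only resolvable inside the ajna Docker network.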

ibex-bi-backend (ibex-bi-backend)

Image: ghcr.io/ajnacloud-ksj/ibex-bi-backend:latest
Port: 8085
Purpose: Go-based BI metadata service. The primary API for the BI UI.
How it works:
  • Validates every request by calling ibex-identity-service (GET /validate) — acts as an auth gateway
  • Stores BI metadata in SQLite (/data/ajna-bi-metadata.db, persisted in the ajna_bi_metadata volume): reports, database connections, dashboards, charts, RAG documents
  • Proxies SQL execution to ibex-analytics-service for report preview and analytics queries
  • Exposes CRUD for: reports, metadata, database connections, dashboards, charts, RAG knowledge base
  • Exposed via Traefik at /api/metadata, /api/reports, /api/analytics, /api/database-connections, /internal/data-sources, /api/dashboards, /api/charts, /api/rag (priority 90)
  • UI: bi.triviz.cloud (BI UI — ajna_data_platform_ui_lib)

ibex-ai-service (ibex-ai-service)

Image: ghcr.io/ajnacloud-ksj/ibex-ai-service:latest
Port: 8010
Purpose: AI conversation orchestrator. Manages chat sessions and routes user messages to the agent engine.
How it works:
  • Receives chat messages from the BI UI (POST /api/chat)
  • Manages conversation history (CRUD via /api/conversations)
  • Forwards each user message to ibex-agent-engine (POST /api/agents/{name}/query)
  • Validates JWTs via ibex-identity-service on every request
  • Streams the agent response back to the browser
  • Default agent: demo_data_analyst (configurable via DEFAULT_AGENT_NAME)
  • Exposed via Traefik at /api/chat and /api/conversations (priority 90)

ibex-agent-engine (ibex-agent-engine)

Image: ghcr.io/ajnacloud-ksj/ibex-agent-engine:latest
Port: 3000
Purpose: Node.js LLM agent runtime. Executes multi-step reasoning pipelines that combine LLM calls with SQL execution.
How it works:
  • Each agent is a defined pipeline of nodes (e.g. tool-kb-1 → tool-exec-1 → tool-gen-1):
    1. tool-kb-1 — searches the RAG knowledge base for relevant context (table schemas, business docs)
    2. tool-exec-1 — LLM generates SQL; executes it against ibex-analytics-service; returns data
    3. tool-gen-1 — LLM formats the data into a natural language response
  • LLM routing via llm-routing.json: endpoint https://openrouter.ai/api/v1, model openai/gpt-4o-mini
  • Uses OPENROUTER_API_KEY for authentication to OpenRouter
  • Internal only — no Traefik label; called exclusively by ibex-ai-service
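The three-node chain is plain function composition. A sketch (Python for illustration — the real runtime is Node.js; the node callables here are stubs standing in for RAG search, the analytics-service call, and the LLM):

```python
def run_agent(question, kb_search, execute_sql, generate):
    """Chain the three pipeline nodes described above.
    Each callable is a stand-in for the real node implementation."""
    context = kb_search(question)          # tool-kb-1: retrieve schema/doc context
    rows = execute_sql(question, context)  # tool-exec-1: LLM writes SQL, service runs it
    return generate(question, rows)        # tool-gen-1: LLM narrates the result

# Stubbed nodes, to show the data handed between stages
answer = run_agent(
    "How many orders shipped?",
    kb_search=lambda q: ["orders(id, status)"],
    execute_sql=lambda q, ctx: [{"count": 42}],
    generate=lambda q, rows: f"There are {rows[0]['count']} shipped orders.",
)
```

The value of the node structure is that each stage's input/output is inspectable, so a bad SQL generation can be caught between tool-exec-1 and tool-gen-1.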

ibex-watchtower (ibex-watchtower)

Image: containrrr/watchtower:1.7.1
Purpose: Automatic container updater. Polls GHCR every 5 minutes and restarts containers when a new image is published.
How it works:
  • Only monitors containers with the label com.centurylinklabs.watchtower.enable=true
  • Pulls the new image → stops the old container → starts a new one (rolling restart)
  • Uses the Docker credentials file (/root/.docker/config.json) to authenticate to GHCR
  • --cleanup removes old image layers after update to save disk
  • Important: Watchtower only updates the image — it does NOT re-read docker-compose.yml. Changes to labels, env vars, or volumes require a manual docker compose up -d <service> on the server.

Traefik Routing

All requests to api.triviz.cloud are routed by path prefix:
| PathPrefix | Service | Priority |
| --- | --- | --- |
| /auth, /users | ibex-identity-service :8090 | default |
| /api/metadata, /api/reports, /api/analytics, /api/database-connections, /internal/data-sources, /api/dashboards, /api/charts, /api/rag | ibex-bi-backend :8085 | 90 |
| /api/chat, /api/conversations | ibex-ai-service :8010 | 90 |
| /api, /configs, /business-configs, /health, /metrics | ibex-data-platform :8080 | 10 (catch-all) |
| listmonk.triviz.cloud (all paths) | ibex-listmonk :9000 | — |
Note: ibex-analytics-service and ibex-agent-engine are internal only — no Traefik labels, only reachable inside the Docker ajna network.
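The priority scheme matters because `/api/chat` also matches the catch-all `/api` prefix. A minimal sketch of the resolution logic (rules transcribed from the table; the identity-service priority here, 50, is a placeholder since /auth and /users never overlap the /api prefixes, and the prefix match is simplified relative to Traefik's):

```python
def route(path):
    """Resolve an api.triviz.cloud request: among rules whose PathPrefix
    matches, the highest-priority rule wins."""
    rules = [
        (90, ("/api/metadata", "/api/reports", "/api/analytics",
              "/api/database-connections", "/internal/data-sources",
              "/api/dashboards", "/api/charts", "/api/rag"),
         "ibex-bi-backend:8085"),
        (90, ("/api/chat", "/api/conversations"), "ibex-ai-service:8010"),
        (50, ("/auth", "/users"), "ibex-identity-service:8090"),
        (10, ("/api", "/configs", "/business-configs", "/health", "/metrics"),
         "ibex-data-platform:8080"),
    ]
    for _, prefixes, service in sorted(rules, key=lambda r: -r[0]):
        if any(path == p or path.startswith(p + "/") for p in prefixes):
            return service
    return None
```

Without the priority 90 on the chat routes, `/api/chat` would fall through to the data-platform catch-all.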

Service Responsibilities

ibex-identity-service

  • JWT-based authentication (local SQLite or AWS Cognito)
  • User/role management
  • Token validation endpoint consumed by ibex-bi-backend middleware
  • Store: /data/ajna-identity.db (SQLite, persisted in Docker volume)

ibex-data-platform (Config Manager)

  • Registers data sources: MySQL, PostgreSQL, Iceberg, S3, APIs
  • Manages CDC pipelines and Redpanda Kafka topics
  • Syncs connection credentials from HashiCorp Vault
  • Propagates source registry to ibex-analytics-service
  • UI: ibex.triviz.cloud (Config UI)

ibex-analytics-service

  • Embeds DuckDB for in-process federated SQL execution
  • Attaches MySQL/Postgres/Iceberg sources as DuckDB catalogs
  • Supports fully-qualified cross-source queries: mysql_business.orders JOIN postgres_metadata.public.users
  • Iceberg tables via REST catalog → MinIO S3 Parquet files
  • Adhoc file uploads (CSV/Parquet) queryable as temporary DuckDB tables
  • Not directly exposed to the internet — called by ibex-bi-backend, ibex-ai-service, ibex-agent-engine

ibex-bi-backend

  • Go service for BI metadata: reports, dashboards, charts, DB connections, RAG knowledge base
  • Proxies SQL execution to ibex-analytics-service
  • Auth middleware validates JWT via ibex-identity-service
  • Store: /data/ajna-bi-metadata.db (SQLite, persisted in Docker volume)
  • UI: bi.triviz.cloud (BI UI — dashboards, charts, reports, connections, RAG pages)

ibex-ai-service

  • Conversation/chat session management (CRUD via /api/conversations)
  • Orchestrates AI queries: receives user message → calls ibex-agent-engine → returns response
  • Passes analytics context to agents

ibex-agent-engine

  • Node.js LLM agent runtime
  • Connects to LLM via OpenRouter (OPENROUTER_API_KEY, endpoint https://openrouter.ai/api/v1)
  • Executes SQL via ibex-analytics-service to answer data questions
  • Exposes pre-defined agents (e.g. demo_data_analyst)

ibex-listmonk

  • Email marketing and transactional email
  • Uses postgres-metadata DB (separate listmonk database)
  • SMTP via Gmail (configurable)

Data Flow

Federated SQL Query (Reports / SQL Lab)

Browser (BI UI)
  └─► POST /api/analytics/query  →  ibex-bi-backend :8085
        └─► POST /analytics/query  →  ibex-analytics-service :8000
              └─► DuckDB executes federated SQL
                    ├─► mysql_business.*        (attached MySQL)
                    ├─► postgres_metadata.*     (attached PostgreSQL)
                    └─► iceberg_catalog.*       (Iceberg REST → MinIO S3)

AI Chat Query

Browser (BI UI)
  └─► POST /api/chat  →  ibex-ai-service :8010
        └─► POST /api/agents/{name}/query  →  ibex-agent-engine :3000
              ├─► LLM (OpenRouter/Claude/GPT) — generates SQL
              └─► POST /analytics/query  →  ibex-analytics-service :8000
                    └─► DuckDB executes SQL → returns results
              └─► LLM — formats natural language response
        └─► Response streamed back to browser

Authentication Flow

Browser
  └─► POST /auth/login  →  ibex-identity-service  →  SQLite / Cognito
        └─► Returns JWT access token
  └─► All subsequent API requests: Authorization: Bearer <token>
        └─► ibex-bi-backend WithAuth middleware
              └─► GET /validate  →  ibex-identity-service
                    └─► 200 OK → proceed | 401 → reject
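The gateway step in that flow (ibex-bi-backend's WithAuth middleware) can be sketched as a wrapper that rejects requests before they reach a handler. The real middleware is Go; this Python version stubs the GET /validate call with a callable:

```python
def with_auth(handler, validate):
    """Auth-gateway sketch: `validate` stands in for GET /validate
    on ibex-identity-service; returns (status, body) tuples."""
    def wrapped(headers):
        auth = headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return 401, "missing bearer token"
        if not validate(auth.removeprefix("Bearer ")):
            return 401, "invalid token"
        return handler(headers)  # token accepted: proceed to the real handler
    return wrapped

protected = with_auth(lambda h: (200, "ok"),
                      validate=lambda tok: tok == "good-token")
```

The useful property is that handlers never see unauthenticated requests, matching the "200 OK → proceed | 401 → reject" branch in the diagram.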

DB Connections — Unified List (UI)

Browser calls getAllConnectionsUnified():
  ├─► GET /api/database-connections/   →  ibex-bi-backend      (user-created connections)
  ├─► GET /api/data-sources            →  ibex-data-platform   (platform: MySQL, Postgres, Iceberg, S3)
  └─► GET /analytics/catalogs          →  ibex-analytics-service (DuckDB: data_lakes, storage, databases)
                                            └─► Deduplicated and merged → displayed in Connections page
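The "deduplicated and merged" step can be sketched as first-occurrence-wins merging over the three lists; the `{"name": ..., "origin": ...}` record shape is an assumption about the real payloads:

```python
def unified_connections(*sources):
    """Merge connection lists from bi-backend, data-platform, and
    analytics-service, keeping the first occurrence of each name."""
    seen, merged = set(), []
    for source in sources:
        for conn in source:
            if conn["name"] not in seen:
                seen.add(conn["name"])
                merged.append(conn)
    return merged

# Earlier sources win on duplicates, so user-created entries take precedence
merged = unified_connections(
    [{"name": "mysql_business", "origin": "bi-backend"}],
    [{"name": "mysql_business", "origin": "data-platform"},
     {"name": "iceberg_catalog", "origin": "data-platform"}],
    [{"name": "postgres_metadata", "origin": "analytics"}],
)
```

Ordering the sources this way means a user-created connection shadows the platform-registered one of the same name in the Connections page.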

Storage Architecture

┌─────────────────────────────────────────────────────────┐
│                   postgres-metadata                     │
│   DB: metadata_db (23 tables)                          │
│   ├─ ajna_poller_checkpoint    pipeline state           │
│   ├─ business_pipelines        CDC pipeline config      │
│   ├─ business_data_sources     registered sources       │
│   ├─ file_uploads              adhoc file registry      │
│   ├─ configurations / business_rules                    │
│   ├─ orders / products / users (demo data)              │
│   └─ vault_kv_store            Vault backend            │
│                                                         │
│   DB: listmonk                  email marketing         │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                   MinIO (S3-compatible)                  │
│   Bucket: warehouse/                                    │
│   ├─ Iceberg table data (Parquet files)                 │
│   └─ Managed via iceberg-rest REST catalog              │
│                                                         │
│   Bucket: data-ingestion/                               │
│   └─ CDC / pipeline ingestion staging                   │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                   HashiCorp Vault                        │
│   Stores: DB passwords, MinIO keys, API tokens          │
│   Backend: postgres-metadata (vault_kv_store table)     │
│   Consumed by: ibex-data-platform (via vault_keys vol)  │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                   SQLite volumes                         │
│   /data/ajna-identity.db   →  users, roles, tokens      │
│   /data/ajna-bi-metadata.db →  reports, DB connections  │
└─────────────────────────────────────────────────────────┘

Frontend Architecture

Two UIs, One Component Library

Both UIs are built from the shared ajna_data_platform_ui_lib component library (React + TypeScript + Vite + Tailwind + shadcn/ui).
| UI | Repo | Hosted | Purpose |
| --- | --- | --- | --- |
| BI UI | ajna_data_platform_ui_lib | bi.triviz.cloud (CloudFront → S3 /ajna-data-platform-ui-lib) | Dashboards, charts, reports, DB connections, AI chat, RAG |
| Config UI | ibex-platform-ui | ibex.triviz.cloud (CloudFront → S3 /ibex-platform-ui) | Pipeline management, data sources, monitoring |

Frontend → Backend Mapping

| UI Page | UI | API Calls |
| --- | --- | --- |
| Login | Both | POST /auth/login → ibex-identity-service |
| DB Connections | BI UI | GET /api/database-connections/ (bi-backend) + GET /api/data-sources (data-platform) + GET /analytics/catalogs (analytics-service) |
| Reports (create/edit) | BI UI | POST/PUT /api/metadata/reports → bi-backend → analytics-service (SQL exec) |
| SQL Lab | BI UI | POST /api/analytics/query → bi-backend → analytics-service |
| AI Chat | BI UI | POST /api/chat, GET/POST /api/conversations/ → ai-service → agent-engine → analytics-service |
| Dashboards | BI UI | GET/POST /api/dashboards → bi-backend |
| Charts | BI UI | GET/POST /api/charts/, POST /api/charts/data → bi-backend |
| RAG Knowledge Base | BI UI | GET /api/rag/documents, POST /api/rag/upload, POST /api/rag/query → bi-backend |
| Pipelines | Config UI | GET/POST /api/pipelines → ibex-data-platform |
| Data Sources | Config UI | GET/POST /api/data-sources → ibex-data-platform |

Build & Deploy

git push origin main
  └─► GitHub Actions: build Vite bundle (VITE_* vars baked in at build time)
        └─► Upload to S3
              └─► CloudFront invalidation
                    └─► New bundle live (no server restart needed)
VITE environment variables (baked in at build time by GitHub Actions):

BI UI (ajna_data_platform_ui_lib → bi.triviz.cloud):
  • VITE_API_BASE_URL = https://api.triviz.cloud/api — base for bi-backend calls
  • VITE_AI_SERVICE_URL = https://api.triviz.cloud — base for ai-service calls
  • VITE_ANALYTICS_API_URL = https://api.triviz.cloud/api — analytics proxy via bi-backend
  • VITE_BI_SERVICE_URL = https://api.triviz.cloud — bi-backend direct calls
  • VITE_PLATFORM_API_URL = https://api.triviz.cloud — ibex-data-platform URL
  • VITE_IDENTITY_SERVICE_URL = https://api.triviz.cloud — identity service

Config UI (ibex-platform-ui → ibex.triviz.cloud):
  • VITE_CONFIG_MANAGER_URL = https://api.triviz.cloud/api — ibex-data-platform
  • VITE_IDENTITY_SERVICE_URL = https://api.triviz.cloud — identity service
  • VITE_ANALYTICS_API_URL = https://api.triviz.cloud/api/analytics
  • VITE_INGESTION_API_URL = https://api.triviz.cloud/api

Auto-Deployment (Watchtower)

Developer pushes code
  └─► GitHub Actions CI: build Docker image → push to ghcr.io
        └─► Watchtower polls every 5 min (--label-enable)
              └─► Detects new image digest for labeled containers
                    └─► Pulls new image → rolling restart of container

⚠️  Watchtower updates container images only.
    It does NOT re-read docker-compose.yml for label/env changes.
    For label or env changes: SSH to server → git pull → docker compose up -d <service>

Inter-Service Communication

All services communicate over the internal Docker bridge network ajna. No inter-service traffic leaves the host.
ibex-data-platform   ──► ibex-analytics-service  (sync data sources)
ibex-bi-backend      ──► ibex-analytics-service  (execute SQL)
ibex-bi-backend      ──► ibex-identity-service   (validate JWT)
ibex-ai-service      ──► ibex-agent-engine        (run agent)
ibex-ai-service      ──► ibex-analytics-service  (direct SQL)
ibex-ai-service      ──► ibex-identity-service   (validate JWT)
ibex-agent-engine    ──► ibex-analytics-service  (execute agent SQL)
ibex-data-platform   ──► vault-secrets            (fetch credentials)
ibex-data-platform   ──► postgres-metadata        (metadata DB)
ibex-data-platform   ──► redpanda                 (pipeline events)

Security

  • TLS: Let’s Encrypt via Traefik ACME, all external traffic HTTPS only
  • Auth: JWT issued by ibex-identity-service, validated on every request by ibex-bi-backend and ibex-ai-service
  • Credentials: All DB passwords and API keys stored in HashiCorp Vault, injected via environment at runtime
  • CORS: Explicit allowlist — https://ibex.triviz.cloud and https://bi.triviz.cloud on all backend services
  • Network isolation: Analytics and Agent Engine are internal-only (no Traefik exposure)
  • Mixed Content: All frontend API calls use HTTPS; FastAPI services configured to avoid HTTP redirects

Key Configuration Files

| File | Location | Purpose |
| --- | --- | --- |
| docker-compose.aws.yml | /opt/ibex-platform-runner/ on EC2 | Production service definitions |
| docker-compose.deploy-base.yml | same | Infrastructure services (Postgres, MinIO, Vault, etc.) |
| docker-compose.demo-data.yml | same | Demo MySQL + seed data |
| .env | /opt/ibex-platform-runner/ | Secrets and overrides (not in git) |
| deploy/traefik/traefik.yml.tpl | same | Traefik config template |
| deploy/init-db/init.sql | same | PostgreSQL schema + demo data |
| deploy/scripts/vault-init.sh | same | Vault unseal + credential seeding |