TamperTrail — CTO & Security Architecture Guide

Document Type: Executive Whitepaper
Classification: Public
Audience: Chief Technology Officers, Chief Security Officers, Principal Engineers, and Security Auditors
Version: 1.0 — March 2026
Purpose: A transparent, technically rigorous description of TamperTrail's internal architecture, security model, cryptographic guarantees, performance engineering, and the shared responsibility framework governing the boundary between what TamperTrail protects and what the customer must protect.
Keywords: audit log data governance, GDPR audit trail, CCPA compliance logging, privacy by design, encrypted audit log, immutable audit trail, data lifecycle management, HIPAA audit logging


Table of Contents

  1. Executive Summary & Design Philosophy
  2. High-Level System Architecture
  3. Performance & Scalability Mechanics
  4. Security & Cryptography Deep Dive
  5. Data Lifecycle & Retention Governance
  6. Shared Responsibility Model
  7. Compliance Readiness Posture
  8. Infrastructure Topology & Deployment Model

1. Executive Summary & Design Philosophy

TamperTrail is a self-hosted, immutable audit vault designed to give engineering and compliance teams cryptographically verifiable evidence that a sequence of events occurred, was not modified, and was not silently deleted — without surrendering that data to a third-party cloud provider.

The Problem It Solves

Modern compliance frameworks (SOC 2, ISO 27001, HIPAA, GDPR) require organizations to maintain tamper-evident audit trails. The standard industry response is to ship logs to a SaaS platform. This approach has two fundamental problems:

  1. Data Sovereignty: Sensitive operational and user-behavioral data leaves your infrastructure boundary permanently.
  2. Verifiability: You cannot independently verify that a SaaS audit log has not been modified. You are trusting the vendor's word.

TamperTrail eliminates both problems. Logs never leave your infrastructure, and mathematical proof of integrity is built into every write.

Core Architectural Principles

| Principle | Implementation |
| --- | --- |
| Zero external dependencies at runtime | All components run as Docker containers. No cloud APIs, no third-party SDKs, no outbound network calls. |
| Immutability by cryptographic construction | Every log entry is SHA-256 hashed and chained to its predecessor. Tampering breaks the chain in a mathematically verifiable way. |
| Encryption before persistence | Sensitive metadata is Fernet-encrypted in application memory before it is passed to the database driver. The database never receives plaintext for this field. |
| Zero-config security | Cryptographic secrets (encryption key, JWT secret, database password) are generated automatically on first boot. No engineer is required to choose or manage them. |
| Defense-in-depth | Security is enforced at four independent layers: Nginx (network), FastAPI middleware (application), SQLAlchemy (query), and PostgreSQL RLS (database). A bypass at any single layer does not compromise the system. |

Technology Selection Rationale

The stack — React + FastAPI (Python) + PostgreSQL + Nginx — was chosen deliberately:

  • FastAPI over Node.js/Express: Native async/await, Pydantic for strict input validation at the schema layer, and Python's mature cryptography ecosystem (cryptography library with OpenSSL-backed AES).
  • PostgreSQL over NoSQL: ACID guarantees, native JSONB with GIN indexing (combining document-store flexibility with relational integrity), and Row-Level Security for multi-tenant enforcement at the database engine level.
  • Nginx over application-layer rate limiting: Network-layer enforcement cannot be bypassed by application bugs. Nginx's limit_req_zone operates before any Python code executes.
  • Docker Compose over Kubernetes (for self-hosted tier): Deployment simplicity is itself a security property — complex orchestration introduces additional attack surface for self-hosted deployments.
  • SQLAlchemy + Alembic: Parameterized query construction (eliminating SQL injection by construction) and a full, auditable migration history for the database schema.

2. High-Level System Architecture

The system is composed of four runtime layers, each with a clearly defined responsibility boundary.

        ┌─────────────────────────────────────────────────────┐
        │                   CLIENT TRAFFIC                    │
        │          (Developers, Dashboards, Monitors)         │
        └──────────────────────┬──────────────────────────────┘
                               │ :80
        ┌──────────────────────▼──────────────────────────────┐
        │               LAYER 1 — NGINX GATEWAY               │
        │  Rate limiting · Request size caps · Security hdrs  │
        │  Reverse proxy to FastAPI · Serves React SPA files  │
        │          [ tampertrail-client container ]           │
        └──────────┬────────────────────────────┬─────────────┘
                   │ /v1/* (API calls)           │ /* (SPA assets)
        ┌──────────▼──────────────────┐   ┌──────▼────────────┐
        │   LAYER 2 — FASTAPI ENGINE  │   │  React SPA files  │
        │ Validation · Auth · Hashing │   │  (static, nginx)  │
        │ Encryption · WAL · Batching │   └───────────────────┘
        │ [ tampertrail-server :8000 ]│
        └──────────┬──────────────────┘
                   │ asyncpg (async SQL, connection pool)
        ┌──────────▼──────────────────────────────────────────┐
        │           LAYER 3 — POSTGRESQL DATABASE             │
        │  audit_logs · JSONB GIN index · encrypted metadata  │
        │   Row-Level Security · Alembic-managed migrations   │
        │            [ tampertrail-db container ]             │
        └─────────────────────────────────────────────────────┘

2.1 Layer 1 — The API Gateway (Nginx)

Container: tampertrail-client | Exposed: Port 80 (only publicly accessible port)

The FastAPI backend is bound to tampertrail-server:8000 inside the Docker network and is never directly reachable from outside. All external traffic — API calls and dashboard requests — enters exclusively through Nginx.

Responsibilities:

  • Reverse Proxy: Forwards /v1/* to tampertrail-server:8000. Injects X-Forwarded-For and X-Real-IP for accurate IP attribution in logs.
  • Rate Limiting (per IP, limit_req_zone with nodelay):
    • POST /v1/auth/login: 5 req/min — renders brute-force attacks infeasible
    • POST /v1/log: 100 req/min — prevents runaway services from flooding the WAL
    • All other /v1/*: 200 req/min
  • Request Size Limiting (client_max_body_size): Oversized payloads are rejected at the network layer before any Python memory is allocated, preventing memory exhaustion attacks.
  • Security Headers (enforced on every response): X-Frame-Options: DENY, X-Content-Type-Options: nosniff, Content-Security-Policy, Strict-Transport-Security, Referrer-Policy: no-referrer.
  • OpenAPI Suppression: /docs, /redoc, and /openapi.json are explicitly blocked, preventing public API schema discovery.
  • Static Serving: The pre-built React SPA is served directly as static files — no Node.js runtime in production.
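
The gateway rules above can be sketched as an Nginx configuration fragment. This is an illustrative sketch, not the shipped nginx.conf — zone names, zone sizes, burst values, the body-size cap, and file paths are assumptions; only the rates, headers, and blocked paths come from the text:

```nginx
# Per-IP rate-limit zones (names and shared-memory sizes are illustrative)
limit_req_zone $binary_remote_addr zone=login:10m  rate=5r/m;
limit_req_zone $binary_remote_addr zone=ingest:10m rate=100r/m;
limit_req_zone $binary_remote_addr zone=api:10m    rate=200r/m;

server {
    listen 80;
    client_max_body_size 64k;   # reject oversized payloads before Python allocates memory

    # IP attribution headers forwarded to FastAPI (inherited by locations below)
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy no-referrer always;

    # suppress public API schema discovery
    location ~ ^/(docs|redoc|openapi\.json)$ { return 404; }

    location = /v1/auth/login { limit_req zone=login  burst=2  nodelay; proxy_pass http://tampertrail-server:8000; }
    location = /v1/log        { limit_req zone=ingest burst=20 nodelay; proxy_pass http://tampertrail-server:8000; }
    location /v1/             { limit_req zone=api    burst=40 nodelay; proxy_pass http://tampertrail-server:8000; }

    # pre-built React SPA served as static files
    location / { root /usr/share/nginx/html; try_files $uri /index.html; }
}
```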

2.2 Layer 2 — The Backend Engine (FastAPI / Python)

Container: tampertrail-server | Internal port: 8000

The application brain. Handles authentication, validation, encryption, hash computation, and asynchronous log ingestion.

Ingestion Pipeline (sub-10ms response path)

  1. API key validated via Argon2id hash comparison.
  2. LogIngestRequest Pydantic schema enforces field constraints — invalid payloads rejected with 422 before processing.
  3. LogItem immediately appended to the Write-Ahead Log (/app/data/queue.wal on the server_data volume) — durable before 202 Accepted is returned.
  4. Item placed on asyncio.Queue. HTTP handler returns — it does not wait for the database.
  5. Background IngestionService worker drains the queue in micro-batches: encrypts metadata, computes hash chain, executes a single bulk INSERT for the entire batch.
  6. WAL position file (queue.wal.pos) updated only after DB transaction commits. On crash/restart, uncommitted entries replay automatically.
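
The WAL-then-queue durability pattern in steps 3–6 can be sketched as follows — a minimal illustration of the mechanism, not the shipped IngestionService (file layout and serialization are assumptions):

```python
import json
import os
import tempfile

class WalQueue:
    """Minimal sketch of the write-ahead-log pattern described above."""

    def __init__(self, path: str):
        self.path = path
        self.pos_path = path + ".pos"

    def append(self, item: dict) -> None:
        # Durable on disk before the HTTP handler returns 202 Accepted.
        with open(self.path, "ab") as f:
            f.write((json.dumps(item) + "\n").encode("utf-8"))
            f.flush()
            os.fsync(f.fileno())

    def mark_committed(self, offset: int) -> None:
        # Position file advances only after the DB transaction commits.
        with open(self.pos_path, "w") as f:
            f.write(str(offset))

    def replay(self):
        # On restart, yield every entry written after the last committed offset.
        committed = 0
        if os.path.exists(self.pos_path):
            committed = int(open(self.pos_path).read() or 0)
        with open(self.path, "rb") as f:
            f.seek(committed)
            for line in f:
                yield json.loads(line)

with tempfile.TemporaryDirectory() as d:
    wal = WalQueue(os.path.join(d, "queue.wal"))
    wal.append({"actor": "svc-a", "action": "login"})
    wal.mark_committed(os.path.getsize(wal.path))       # first entry reached the DB
    wal.append({"actor": "svc-b", "action": "export"})  # crash before this commits
    pending = list(wal.replay())                        # only the uncommitted entry replays
    assert pending == [{"actor": "svc-b", "action": "export"}]
```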

Schema & Migrations

Alembic manages all database schema migrations. start.sh runs alembic upgrade head on every container start, guaranteeing schema–code consistency with a full auditable migration history. SQLAlchemy (async mode via asyncpg) handles all queries using parameterized expressions — raw SQL string concatenation is not used anywhere in the codebase.


2.3 Layer 3 — The Storage Layer (PostgreSQL)

Container: tampertrail-db | Persistence: postgres_data Docker volume

Dual-storage strategy in the audit_logs table:

| Column | Type | Encrypted | GIN Indexed | Purpose |
| --- | --- | --- | --- | --- |
| tags | JSONB | ❌ Plaintext | ✅ Yes | Searchable context — fast dashboard filtering |
| metadata | BYTEA | ✅ Fernet AES-128 | ❌ N/A | Sensitive forensic payload — opaque binary blob |

This split is deliberate: encrypting tags would make them unsearchable (ciphertext is opaque to the query planner). TamperTrail achieves both search performance on structured context and cryptographic privacy for sensitive payload data simultaneously.

Row-Level Security (RLS): PostgreSQL RLS policies enforce tenant isolation at the database engine layer — a defense-in-depth backstop independent of application-layer filtering.
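
An RLS backstop of this kind can be illustrated with a minimal policy. This is a sketch under assumed names (an audit_logs.tenant_id column and a per-transaction app.tenant_id setting), not the shipped migration:

```sql
-- Illustrative tenant-isolation policy (names assumed, not the shipped schema)
ALTER TABLE audit_logs ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON audit_logs
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- The application would set the tenant per transaction before querying:
-- SET LOCAL app.tenant_id = '<tenant uuid>';
```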


2.4 Layer 4 — The Presentation Layer (React SPA)

Served by: Nginx (static files — no runtime server) | Role: Strictly read-only viewer

The React frontend never receives the metadata field. The LogEntryOut API response schema structurally excludes it — the encrypted_metadata BYTEA column is dropped before serialization. A complete compromise of the React application (XSS, supply chain attack) cannot expose the encrypted vault because that data is architecturally never transmitted to the browser.


3. Performance & Scalability Mechanics

3.1 Ingestion Speed — Async Micro-Batching

The most common bottleneck in audit logging is synchronous per-entry database writes. At 100 logs/second, this means 100 PostgreSQL round-trips per second with individual transaction overhead.

TamperTrail's solution:

  • The HTTP handler enqueues items on asyncio.Queue in sub-microsecond time.
  • A background coroutine drains the queue in batches (up to 500 items per batch, 50ms flush interval).
  • Each batch executes a single parameterized INSERT ... VALUES (...) with multiple rows — one DB round-trip for hundreds of entries.
  • asyncpg maintains a persistent connection pool — no connection establishment on the hot path.

Result: POST /v1/log response latency is bounded by WAL disk write time (1–3ms on SSD), not database latency. The system sustains thousands of ingestion requests per second on standard hardware before the database becomes a bottleneck.
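
The micro-batching drain loop can be sketched with asyncio primitives — an illustrative reimplementation of the pattern, not TamperTrail's worker:

```python
import asyncio

MAX_BATCH = 500        # values from the text above; tunables in practice
FLUSH_INTERVAL = 0.05  # 50 ms

async def drain(queue: asyncio.Queue, insert_batch) -> None:
    """Collect items until the batch is full or the flush interval elapses."""
    while True:
        batch = [await queue.get()]  # block until at least one item arrives
        deadline = asyncio.get_running_loop().time() + FLUSH_INTERVAL
        while len(batch) < MAX_BATCH:
            remaining = deadline - asyncio.get_running_loop().time()
            if remaining <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), remaining))
            except asyncio.TimeoutError:
                break
        await insert_batch(batch)  # one bulk INSERT round-trip per batch

async def main():
    queue, batches = asyncio.Queue(), []

    async def fake_bulk_insert(batch):  # stands in for the single multi-row INSERT
        batches.append(batch)

    worker = asyncio.create_task(drain(queue, fake_bulk_insert))
    for i in range(10):
        queue.put_nowait({"n": i})
    await asyncio.sleep(FLUSH_INTERVAL * 4)  # let one flush interval elapse
    worker.cancel()
    try:
        await worker
    except asyncio.CancelledError:
        pass
    return batches

batches = asyncio.run(main())
assert sum(len(b) for b in batches) == 10  # all items persisted
assert len(batches) == 1                   # ...in a single bulk write
```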

Single-Worker Architecture (by design): TamperTrail runs as a single-worker async process (--workers 1). This is intentional, not a limitation. All concurrent request handling is managed by Python's asyncio event loop — a single worker handles thousands of concurrent connections because the hot path (WAL append + queue push) is non-blocking and completes in microseconds.

Hash chain integrity is additionally enforced at the database level via PostgreSQL advisory locks (pg_advisory_xact_lock), so even if multiple workers were introduced, the chain would remain consistent. A single worker, however, eliminates all inter-process state contention, keeps memory usage predictable, and simplifies the WAL lifecycle.

DevOps teams should not increase the worker count: doing so would raise memory usage without meaningful throughput improvement, because the bottleneck is disk I/O and database commit latency, not Python concurrency.

3.2 WAL Durability Guarantee

| Event | Behavior |
| --- | --- |
| 202 Accepted returned | Entry is durable on disk in the WAL file — guaranteed |
| Server crash mid-batch | On restart, the worker replays all WAL entries not confirmed in .wal.pos — zero data loss |
| Sustained overload (queue growing faster than DB drain) | WAL enforces a maximum file size — new writes return 503 rather than silently failing or causing unbounded disk growth |

WAL Data Format — Transparency Note: The WAL file (queue.wal) stores incoming log entries as plaintext JSON Lines (JSONL) on the server_data Docker volume. This is deliberate: the WAL's purpose is crash recovery, and encrypting its contents would add latency to every write on the critical ingestion path without meaningful security benefit — the file lives on the same server that holds the encryption key in memory. This mirrors how databases generally treat their own journals: entries are written durably first and transformed into their final storage form afterward. The metadata field is encrypted by the background batch worker immediately before database insertion, not at WAL write time.

For environments requiring encryption of all data at rest — including temporary buffers — ensure your Docker host volume is encrypted at the filesystem or block-device level (e.g., LUKS on Linux, BitLocker on Windows, or an encrypted EBS volume on AWS). This transparently encrypts the WAL file, the PostgreSQL data directory, and all other disk-resident data with negligible overhead and no changes at the application layer.

3.3 Search Speed — PostgreSQL GIN Indexing

The tags JSONB column carries a Generalized Inverted Index (GIN). GIN indexes decompose JSONB into individual key-value pairs and index each independently, enabling:

  • JSONB containment (tags @> '{"plan":"pro"}'): Used by the meta_contains API parameter — leverages the GIN index for near-constant query time regardless of table size.
  • Universal text search: The search parameter performs ILIKE '%term%' across actor, action, message, environment, IP, tags-cast-to-text, and hash simultaneously in a single OR predicate.

Sub-10ms dashboard searches across millions of log entries are achievable on standard PostgreSQL hardware.
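
A minimal sketch of the index and the containment query it accelerates (the index name is assumed, not taken from the shipped schema):

```sql
-- GIN index over the plaintext JSONB column (index name illustrative)
CREATE INDEX idx_audit_logs_tags ON audit_logs USING GIN (tags);

-- JSONB containment: satisfied via the GIN index regardless of table size
SELECT id, actor, action
FROM audit_logs
WHERE tags @> '{"plan": "pro"}';
```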

3.4 Resource Protection — Nginx Request Size Limiting

Python's JSON deserialization of an arbitrarily large payload consumes memory proportional to payload size. Nginx's client_max_body_size directive rejects oversized requests before they reach the Python process, returning 413 Request Entity Too Large with zero Python memory allocation.


4. Security & Cryptography Deep Dive

4.1 The Immutable Ledger — SHA-256 Hash Chaining

Every audit_logs row carries two hash fields:

| Field | Content |
| --- | --- |
| prev_hash | The hash of the immediately preceding entry (same tenant, chronological order) |
| hash | SHA-256 digest of this entry's canonical representation |

Hash input (deterministic concatenation): prev_hash + created_at + actor + action + target_type + target_id + hex(metadata_cipher)

Including the hex-encoded ciphertext of metadata in the hash means that modifying the encrypted bytes — even without the decryption key — breaks the chain. The first entry of each tenant uses a fixed GENESIS_HASH (SHA-256 of "GENESIS") as prev_hash.

Tamper detection (GET /v1/verify): For each entry in chronological order, the server recomputes the expected hash from stored fields and asserts entry.prev_hash == previous_entry.hash. Any deletion, modification, or insertion produces a broken link at that position — detectable without any external oracle.

What this proves to an auditor: If verification returns "status": "ok", the log sequence has not been modified, reordered, or deleted since the last entry was written. This guarantee holds even against the operator of the instance: database-level modifications cannot be made undetectable without rehashing all subsequent entries.
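
The chain construction and the verification walk can be sketched in a few lines of Python. Field serialization details here are illustrative — only the concatenation recipe and the GENESIS anchor come from the text above:

```python
import hashlib

GENESIS_HASH = hashlib.sha256(b"GENESIS").hexdigest()

def entry_hash(prev_hash, created_at, actor, action, target_type, target_id, metadata_cipher):
    # Deterministic concatenation mirroring the hash-input recipe above.
    payload = (prev_hash + created_at + actor + action +
               target_type + target_id + metadata_cipher.hex())
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(entries):
    """Recompute the chain; return (True, None) or (False, index_of_break)."""
    prev = GENESIS_HASH
    for i, e in enumerate(entries):
        if e["prev_hash"] != prev:
            return False, i
        expected = entry_hash(e["prev_hash"], e["created_at"], e["actor"], e["action"],
                              e["target_type"], e["target_id"], e["metadata_cipher"])
        if e["hash"] != expected:
            return False, i
        prev = e["hash"]
    return True, None

# Build a two-entry chain, then tamper with the first entry's ciphertext.
entries, prev = [], GENESIS_HASH
for actor in ("alice", "bob"):
    e = {"prev_hash": prev, "created_at": "2026-03-01T00:00:00Z", "actor": actor,
         "action": "login", "target_type": "user", "target_id": "42",
         "metadata_cipher": b"\x01\x02"}
    e["hash"] = entry_hash(e["prev_hash"], e["created_at"], e["actor"], e["action"],
                           e["target_type"], e["target_id"], e["metadata_cipher"])
    entries.append(e)
    prev = e["hash"]

assert verify(entries) == (True, None)
entries[0]["metadata_cipher"] = b"\xff\xff"  # tamper without the decryption key
assert verify(entries) == (False, 0)         # chain breaks at the modified entry
```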

Retention-safe verification — Monthly Checkpoints: When logs are pruned under a retention policy, POST /v1/checkpoints creates a cryptographic snapshot of the chain state at a month-end boundary. The verifier bridges the pruning gap by anchoring to the checkpoint hash, not GENESIS — preserving full chain verifiability across retention events.


4.2 Encryption at Rest — Fernet AES-128

Algorithm: Fernet (Python cryptography library, OpenSSL-backed)

  • Cipher: AES-128-CBC
  • Authentication: HMAC-SHA256 (encrypt-then-MAC — ciphertext tampering is detectable)
  • Key format: 32 bytes (256-bit), URL-safe base64 encoded — per the Fernet specification, the first 16 bytes form the HMAC signing key and the second 16 the AES encryption key

Key lifecycle:

  • Auto-generated on first boot via Fernet.generate_key() (draws from /dev/urandom).
  • Stored in /app/data/config.json on the server_data Docker volume with restricted filesystem permissions.
  • Never transmitted over the network. Never stored in the database.

Key rotation (MultiFernet): The ENCRYPTION_KEY accepts a comma-separated list. The first key encrypts new entries; all keys decrypt existing entries. Zero-downtime rotation: prepend a new key, deploy — new entries use the new key, existing entries remain readable.
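
The rotation pattern — encrypt with the first key, decrypt by trying each key in order — can be demonstrated with a standard-library stand-in. The toy encrypt-then-MAC cipher below is NOT Fernet; production code should use cryptography.fernet.MultiFernet, which implements the same try-each-key fallback:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy SHA-256 counter keystream, purely for demonstration.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce + ct + tag

def decrypt(key: bytes, token: bytes) -> bytes:
    nonce, ct, tag = token[:16], token[16:-32], token[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("bad MAC")  # wrong key or tampered ciphertext
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

def multi_decrypt(keys: list, token: bytes) -> bytes:
    # MultiFernet-style fallback: any key in the list can decrypt.
    for key in keys:
        try:
            return decrypt(key, token)
        except ValueError:
            continue
    raise ValueError("no key could decrypt token")

old_key, new_key = os.urandom(32), os.urandom(32)
old_token = encrypt(old_key, b"legacy entry")
keys = [new_key, old_key]                     # rotation: prepend the new key
new_token = encrypt(keys[0], b"fresh entry")  # new entries use the first key
assert multi_decrypt(keys, old_token) == b"legacy entry"  # old entries stay readable
assert multi_decrypt(keys, new_token) == b"fresh entry"
```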

Envelope encryption (optional): A MASTER_KEY configuration enables KEK/DEK separation — the data encryption key is itself encrypted by the master key before storage, enabling integration with AWS KMS, HashiCorp Vault, or GCP Cloud KMS. The plaintext DEK never persists to disk.

What encryption protects:

  • A full PostgreSQL dump exposes only binary ciphertext for the metadata column.
  • A compromised read replica, misconfigured snapshot, or stolen backup file does not expose metadata contents.

What encryption does not protect:

  • An attacker with access to both the database and config.json (i.e., the server_data volume) can decrypt all metadata. Protecting the encryption key is the customer's responsibility — see Section 6.

4.3 Authentication Architecture — Strict Principal Separation

| Principal | Credential | Scope | Hashing |
| --- | --- | --- | --- |
| Machine (service/script) | API Key (X-API-Key header) | POST /v1/log only | Argon2id |
| Human (admin/viewer) | JWT session cookie (tampertrail_token) | Dashboard & management APIs | HS256, HTTPOnly cookie |

Why strict separation? Combining machine credentials with human session tokens creates a class of attack where a leaked API key grants dashboard access. Structurally different credential types enforced at the routing layer eliminate this class entirely.

API key properties:

  • Generated once, returned once, never stored in plaintext.
  • Stored as Argon2id hash — resistant to GPU/ASIC cracking.
  • Revocation is immediate at the dependency injection layer.

Session JWT properties:

  • HS256 signed with auto-generated JWT_SECRET.
  • HTTPOnly (inaccessible to JavaScript — mitigates XSS token theft).
  • SameSite=Lax (mitigates CSRF).
  • One active session per user — new login invalidates all prior sessions.
  • Session records include IP and user-agent for forensic audit.
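
For illustration, HS256 signing and verification fit in a few lines of standard-library Python. Claim names and expiry handling here are a sketch, not TamperTrail's session implementation:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(secret: bytes, claims: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64(sig)}"

def verify_jwt(secret: bytes, token: str) -> dict:
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):  # constant-time comparison
        raise ValueError("signature mismatch")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        raise ValueError("expired")
    return claims

secret = b"auto-generated JWT_SECRET"  # illustrative; real secret comes from config.json
token = sign_jwt(secret, {"sub": "admin", "exp": time.time() + 3600})
assert verify_jwt(secret, token)["sub"] == "admin"

rejected = False
try:
    verify_jwt(b"wrong secret", token)  # forged/foreign signature is refused
except ValueError:
    rejected = True
assert rejected
```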

Password hashing:

  • Argon2id (memory-hard, OWASP-recommended, resistant to GPU cracking).
  • Automatic rehash-on-login: if cost parameters were upgraded since last login, the hash is transparently upgraded on next successful authentication.

4.4 License Integrity — RS256 JWT

Pro license keys are RS256-signed JWTs (asymmetric). The private key is held exclusively by the TamperTrail issuing authority; the public key is embedded in the server for offline verification. A valid-appearing license key cannot be forged without the private key, even with full access to the source code.


5. Data Lifecycle & Retention Governance

| Stage | Detail |
| --- | --- |
| Ingestion | Validated → metadata encrypted in memory → hash computed → WAL write → 202 Accepted returned → background batch DB insert |
| Storage | Append-only audit_logs table. Entries are never UPDATEd after creation. Alembic manages schema with full migration history. |
| Search | GET /v1/logs — paginated, filtered, tenant-scoped. metadata excluded from all responses structurally. |
| Export | GET /v1/export — cursor-streamed CSV or JSONL. metadata excluded from exports by design. |
| Verification | GET /v1/verify / GET /v1/verify/deep — full chain re-verification on demand. |
| Pruning | Hard delete of entries older than log_retention_days. Monthly checkpoints created first to preserve chain verifiability. |

6. Shared Responsibility Model

This model follows industry-standard shared responsibility frameworks adapted for self-hosted enterprise software.


6.1 What TamperTrail Guarantees

| Guarantee | Mechanism |
| --- | --- |
| Cryptographic chain integrity | SHA-256 hash chaining on every write. GET /v1/verify provides mathematical proof. |
| Metadata encryption before persistence | Fernet AES-128 in application memory. Plaintext never reaches the database driver. |
| Brute-force login protection | Nginx: 5 req/min per IP on the login endpoint. |
| API abuse prevention | Nginx rate limiting and request size caps on all endpoints. |
| Credential storage security | All passwords and API keys stored as Argon2id hashes. No plaintext persistence. |
| Session security | HTTPOnly + SameSite=Lax JWT. Single active session per user. |
| Cross-tenant data isolation | Application-layer tenant_id scoping on every query + PostgreSQL RLS backstop. |
| Zero-config secret generation | All cryptographic secrets auto-generated from /dev/urandom on first boot. |
| Frontend metadata firewall | metadata structurally excluded from all API response schemas. React cannot receive it. |
| SQL injection prevention | All queries via SQLAlchemy parameterized expressions. No raw string interpolation. |

6.2 What the Customer Must Ensure

6.2.1 Safeguarding the Encryption Key — CRITICAL

The ENCRYPTION_KEY in /app/data/config.json (Docker volume: server_data) is the sole key for the metadata vault. TamperTrail holds no escrow copy.

| Scenario | Consequence |
| --- | --- |
| Key lost (volume deleted without backup) | All metadata ciphertext is permanently and irrecoverably unreadable. tags and all other fields remain intact. |
| Key leaked (config.json exposed) | All historical metadata can be decrypted offline with no detection mechanism. |

Required actions:

  • Back up the server_data Docker volume (or extract config.json) to a secure, access-controlled location.
  • Treat config.json with the same access controls as a TLS private key or database root credential.
  • Rotate the encryption key on a schedule using TamperTrail's MultiFernet rotation.
  • For high-security deployments, use the MASTER_KEY option for envelope encryption with an external KMS.

6.2.2 Infrastructure & Network Security

| Responsibility | Guidance |
| --- | --- |
| Host OS patching | TamperTrail containers run as non-root users, but a host kernel exploit bypasses container isolation. Keep the Docker host OS patched. |
| Network firewall | Port 80 should not be publicly exposed unless intentional. Restrict to your application servers or a VPN. |
| TLS / HTTPS termination | TamperTrail does not terminate TLS. Without a TLS reverse proxy (Caddy, Traefik, cloud load balancer) in front, API keys and JWTs transit the network in plaintext. Production deployments must use HTTPS. |
| Docker socket | Do not expose the Docker socket to TamperTrail containers. Container escape via the Docker socket yields root on the host. |
| Volume permissions | server_data and secrets volumes must be accessible only to the Docker daemon and authorized system users. |

6.2.3 Data Sanitization — The tags Field

TamperTrail encrypts what it is given. It cannot protect data it does not know is sensitive.

The tags field is plaintext JSONB — visible in the dashboard, returned by the API, and included in exports. Any PII or sensitive data placed in tags is stored and transmitted in the clear.

| Do NOT put in tags | Correct field: metadata |
| --- | --- |
| Email addresses, phone numbers | ✅ Put in metadata |
| User IP addresses (GDPR scope) | ✅ Put in metadata |
| Auth tokens, session IDs | ✅ Put in metadata |
| Internal credentials | ✅ Put in metadata |
| Stack traces with internal paths | ✅ Put in metadata |
| Any PII covered by your data classification policy | ✅ Put in metadata |

Enforce correct field routing in your integration code. Consider code review checklists or static analysis rules that audit all POST /v1/log call sites.
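
A sketch of what correct routing looks like at a call site, with a simple guard a review checklist or test suite might enforce. The payload shape and key list are illustrative, not the exact API schema:

```python
# Hypothetical POST /v1/log payload illustrating the field-routing rule above.
log_payload = {
    "actor": "billing-service",
    "action": "invoice.sent",
    "tags": {                  # plaintext JSONB — searchable, exported, shown in dashboard
        "plan": "pro",
        "environment": "production",
    },
    "metadata": {              # Fernet-encrypted before persistence — PII belongs here
        "customer_email": "jane@example.com",
        "client_ip": "203.0.113.7",
    },
}

# Example guard: deny-list of key names that must never appear in plaintext tags.
SENSITIVE_KEYS = {"email", "customer_email", "ip", "client_ip", "token", "session_id"}
leaked = SENSITIVE_KEYS & set(log_payload["tags"])
assert not leaked, f"PII keys found in plaintext tags: {leaked}"
```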


6.2.4 Access Control & User Provisioning

  • Provision admin accounts only to personnel who require administrative access.
  • Revoke user access promptly when employees leave or change roles.
  • Distribute API keys only to systems that legitimately require write access to the audit log.
  • A compromised API key allows log injection (writing false entries) but not reading, modifying, or deleting existing entries.

7. Compliance Readiness Posture

TamperTrail's design provides technical controls directly relevant to major compliance frameworks. This section describes the mapping — it does not constitute a compliance certification.

| Control | Framework Relevance | TamperTrail Implementation |
| --- | --- | --- |
| Tamper-evident audit log | SOC 2 CC7.2, ISO 27001 A.12.4 | SHA-256 hash chain. Mathematical tamper detection via GET /v1/verify. |
| Encryption of sensitive data at rest | HIPAA §164.312(a)(2)(iv), GDPR Art. 32 | Fernet AES-128 on the metadata column. Key never persists to the database. |
| Access control to audit records | SOC 2 CC6.1, ISO 27001 A.9 | JWT role-based access (admin/viewer). Session tracking with IP and user-agent. |
| Audit log integrity verification | SOC 2 CC7.3, PCI DSS 10.5 | GET /v1/verify and GET /v1/verify/deep provide on-demand chain integrity reports. |
| Log retention policy enforcement | HIPAA §164.316(b)(2), SOC 2 | Configurable retention periods. Retention is license-gated. |
| Authentication hardening | NIST SP 800-63B, SOC 2 CC6.1 | Argon2id password hashing. Brute-force protection via Nginx rate limiting. |
| Multi-tenancy / data segregation | SOC 2 CC6.3 | Application-layer tenant_id enforcement + PostgreSQL RLS. |
| No data egress to third parties | GDPR Art. 44–49 (data transfers) | Self-hosted. Zero outbound connections at runtime. All data remains in customer infrastructure. |

Note to auditors: TamperTrail is a tool that enables compliance. Achieving certification (SOC 2, ISO 27001, HIPAA) also requires customer-side policies, procedures, and controls that are outside the scope of any software product.


8. Infrastructure Topology & Deployment Model

Container Map

| Container | Image | Role | Internal Port |
| --- | --- | --- | --- |
| tampertrail-init | init (Alpine) | One-shot secret generation on first boot. Exits after creating /secrets/db_password. | — |
| tampertrail-client | nginx (custom) | API gateway, rate limiter, React SPA server. The only publicly exposed container. | 80 |
| tampertrail-server | python:3.12-slim (custom) | FastAPI application, background ingestion worker, WAL manager. | 8000 (internal only) |
| tampertrail-db | postgres:16 | PostgreSQL database. | 5432 (internal only) |

Docker Volumes

| Volume | Mounted In | Contents | Backup Priority |
| --- | --- | --- | --- |
| postgres_data | tampertrail-db | All PostgreSQL data files — the primary data store | Critical — back up regularly |
| server_data | tampertrail-server | config.json (contains ENCRYPTION_KEY, JWT_SECRET), WAL files | Critical — losing this loses all metadata |
| secrets | tampertrail-init, tampertrail-db, tampertrail-server | db_password file — shared secret between DB and server | Critical |

Startup Sequence

1. tampertrail-init     → generates /secrets/db_password (runs once, exits)
2. tampertrail-db       → starts PostgreSQL, waits for init to complete
3. tampertrail-server   → runs `alembic upgrade head`, starts Uvicorn
4. tampertrail-client   → starts Nginx, begins accepting traffic

The tampertrail-server container uses Docker Compose depends_on: condition: service_healthy to wait for PostgreSQL to be fully ready before running migrations.
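
The dependency ordering can be expressed in a Compose fragment like the following. Service and volume names come from the tables above; the healthcheck command and timings are assumptions, not the shipped docker-compose.yml:

```yaml
# Illustrative fragment — healthcheck parameters assumed
services:
  tampertrail-db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
    volumes:
      - postgres_data:/var/lib/postgresql/data

  tampertrail-server:
    depends_on:
      tampertrail-db:
        condition: service_healthy   # migrations run only once PostgreSQL is ready
    volumes:
      - server_data:/app/data
```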

Network Topology

All containers share a single Docker bridge network (tampertrail_net). tampertrail-db and tampertrail-server are not reachable from outside this network. Only tampertrail-client (Nginx) binds to a host port. This topology means that a full network compromise of the Docker host's external interface does not automatically yield database access — the attacker must first compromise the Nginx container or the Docker network itself.

Internet
    │
    ▼
[TLS Termination — Caddy / Traefik / Cloud LB]  ← Customer manages
    │ HTTPS
    ▼
[Nginx — tampertrail-client :80]  ← TamperTrail manages
    │ Internal Docker network
    ▼
[FastAPI — tampertrail-server :8000]  ← TamperTrail manages
    │
    ▼
[PostgreSQL — tampertrail-db :5432]  ← TamperTrail manages

TLS termination sits in front of Nginx and is entirely customer-managed. TamperTrail's responsibility begins at the Nginx listener.
