Standard Postgres 18 on a CoW substrate. Forks with the volume — tables, indexes, WAL, extensions, roles, statistics. The same psql localhost:5432 in dev and prod.
No Postgres fork. No custom storage tier. The substrate (GlideFS) does CoW; Postgres runs unmodified on top.
Inside a Beyond box, Postgres is already running. Connect through PgBouncer on localhost:5432:
psql "postgres://postgres@localhost:5432/postgres"Forks boot in seconds against a CoW snapshot of /var/lib/postgresql. Postgres replays WAL and comes up crash-consistent.
- Forks with the substrate — every byte under `/var/lib/postgresql` snapshots atomically; the new box boots on a CoW copy. No `pg_dump`, no replication topology, no wait
- PgBouncer on the front — transaction pooling on `:5432`; Postgres itself listens on the loopback only
- Logical decoding from day one — `wal_level = logical`, `max_wal_senders = 10`, `max_replication_slots = 10`. No primary restart to enable CDC later (see the sketch after this list)
- Standard extensions, pinned — pgvector, pgvectorscale, PostGIS, pg_cron, pg_partman, pg_jsonschema, hypopg, pg_repack, pg_search, pg_stat_statements, pg_trgm, auto_explain
- Beyond extensions on the same volume — `beyond-auth` and `beyond-queue` ship in the image and live under their own schemas in your existing database. Forking your DB forks their state automatically
- Vertical scale in place — `byd pg scale --size 1t`; the volume is portable, resizable. The auto-tuner rewrites `postgresql.conf`
- Ephemeral previews are free — preview and branch volumes set `synchronous_commit = off`, never flush to S3. Zero storage cost, instant teardown
- WAL sink for quorum durability — Tier 1.5 streams WAL to a second failure domain. Zero data loss on host failure without a full standby
- HA via streaming replication — Tier 2 keeps a warm standby on a different host; promote on primary loss. No Postgres fork, no custom storage protocol
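Because `wal_level = logical` is set from the first boot, a logical replication slot can be created at any time without reconfiguring or restarting the primary. A minimal sketch using built-in Postgres functions; the slot name `demo_slot` is illustrative only:

```bash
# Confirm the decoding-related defaults baked into the image.
psql "postgres://postgres@localhost:5432/postgres" \
  -c "SHOW wal_level;" -c "SHOW max_replication_slots;"

# Create a logical slot with the built-in pgoutput plugin, then list slots.
psql "postgres://postgres@localhost:5432/postgres" <<'SQL'
SELECT pg_create_logical_replication_slot('demo_slot', 'pgoutput');
SELECT slot_name, plugin, active FROM pg_replication_slots;
SQL
```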
The same image plays every role. The tier is set by MMDS at boot.
| Tier | Durability | Availability | Use |
|---|---|---|---|
| Single (durable) | GlideFS write-behind; up to ~5 s of recent writes at risk on host loss | Volume rehomes, minutes RTO | Dev, low-stakes production |
| Single (ephemeral) | Local SSD only, gone on host loss | Best-effort rehome | Preview, branch, fork |
| Single + WAL sink | Quorum WAL, zero data loss | Volume rehomes, minutes RTO | Production without HA budget |
| HA | Sync replication across hosts | Warm standby, seconds RTO | Production needing fast failover |
HA + ephemeral is rejected. Nothing to be highly available about on a throwaway volume.
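Which tier a running box landed on can be read back from Postgres. A small sketch using standard settings and functions; the mapping in the comments follows the table and bullets above:

```bash
# synchronous_commit = off        -> ephemeral volume (preview/branch/fork)
# synchronous_standby_names set   -> WAL sink or HA tier
# pg_is_in_recovery() = true      -> this box is the warm standby, not the primary
psql "postgres://postgres@localhost:5432/postgres" -c \
  "SELECT name, setting FROM pg_settings
    WHERE name IN ('synchronous_commit', 'synchronous_standby_names');"
psql "postgres://postgres@localhost:5432/postgres" -c "SELECT pg_is_in_recovery();"
```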
This repo builds three binaries plus the rootfs image:
| Binary | Role |
|---|---|
| `beyond-pg` | Init, supervisor, and boot orchestrator. PID 1 inside the VM |
| `beyond-pg-sink` | WAL sink — runs `pg_receivewal --synchronous` and serves segments over HTTPS |
| `beyond-pg-cdc` | Logical decoding sidecar — streams changes over QUIC to downstream consumers |
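`beyond-pg-sink` wraps the standard `pg_receivewal` client. A sketch of the kind of invocation it supervises; the slot name, directory, and conninfo are assumptions, and the real values come from MMDS:

```bash
# Stream WAL into the sink's local directory, flushing and reporting back
# after every write. The application_name in the conninfo is what the primary
# would match against synchronous_standby_names.
pg_receivewal \
  --directory=/var/lib/beyond/wal \
  --slot=wal_sink \
  --synchronous \
  --dbname="postgres://replicator@primary.internal:5432/postgres?application_name=wal_sink"
```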
The image is built with Packer; the `mise run build:image` task assembles the rootfs.
Settings come from MMDS at boot. The image reads them and rewrites Postgres config without a restart.
| MMDS key | Effect |
|---|---|
| `BEYOND_PG_TIER` | `primary` (default), `replica`, or `sink`. Writes `standby.signal` for replicas |
| `BEYOND_PG_WAL_SINK` | URL of the WAL sink. Adds the sink to `synchronous_standby_names` |
| `BEYOND_VOLUME_EPHEMERAL` | `1` → `synchronous_commit = off` |
| `BEYOND_PG_MEMORY_MB` | Read by the auto-tuner; rewrites `shared_buffers`, `effective_cache_size`, `work_mem` |
| `BEYOND_PG_PITR_TARGET` | Restore target timestamp or LSN |
| `BEYOND_PG_PRIMARY_CONNINFO` | Replica only — `primary_conninfo` value |
The boot sequence is idempotent. Every step checks state before acting, every transient failure is retryable.
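A sketch of what one such step might look like, assuming the Firecracker MMDS endpoint at 169.254.169.254 with v2 token handling, a flat key layout, and `/var/lib/postgresql` as the data directory; the actual logic lives inside `beyond-pg`:

```bash
# Fetch the tier from MMDS (v2 requires a short-lived session token).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-metadata-token-ttl-seconds: 60")
TIER=$(curl -s -H "X-metadata-token: ${TOKEN}" \
  "http://169.254.169.254/BEYOND_PG_TIER")

# Idempotent: check state before acting. A replica needs standby.signal in the
# data directory; creating it twice is harmless, so reruns are safe.
PGDATA=/var/lib/postgresql
if [ "${TIER}" = "replica" ] && [ ! -f "${PGDATA}/standby.signal" ]; then
  touch "${PGDATA}/standby.signal"
fi
```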
Not a Postgres fork. Not Aurora-compatible. Not a sharding layer. Not a managed-Postgres reimplementation.
It's standard Postgres on a substrate that forks. Every Postgres client, ORM, migration tool, and backup tool works unchanged. The wire protocol is the SDK.