Release notes for EDB Postgres Distributed version 4.1.0
EDB Postgres Distributed version 4.1.0 includes the following:
Component | Version | Type | Description |
---|---|---|---|
CLI | 1.0.0 | Feature | Ability to gather information such as the current state of replication, consensus, and nodes for an EDB Postgres Distributed cluster using the new command-line interface (CLI). |
BDR | 4.1.0 | Feature | Support in-place major upgrades of Postgres on a data node with a new command-line utility, bdr_pg_upgrade. This utility uses the standard pg_upgrade command and reduces the time and network bandwidth needed to do major version upgrades of an EDB Postgres Distributed cluster. |
BDR | 4.1.0 | Feature | Add the ability to configure a replication lag threshold. Once the threshold is met, transaction commits are throttled. This threshold allows limiting RPO without incurring the latency impact on every transaction that comes with synchronous replication. |
BDR | 4.1.0 | Feature | Global sequences are automatically configured based on data type, replacing the need to set up custom sequence handling configuration on every node. The new SnowflakeID algorithm replaces Timeshard, which had limitations. |
BDR | 4.1.0 | Feature | Add a new SQL-level interface for configuring synchronous replication durability and visibility options by group rather than by node. This approach allows you to configure all nodes consistently from a single place instead of using config files. |
BDR | 4.1.0 | Feature | Add a new synchronous replication option, Group Commit, which allows a quorum to be required before committing a transaction in an EDB Postgres Distributed group. See the sketch after this table. |
BDR | 4.1.0 | Feature | Allow a Raft request to be required for CAMO switching to Local Mode. Add a require_raft flag to the CAMO pairing configuration, which controls the behavior of switching from CAMO protected to Local Mode, introducing the option to require a majority of nodes to be connected to allow switching to Local Mode. (RT78928) |
BDR | 4.1.0 | Feature | Allow replication to continue on ALTER TABLE ... DETACH PARTITION CONCURRENTLY of an already detached partition. Similarly to how BDR 4 handles CREATE INDEX CONCURRENTLY when the same index already exists, we now allow replication to continue when ALTER TABLE ... DETACH PARTITION CONCURRENTLY is received for a partition that has already been detached. (RT78362) |
BDR | 4.1.0 | Feature | Add additional filtering options to DDL filters. DDL filters allow replication of different DDL statements through different replication sets, similar to how table membership in a replication set allows DML on different tables to be replicated through different replication sets. This release adds new controls that make the DDL filters easier to use: query_match (if defined, the query must match this regex) and exclusive (if true, other matched filters aren't taken into consideration, that is, only the exclusive filter is applied; if multiple exclusive filters match, an error is thrown). See the sketch after this table. |
BDR | 4.1.0 | Feature | Add the bdr.lock_table_locking configuration variable. When enabled, this changes the behavior of the LOCK TABLE command to take a global DML lock. See the sketch after this table. |
BDR | 4.1.0 | Feature | Implement buffered write for LCR segment file. This should reduce I/O and improve CPU usage of the Decoding Worker. |
BDR | 4.1.0 | Feature | Add support for partial unique index lookups for conflict detection. Indexes on expressions are, however, still not supported for conflict detection. (RT78368) |
BDR | 4.1.0 | Feature | Add additional statistics to bdr.stat_subscription: nstream_insert => the count of INSERTs on streamed transactions; nstream_update => the count of UPDATEs on streamed transactions; nstream_delete => the count of DELETEs on streamed transactions; nstream_truncate => the count of TRUNCATEs on streamed transactions; npre_commit_confirmations => the count of pre-commit confirmations when using CAMO; npre_commit => the count of pre-commits; ncommit_prepared => the count of prepared commits with 2PC; nabort_prepared => the count of aborts of prepared transactions with 2PC. |
BDR | 4.1.0 | Feature | Add an execute_locally option to bdr.replicate_ddl_command. This allows optional queueing of DDL commands for replication to other groups without executing them locally. See the sketch after this table. (RT73533) |
BDR | 4.1.0 | Feature | Add a fast argument to bdr.alter_subscription_disable(). The argument only influences the behavior of immediate. When set to true (the default), it stops the workers without letting them finish the current work. See the sketch after this table. (RT79798) |
BDR | 4.1.0 | Feature | Simplify bdr.{add,remove}_camo_pair functions to return void. |
BDR | 4.1.0 | Feature | Add a connectivity/lag check before taking a global lock so that the application or user doesn't have to wait for minutes to get a lock timeout when there are obvious connectivity issues. The level at which the check is reported can be set to DEBUG, LOG, WARNING (default), or ERROR. |
BDR | 4.1.0 | Feature | Log conflicts only to the conflict log table by default. They're no longer logged to the server log file by default, but this can be overridden. |
BDR | 4.1.0 | Feature | Improve reporting of remote errors during node join. |
BDR | 4.1.0 | Feature | Make autopartition worker's max naptime configurable. |
BDR | 4.1.0 | Feature | Add the ability to request partitions up to the given upper bound with autopartition. |
BDR | 4.1.0 | Feature | Don't try to replicate DDL run on a subscriber-only node. It has nowhere to replicate to, so any attempt to do so fails. This is the same as how logical standbys behave. |
BDR | 4.1.0 | Feature | Add the bdr.accept_connections configuration variable. When false, walsender connections to replication slots using the BDR output plugin fail. This is useful primarily during restore of a single node from backup. See the sketch after this table. |
BDR | 4.1.0 | Bug fix | Keep the lock_timeout as configured on non-CAMO-partner BDR nodes. A CAMO partner uses a low lock_timeout when applying transactions from its origin node. This was inadvertently done for all BDR nodes rather than just the CAMO partner, which may have led to spurious lock_timeout errors on pglogical writer processes on normal BDR nodes. |
BDR | 4.1.0 | Bug fix | Show a proper wait event for CAMO/Eager confirmation waits. Show the correct "BDR Prepare Phase"/"BDR Commit Phase" in bdr.stat_activity instead of the default "unknown wait event". (RT75900) |
BDR | 4.1.0 | Bug fix | Reduce logging for bdr.run_on_nodes. Don't log when setting bdr.ddl_replication to off if it's done with the run_on_nodes variants of the function. This eliminates the flood of logs from monitoring functions. (RT80973) |
BDR | 4.1.0 | Bug fix | Fix replication of arrays of composite types and arrays of built-in types that don't support binary network encoding. |
BDR | 4.1.0 | Bug fix | Fix replication of data types created during bootstrap. |
BDR | 4.1.0 | Bug fix | Confirm the end LSN of the running transactions record processed by the WAL decoder so that the WAL decoder slot remains up to date and the WAL sender gets the candidate in a timely manner. |
BDR | 4.1.0 | Bug fix | Don't wait for autopartition tasks to complete on parting nodes. |
BDR | 4.1.0 | Bug fix | Limit the bdr.standby_slot_names check when reporting flush position to physical slots only. Otherwise, flush progress isn't reported in the presence of disconnected nodes when using bdr.standby_slot_names. (RT77985, RT78290) |
BDR | 4.1.0 | Bug fix | Request a feedback reply from the walsender if we're close to wal_receiver_timeout. |
BDR | 4.1.0 | Bug fix | Don't record the dependency of an auto-partitioned table on the BDR extension more than once. This resulted in "ERROR: unexpected number of extension dependency records" errors from autopartition and broken replication on conflicts when this happened. Note that existing broken tables still need to be fixed manually by removing the duplicate dependency record. |
BDR | 4.1.0 | Bug fix | Improve keepalive handling in the receiver. Don't update the position based on a keepalive when in the middle of a streaming transaction, as we might lose data on crash if we do that. There's also new flush and signaling logic that should improve latency in low-TPS scenarios. |
BDR | 4.1.0 | Bug fix | Only do post-CREATE command processing when a BDR node exists in the database. |
BDR | 4.1.0 | Bug fix | Don't try to log ERROR conflicts to conflict history table. |
BDR | 4.1.0 | Bug fix | Fix a segfault where a conflict_slot was being used after it was released during multi-insert (COPY). (RT76439) |
BDR | 4.1.0 | Bug fix | Prevent walsender processes from spinning when facing lagging standby slots. Correct the signaling to reset a latch so that a walsender process doesn't consume 100% of a CPU when one of the standby slots is lagging behind. (RT80295, RT78290) |
BDR | 4.1.0 | Bug fix | Fix handling of wal_sender_timeout when bdr.standby_slot_names is used. (RT78290) |
BDR | 4.1.0 | Bug fix | Fix reporting of disconnected slots in bdr.monitor_local_replslots. Previously, they could be reported as missing instead of disconnected. |
BDR | 4.1.0 | Bug fix | Fix apply timestamp reporting for down subscriptions in the bdr.get_subscription_progress() function and in bdr.subscription_summary, which uses that function. Previously, it reported a garbage value. |
BDR | 4.1.0 | Bug fix | Fix snapshot handling in various places in BDR workers. |
BDR | 4.1.0 | Bug fix | Be more consistent about reporting timestamps and LSNs as NULL in monitoring functions when no value is available. |
BDR | 4.1.0 | Bug fix | Reduce log information when switching between writer processes. |
BDR | 4.1.0 | Bug fix | Don't do a superuser check when a configuration parameter was specified on the Postgres command line. We can't run transactions there yet, and it's guaranteed to be a superuser making the change at that stage. |
BDR | 4.1.0 | Bug fix | Use 64 bits for calculating lag size in bytes to eliminate the risk of overflow with large lag. |
HARP | 2.1.0 | Feature | The BDR DCS now uses a push notification from the consensus layer rather than polling nodes. This change reduces the time for new leader selection and the load that HARP puts on the BDR DCS, since it no longer needs to poll at short intervals. |
HARP | 2.1.0 | Feature | TPA now restarts each HARP Proxy one by one and waits until it comes back, to reduce any downtime incurred by the application during software upgrades. |
HARP | 2.1.0 | Feature | Support for embedding PgBouncer directly into HARP Proxy is now deprecated and will be removed in the next major release of HARP. It's now possible to configure TPA to put PgBouncer on the same node as HARP Proxy and point it to that HARP Proxy. |
HARP | 2.1.0 | Bug fix | harpctl promote <node_name> would occasionally promote a different node than the one specified. This has been fixed. (RT75406) |
HARP | 2.1.0 | Bug fix | Fencing would sometimes fail when using BDR as the Distributed Consensus Service. This has been corrected. |
HARP | 2.1.0 | Bug fix | harpctl apply no longer turns off routing for the leader after the cluster has been established. (RT80790) |
HARP | 2.1.0 | Bug fix | Harp-manager no longer exits if it can't start a failed database. It keeps retrying with randomly increasing periods. (RT78516) |
HARP | 2.1.0 | Bug fix | The internal PgBouncer proxy implementation had a memory leak. This has been remediated. |
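
The Group Commit entry above refers to the new group-level commit scope interface. The following is a minimal sketch only, assuming the bdr.add_commit_scope() function and a session-level bdr.commit_scope setting described in the BDR 4.1 documentation; the group name left_dc, the scope name, and the exact rule syntax are illustrative assumptions, so check the documentation for your installed version.

```sql
-- Sketch only (names and rule syntax are assumptions, not verified output):
-- define a commit scope on the node group "left_dc" that requires any two
-- nodes of that group to confirm a transaction before it commits.
SELECT bdr.add_commit_scope(
    commit_scope_name := 'example_scope',
    origin_node_group := 'left_dc',
    rule              := 'ANY 2 (left_dc)'
);

-- Assumption: a session-level setting selects the scope for new transactions.
SET bdr.commit_scope = 'example_scope';
```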
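
A sketch of the new DDL filter controls, assuming query_match and exclusive are passed to the existing bdr.replication_set_add_ddl_filter() function; the replication set name, filter name, and regex are made up, and the parameter names other than query_match and exclusive are assumptions.

```sql
-- Sketch only: replicate CREATE INDEX statements whose query text matches a
-- regex through a dedicated replication set, and mark the filter exclusive so
-- that other matching filters are ignored.
SELECT bdr.replication_set_add_ddl_filter(
    set_name        := 'index_only_set',   -- assumed parameter name
    ddl_filter_name := 'reports_indexes',
    command_tag     := 'CREATE INDEX',
    query_match     := 'reports_.*',       -- query must match this regex
    exclusive       := true                -- ignore other matched filters
);
```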
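
A sketch of the execute_locally option on bdr.replicate_ddl_command; the ddl_cmd and replication_sets parameter names, the replication set, and the DDL statement are illustrative assumptions.

```sql
-- Sketch only: queue a DDL command for replication to another group's
-- replication set without executing it on the local node.
SELECT bdr.replicate_ddl_command(
    ddl_cmd          := 'ALTER TABLE public.orders ADD COLUMN note text;',
    replication_sets := ARRAY['remote_group_set'],
    execute_locally  := false
);
```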
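
A sketch of the new fast argument to bdr.alter_subscription_disable(); the subscription name is made up, and the subscription_name and immediate parameter names are assumed from the existing function.

```sql
-- Sketch only: disable a subscription immediately, but let its workers finish
-- their current work first by turning the new fast behavior off
-- (fast defaults to true, which stops the workers without waiting).
SELECT bdr.alter_subscription_disable(
    subscription_name := 'bdr_appdb_group_node2_node1',
    immediate         := true,
    fast              := false
);
```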
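
The two new configuration variables, bdr.lock_table_locking and bdr.accept_connections, are set like any other Postgres setting. A minimal sketch, assuming both can be changed with standard SET/ALTER SYSTEM commands; the table name is made up, and whether bdr.accept_connections takes effect on reload or requires a restart isn't stated in the notes above.

```sql
-- Sketch only: take a global DML lock via LOCK TABLE in this transaction.
BEGIN;
SET LOCAL bdr.lock_table_locking = on;
LOCK TABLE public.orders IN ACCESS EXCLUSIVE MODE;
-- ... perform work while holding the lock ...
COMMIT;

-- Sketch only: refuse walsender connections to BDR replication slots while
-- restoring a single node from backup, then re-enable them afterward.
ALTER SYSTEM SET bdr.accept_connections = off;
SELECT pg_reload_conf();   -- assumes the setting is reloadable
-- ... restore the node ...
ALTER SYSTEM SET bdr.accept_connections = on;
SELECT pg_reload_conf();
```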