Release notes for EDB Postgres Distributed version 4.0.2
This is a maintenance release for BDR 4.0 and HARP 2.0 that includes minor improvements as well as fixes for issues identified in previous versions.
Component | Version | Type | Description |
---|---|---|---|
BDR | 4.0.2 | Enhancement | Add bdr.max_worker_backoff_delay (BDR-1767) This changes the handling of the backoff delay to exponentially increase from bdr.min_worker_backoff_delay up to bdr.max_worker_backoff_delay on repeated worker failures. See the configuration sketch after this table. |
BDR | 4.0.2 | Enhancement | Add execute_locally option to bdr.replicate_ddl_command() (RT73533) This allows optional queueing of DDL commands for replication to other groups without executing them locally. See the usage sketch after this table. |
BDR | 4.0.2 | Enhancement | Change ERROR on consensus issue during JOIN to WARNING Reporting these transient errors as ERROR was confusing because they were also shown in bdr.worker_errors. They are now reported as WARNINGs. |
BDR | 4.0.2 | Bug fix | WAL decoder confirms end LSN of the running transactions record (BDR-1264) Confirm the end LSN of the running transactions record processed by the WAL decoder so that the WAL decoder slot stays up to date and WAL senders get the candidate in a timely manner. |
BDR | 4.0.2 | Bug fix | Don't wait for autopartition tasks to complete on parting nodes (BDR-1867) When a node has started the parting process, there is no reason to wait for its autopartition tasks to finish, since it is no longer part of the group. |
BDR | 4.0.2 | Bug fix | Improve handling of node name reuse during parallel join (RT74789) Nodes now have a generation number, so name reuse is easier to identify even if the node record is received as part of a snapshot. |
BDR | 4.0.2 | Bug fix | Fix locking and snapshot use during node management in the BDR manager process (RT74789) When processing multiple actions in the state machine, reacquire the lock on the processed node and refresh the snapshot so that all updates made through consensus are taken into account. |
BDR | 4.0.2 | Bug fix | Improve cleanup of catalogs on local node drop Drop all groups, not just the primary one, and drop all node state history information as well. |
BDR | 4.0.2 | Bug fix | Improve error checking for join request in bdr_init_physical Previously, bdr_init_physical would simply wait forever when there was any issue with the consensus request. It now performs the same checking as a logical join. |
BDR | 4.0.2 | Bug fix | Improve handling of various timeouts and sleeps in consensus This reduces the number of new consensus votes needed when processing many consensus requests or time-consuming ones, for example during the join of a new node. |
BDR | 4.0.2 | Bug fix | Fix handling of wal_receiver_timeout (BDR-1848) The wal_receiver_timeout was not triggered correctly due to a regression in BDR 3.7 and 4.0. |
BDR | 4.0.2 | Bug fix | Limit the bdr.standby_slot_names check when reporting flush position only to physical slots (RT77985, RT78290) Otherwise flush progress is not reported in the presence of disconnected nodes when using bdr.standby_slot_names. See the configuration sketch after this table. |
BDR | 4.0.2 | Bug fix | Fix replication of data types created during bootstrap (BDR-1784) |
BDR | 4.0.2 | Bug fix | Fix replication of arrays of built-in types that don't have binary transfer support (BDR-1042) |
BDR | 4.0.2 | Bug fix | Prevent CAMO configuration warnings if CAMO is not being used (BDR-1825) |
HARP | 2.0.2 | Enhancement | BDR consensus now generally available. HARP offers multiple options for the Distributed Consensus Service (DCS) source: etcd and BDR. The BDR consensus option can be used in deployments where etcd isn't present. Use of the BDR consensus option is no longer considered beta and is now supported for use in production environments. |
HARP | 2.0.2 | Enhancement | Transport layer proxy now generally available. HARP offers multiple proxy options for routing connections between the client application and database: application layer (L7) and transport layer (L4). The transport layer (L4) proxy simply forwards network packets, whereas the application layer (L7) proxy terminates network traffic. The transport layer proxy, previously called simple proxy, is no longer considered beta and is now supported for use in production environments. |
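Below is a minimal usage sketch of the new execute_locally option to bdr.replicate_ddl_command(). The named-parameter call style, the example DDL, and the replication set name myrepset are illustrative assumptions; consult the bdr.replicate_ddl_command() reference for the exact signature in your release.

```sql
-- Queue a DDL command for replication to the nodes subscribed to the
-- given replication sets, without executing it on the local node.
-- 'myrepset' is a placeholder replication set name.
SELECT bdr.replicate_ddl_command(
    'CREATE TABLE public.orders (id integer PRIMARY KEY, note text);',
    ARRAY['myrepset'],
    execute_locally := false
);
```

Leaving execute_locally at its default preserves the previous behavior, where the command is executed locally as well as queued for replication.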
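The backoff and slot settings touched by the entries above can be adjusted like other server settings. The following configuration sketch uses ALTER SYSTEM with placeholder values: the durations and the slot name are assumptions for illustration, and bdr.min_worker_backoff_delay is assumed to be the lower bound paired with the new bdr.max_worker_backoff_delay.

```sql
-- Placeholder values; choose durations appropriate for your cluster.
-- Worker restart delays back off exponentially between these bounds.
ALTER SYSTEM SET bdr.min_worker_backoff_delay = '1s';
ALTER SYSTEM SET bdr.max_worker_backoff_delay = '1min';

-- Physical slots that must confirm before the flush position is
-- reported; as of this release the check is limited to physical slots.
ALTER SYSTEM SET bdr.standby_slot_names = 'standby_slot_1';

-- Reload the server configuration so the new values take effect.
SELECT pg_reload_conf();
```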