BDR (Bi-Directional Replication) v4
BDR is a PostgreSQL extension that provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication. It enables distributed Postgres clusters with high availability up to five 9s.
BDR provides loosely coupled, multi-master logical replication using a mesh topology. This means that you can write to any server and the changes are sent directly, row-by-row, to all the other servers that are part of the same BDR group.
By default, BDR uses asynchronous replication, applying changes on the peer nodes only after the local commit. Multiple synchronous replication options are also available.
Basic architecture
Multiple groups
A BDR node is a member of at least one node group, and in the most basic architecture there is a single node group for the whole BDR cluster.
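As a minimal sketch of how group membership is established, the following uses BDR's SQL-level interface. The node names and connection strings are placeholders, and the exact function signatures should be checked against the BDR node-management documentation.

```sql
-- On the first node: register the local node and create the group.
-- Node names and DSNs below are hypothetical.
SELECT bdr.create_node('node1', 'host=node1 dbname=appdb');
SELECT bdr.create_node_group('bdrgroup');

-- On a second node: register it, then join the existing group by
-- pointing at any node that is already a member.
SELECT bdr.create_node('node2', 'host=node2 dbname=appdb');
SELECT bdr.join_node_group('host=node1 dbname=appdb', 'bdrgroup');
```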
Multiple masters
Each node (database) participating in a BDR group both receives changes from other members and can be written to directly by the user.
This is distinct from hot or warm standby, where only one master server accepts writes, and all the other nodes are standbys that replicate either from the master or from another standby.
You don't have to write to all the masters all of the time. A frequent configuration directs writes mostly to just one master.
Asynchronous, by default
Changes made on one BDR node aren't replicated to other nodes until they're committed locally. As a result, the data isn't exactly the same on all nodes at any given time. Some nodes have data that hasn't yet arrived at other nodes. PostgreSQL's block-based replication solutions default to asynchronous replication as well. In BDR, because there are multiple masters and, as a result, multiple data streams, data on different nodes might differ even when synchronous_commit and synchronous_standby_names are used.
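synchronous_commit and synchronous_standby_names are standard PostgreSQL settings. A minimal sketch of raising the local durability guarantee might look like the following; the standby names are placeholders, and the application_name actually used by the relevant BDR peers should be confirmed from pg_stat_replication.

```sql
-- Standard PostgreSQL settings; the names listed are placeholders and
-- must match the application_name reported by the intended peers.
ALTER SYSTEM SET synchronous_commit = 'on';
ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (node2, node3)';
SELECT pg_reload_conf();
```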
Mesh topology
BDR is structured around a mesh network where every node connects to every other node and all nodes exchange data directly with each other. There's no forwarding of data in BDR except in special circumstances such as adding and removing nodes. Data can arrive from outside the EDB Postgres Distributed cluster or be sent onwards using native PostgreSQL logical replication.
Logical replication
Logical replication is a method of replicating data rows and their changes based on their replication identity (usually a primary key). We use the term logical in contrast to physical replication, which uses exact block addresses and byte-by-byte replication. Index changes aren't replicated, thereby avoiding write amplification and reducing bandwidth.
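For illustration, the replication identity is normally the primary key; a table without a usable key can fall back to identifying rows by their full contents. The table names here are hypothetical, and the statements are standard PostgreSQL.

```sql
-- Rows in this table are identified by the primary key (the usual case).
CREATE TABLE orders (
    order_id bigint PRIMARY KEY,
    customer text,
    total    numeric
);

-- A table without a suitable key can use the whole row as its identity,
-- at the cost of larger change records.
CREATE TABLE audit_log (event_time timestamptz, detail text);
ALTER TABLE audit_log REPLICA IDENTITY FULL;
```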
Logical replication starts by copying a snapshot of the data from the source node. Once that is done, later commits are sent to other nodes as they occur in real time. Changes are replicated without re-executing SQL, so the exact data written is replicated quickly and accurately.
Nodes apply data in the order in which commits were made on the source node, so transactional consistency is guaranteed for the changes from any single node. Changes from different nodes are applied independently of one another to ensure the rapid replication of changes.
Replicated data is sent in binary form, when it's safe to do so.
High availability
Each master node can be protected by one or more standby nodes, so any node that goes down can be quickly replaced and continue. Each standby node can be either a logical or a physical standby node.
Replication continues between currently connected nodes even if one or more nodes are currently unavailable. When the node recovers, replication can restart from where it left off without missing any changes.
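Replication positions are retained in replication slots, which is what allows a recovered node to resume from where it left off. As a quick check, the standard PostgreSQL catalog views below show the retained position and current state of each peer (BDR also ships its own monitoring views).

```sql
-- Retained replay positions for each peer; a disconnected node's slot
-- keeps its restart position until the node returns.
SELECT slot_name, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;

-- Currently connected peers and how far they have replayed.
SELECT application_name, state, sent_lsn, replay_lsn
FROM pg_stat_replication;
```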
Nodes can run different release levels, negotiating the required protocols to communicate. As a result, EDB Postgres Distributed clusters can use rolling upgrades, even for major versions of database software.
DDL is replicated across nodes by default. DDL execution can be user controlled to allow rolling application upgrades, if desired.
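As a sketch of that user control, DDL replication can be switched off for a single session so a change is applied to one node only, for example during a rolling application upgrade. The bdr.ddl_replication setting and its behavior are as described in the BDR DDL documentation; the table is hypothetical.

```sql
-- DDL is replicated by default. Temporarily disable it in this session
-- to make a node-local change (use with care; schemas must be brought
-- back in sync across nodes afterwards).
SET bdr.ddl_replication = off;
ALTER TABLE orders ADD COLUMN notes text;
RESET bdr.ddl_replication;
```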
Architectural options and performance
Always On architectures
A number of different architectures can be configured, each of which has different performance and scalability characteristics.
The group is the basic building block consisting of 2+ nodes (servers). In a group, each node is in a different availability zone, with dedicated router and backup, giving immediate switchover and high availability. Each group has a dedicated replication set defined on it. If the group loses a node, you can easily repair or replace it by copying an existing node from the group.
The Always On architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability and immediate switchover (HA and IS). When two groups are leveraged in remote locations, they together also provide disaster recovery (DR).
Tables are created across both groups, so any change goes to all nodes, not just to nodes in the local group.
One node in each group is the target for the main application. All other nodes are described as shadow nodes (or "read-write replicas"), waiting to take over when needed. If a node loses contact, processing switches immediately to a shadow node. If a group fails, processing can switch to the other group. Scalability isn't the goal of this architecture.
Since we write mainly to only one node, the possibility of contention between nodes is reduced to almost zero. As a result, the performance impact is much reduced.
Secondary applications might execute against the shadow nodes, although their workloads are reduced or interrupted if the main application begins using that node.
In the future, one node will be elected as the main replicator to other groups, limiting CPU overhead of replication as the cluster grows and minimizing the bandwidth to other groups.
Supported PostgreSQL database servers
BDR is compatible with Postgres, EDB Postgres Extended Server, and EDB Postgres Advanced Server distributions and can be deployed as a standard Postgres extension. See the Compatibility matrix for details of supported version combinations.
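Because BDR is a standard Postgres extension, enabling it on a compatible server follows the usual pattern. As a sketch, assuming the BDR packages are installed and shared_preload_libraries includes 'bdr':

```sql
-- Requires the BDR packages and shared_preload_libraries = 'bdr'
-- (set in postgresql.conf, followed by a server restart).
CREATE EXTENSION IF NOT EXISTS bdr;
```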
Some key BDR features depend on certain core capabilities being available in the targeted Postgres database server. Therefore, BDR users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the BDR feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution because it doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in Choosing a Postgres distribution.
BDR offers close to native Postgres compatibility. However, some access patterns don't necessarily work as well in a multi-node setup as they do on a single instance. There are also some limitations on what can be safely replicated in a multi-node setting. Application usage goes into detail on how BDR behaves from an application development perspective.
Characteristics affecting BDR performance
By default, BDR keeps one copy of each table on each node in the group, and any changes propagate to all nodes in the group.
Since copies of the data are everywhere, SELECTs need only ever access the local node. On a read-only cluster, performance on any one node isn't affected by the number of nodes and is immune to replication conflicts on other nodes caused by long-running SELECT queries. Thus, adding nodes linearly increases the total possible SELECT throughput.
If INSERT, UPDATE, or DELETE (DML) operations are performed locally, then the changes propagate to all nodes in the group. The overhead of applying DML is less than that of the original execution, so if you run a pure write workload concurrently on multiple nodes, a multi-node cluster can handle more TPS than a single node.
Conflict handling has a cost that acts to reduce the throughput. The throughput then depends on how much contention the application displays in practice. Applications with very low contention perform better than a single node. Applications with high contention can perform worse than a single node. These results are consistent with any multi-master technology. They aren't particular to BDR.
Synchronous replication options can send changes concurrently to multiple nodes so that the replication lag is minimized. Adding more nodes means using more CPU for replication, so peak TPS reduces slightly as each node is added.
If the workload tries to use all CPU resources, then this resource constrains replication, which can then affect the replication lag.
In summary, adding more master nodes to a BDR group doesn't result in a significant increase in write throughput when most tables are replicated, because all the writes are replayed on all nodes. Because BDR writes are in general more efficient than writes coming from Postgres clients by way of SQL, some performance increase can be achieved. Read throughput generally scales linearly with the number of nodes.
Deployment
BDR is intended to be deployed in one of a small number of known-good configurations, using either TPAexec or a configuration management approach and deployment architecture approved by Technical Support.
Manual deployment isn't recommended and might not be supported.
Refer to the TPAexec Architecture User Manual for your architecture.
Log messages and documentation are currently available only in English.
Clocks and timezones
BDR is designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers don't need to be configured with matching timezones, although we do recommend using log_timezone = UTC to ensure the human-readable server log is more accessible and comparable.
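Setting the recommended log timezone uses standard PostgreSQL configuration, for example:

```sql
-- Log in UTC so server logs from different nodes are directly comparable.
ALTER SYSTEM SET log_timezone = 'UTC';
SELECT pg_reload_conf();
```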
Synchronize server clocks using NTP or other solutions.
Clock synchronization isn't critical to performance, as it is with some other solutions. Clock skew can impact origin conflict detection, although BDR provides controls to report and manage any skew that exists. BDR also provides row-version conflict detection, as described in Conflict detection.
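As a sketch of enabling row-version conflict detection on a single table, BDR exposes an administration function for this purpose. The table name is hypothetical, and the exact function name, arguments, and available methods should be checked against the Conflict detection documentation.

```sql
-- Switch the named table to row-version conflict detection.
-- Treat the exact signature and method name as an assumption to verify
-- against the BDR conflict-detection docs.
SELECT bdr.alter_table_conflict_detection('orders', 'row_version');
```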