Using Failover Manager with virtual IP addresses v4
Failover Manager uses the efm_address script to assign or release a virtual IP address.
Note
Virtual IP addresses aren't supported by many cloud providers. In those environments, use another mechanism, such as an elastic IP address on AWS, that can be changed when needed by a fencing or post-promotion script.
By default, the script resides in:
/usr/edb/efm-4.<x>/bin/efm_address
Failover Manager uses the following command variations to assign or release an IPv4 or IPv6 address.

To assign a virtual IPv4 address:

```shell
# efm_address add4 <interface_name> <IPv4_addr>/<prefix>
```

To assign a virtual IPv6 address:

```shell
# efm_address add6 <interface_name> <IPv6_addr>/<prefix>
```

To release a virtual address:

```shell
# efm_address del <interface_name> <IP_address>/<prefix>
```
Where:

<interface_name> matches the name specified in the virtual.ip.interface property in the cluster properties file.

<IPv4_addr> or <IPv6_addr> matches the value specified in the virtual.ip property in the cluster properties file.

<prefix> matches the value specified in the virtual.ip.prefix property in the cluster properties file.
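Taken together, a cluster properties file fragment for these settings might look like the following (the address, interface, and prefix values are illustrative; substitute your own):

```ini
# Illustrative values -- use the interface, address, and prefix for your network
virtual.ip=172.24.38.239
virtual.ip.interface=eth0
virtual.ip.prefix=24
```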
For more information about properties that describe a virtual IP address, see The cluster properties file.
Invoke the efm_address script as the root user. The efm user is created during the installation and is granted privileges in the sudoers file to run the efm_address script. For more information about the sudoers file, see Extending Failover Manager permissions.
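The exact entries depend on your installation, but a sudoers rule granting this kind of access typically follows the pattern below (this excerpt is illustrative; the installer creates the real entry):

```
# Hypothetical sudoers entry -- allows the efm user to run the
# efm_address script as root without a password prompt.
efm ALL=(root) NOPASSWD: /usr/edb/efm-4.<x>/bin/efm_address
```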
Note
If a VIP address or any address other than the bind.address is assigned to a node, the operating system can choose the source address used when contacting the database. Be sure to modify the pg_hba.conf file on all monitored databases to allow contact from all addresses within your replication scenario.
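For example, hypothetical pg_hba.conf entries that accept connections from every address in the VIP's subnet might look like the following (the subnet, user names, and authentication method are assumptions; adjust them for your environment):

```
# Hypothetical entries -- allow replication and database connections from
# any address in the 172.24.38.0/24 subnet, so traffic sourced from the
# VIP (or any node address in that range) is accepted.
host    replication    repl_user       172.24.38.0/24    scram-sha-256
host    all            enterprisedb    172.24.38.0/24    scram-sha-256
```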
Testing the VIP
When using a virtual IP (VIP) address with Failover Manager, it's important to test the VIP functionality manually before starting Failover Manager. This catches any network-related issues before they cause a problem during an actual failover. While testing the VIP, make sure that Failover Manager isn't running.
The following steps test the actions that Failover Manager takes. The example uses the following property values:
```ini
virtual.ip=172.24.38.239
virtual.ip.interface=eth0
virtual.ip.prefix=24
ping.server.command=/bin/ping -q -c3 -w5
```
Note
The virtual.ip.prefix property specifies the number of significant bits in the virtual IP address.
When instructed to ping the VIP from a node, use the command defined by the ping.server.command property.
1. Ping the VIP from all nodes to confirm that the address isn't already in use:

```shell
# /bin/ping -q -c3 -w5 172.24.38.239
PING 172.24.38.239 (172.24.38.239) 56(84) bytes of data.

--- 172.24.38.239 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3000ms
```

You see 100% packet loss.
Important
Failover Manager uses the exit code of the ping command to determine whether the address is reachable. In this case, the exit code isn't zero. If you use a command other than ping, it must return a nonzero exit code if the address isn't reachable.
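Because only the exit code matters, any substitute for ping.server.command must follow the same convention: exit 0 when the address responds, nonzero when it doesn't. A minimal sketch of that convention using shell stand-ins (the probe functions are illustrative, not part of Failover Manager):

```shell
# Stand-ins for a probe command: Failover Manager inspects only the exit code.
probe_reachable()   { true; }   # models: ping got replies, exits 0
probe_unreachable() { false; }  # models: 100% packet loss, exits nonzero

probe_reachable
echo "reachable probe exit code: $?"      # prints 0

probe_unreachable || rc=$?
echo "unreachable probe exit code: $rc"   # prints 1
```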
2. Run the efm_address add4 command on the primary node to assign the VIP, and then confirm with ip address:

```shell
# efm_address add4 eth0 172.24.38.239/24
# ip address
<output truncated>
eth0      Link encap:Ethernet  HWaddr 36:AA:A4:F4:1C:40
          inet addr:172.24.38.239  Bcast:172.24.38.255
...
```
3. Ping the VIP from the other nodes to verify that they can reach the VIP:

```shell
# /bin/ping -q -c3 -w5 172.24.38.239
PING 172.24.38.239 (172.24.38.239) 56(84) bytes of data.

--- 172.24.38.239 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.023/0.025/0.029/0.006 ms
```

No packet loss occurs.
Important
Failover Manager uses the exit code of the ping command to determine whether the address is reachable. In this case, the exit code is zero. If you use a command other than ping, it must return a zero exit code if the address is reachable.
4. Use the efm_address del command to release the address on the primary node, and confirm the address was released with ip address:

```shell
# efm_address del eth0 172.24.38.239/24
# ip address
eth0      Link encap:Ethernet  HWaddr 22:00:0A:89:02:8E
          inet addr:10.137.2.142  Bcast:10.137.2.191
...
```

The output from this step no longer shows the VIP address on the eth0 interface.
5. Repeat step 3, this time verifying that the standby and witness don't see the VIP in use:

```shell
# /bin/ping -q -c3 -w5 172.24.38.239
PING 172.24.38.239 (172.24.38.239) 56(84) bytes of data.

--- 172.24.38.239 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3000ms
```

100% packet loss occurs. Repeat this step on all nodes.
6. Repeat step 2 on all standby nodes to assign the VIP to every node. You can ping the VIP from any node to verify that it's in use.

```shell
# efm_address add4 eth0 172.24.38.239/24
# ip address
<output truncated>
eth0      Link encap:Ethernet  HWaddr 36:AA:A4:F4:1C:40
          inet addr:172.24.38.239  Bcast:172.24.38.255
...
```
After the test steps above, release the VIP from any nonprimary node before attempting to start Failover Manager.
Note
The network interface used for the VIP doesn't have to be the same interface used for the Failover Manager agent's bind.address value. The primary agent drops the VIP as needed during a failover, and Failover Manager verifies that the VIP is no longer available before promoting a standby. A failure of the bind address network leads to primary isolation and failover.

If the VIP uses a different interface, you might encounter a timing condition in which the rest of the cluster checks for a reachable VIP before the primary agent drops it. In this case, Failover Manager retries the VIP check for the number of seconds specified in the node.timeout property to help ensure that a failover happens as expected.