harpctl command-line tool v4
harpctl is a command-line tool for directly manipulating the contents of the consensus layer to fit the desired cluster geometry. You can use it to, for example, examine node status, "promote" a node to lead master, disable or enable cluster management, bootstrap cluster settings, and so on.
Synopsis
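A general usage sketch follows; the commands listed are the ones documented in this section, and the exact flag set varies by HARP version (run harpctl --help for the authoritative list):

```bash
harpctl <command> [<subcommand>] [flags]

# Commands documented below:
#   apply, fence, get, manage, promote, set, unfence, unmanage
```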
In addition to this basic synopsis, each of the available commands has its own series of allowed subcommands and flags.
Configuration
harpctl must interact with the consensus layer to operate. This means a minimum set of settings must be defined in config.yml for successful execution. This includes:

- dcs.driver
- dcs.endpoints
- cluster.name
As an example using etcd:
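This sketch assumes a cluster named mycluster and three etcd hosts; substitute your own cluster name and endpoints:

```yaml
cluster:
  name: mycluster

dcs:
  driver: etcd
  endpoints:
    - host1:2379
    - host2:2379
    - host3:2379
```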
See Configuration for details.
Execution
Execute harpctl like this:
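```bash
# general form; "command" stands for any harpctl command
harpctl command [flags]
```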
Each command has its own series of subcommands and flags. Further help for these is available by executing this command:
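```bash
# replace "command" with any harpctl command name, such as get
harpctl command --help
```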
harpctl apply
You must use an apply command to "bootstrap" a HARP cluster using a file that defines various attributes of the intended cluster. Execute an apply command like this:
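A representative invocation; cluster.yml is a placeholder for your bootstrap file, and you can confirm the flag name with harpctl apply --help:

```bash
harpctl apply --file cluster.yml
```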
This essentially creates all of the initial cluster metadata, default or custom management settings, and so on. This is done here because the DCS is used as the ultimate source of truth for managing the cluster, and this makes it possible to change these settings dynamically.
This can either be done once to bootstrap the entire cluster, once per type of object, or even on a per-node basis for the sake of simplicity.
This is an example of a bootstrap file for a single node:
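A sketch of such a file; the cluster name, node name, DSN, location, and data directory are placeholders to adapt to your environment:

```yaml
cluster:
  name: mycluster

nodes:
  - name: firstnode
    dsn: host=firstnode dbname=bdrdb user=harp_user
    location: dc1
    priority: 100
    pg_data_dir: /db/pgdata
```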
As seen here, it is good practice to always include a cluster name preamble to ensure all changes target the correct HARP cluster, in case several are operating in the same environment.
Once apply completes without error, the node is integrated with the rest of the cluster.
Note
You can also use this command to bootstrap the entire cluster at once since all defined sections are applied at the same time. However, we don't encourage this use for anything but testing as it increases the difficulty of validating each portion of the cluster during initial definition.
harpctl fence
Marks the local or specified node as fenced. A node with this status is essentially completely excluded from the cluster. HARP Proxy doesn't send it traffic, its representative HARP Manager doesn't claim the lead master lease, and further steps are also taken. If running, HARP Manager stops Postgres on the node as well.
Execute a fence command like this:
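The parentheses indicate that the node name is optional:

```bash
harpctl fence (<node-name>)
```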
The node-name is optional; if omitted, harpctl uses the name of the locally configured node.
harpctl get
Fetches information stored in the consensus layer for various elements of the cluster. This includes nodes, locations, the cluster, and so on. The full list includes:
- cluster — Returns the cluster state.
- leader — Returns the current or specified location leader.
- location — Returns current or specified location information.
- locations — Returns a list of all locations.
- node — Returns the specified Postgres node.
- nodes — Returns a list of all Postgres nodes.
- proxy — Returns current or specified proxy information.
- proxies — Returns a list of all proxy nodes.
harpctl get cluster
Fetches information stored in the consensus layer for the current cluster:
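```bash
harpctl get cluster
```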
harpctl get leader
Fetches node information for the current lead master stored in the DCS for the specified location. Use harpctl get locations to list the defined locations.
Example:
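```bash
# dc1 is a placeholder location name
harpctl get leader dc1
```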
harpctl get location
Fetches location information for the specified location. Use harpctl get locations to list the defined locations.
Example:
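```bash
# dc1 is a placeholder location name
harpctl get location dc1
```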
harpctl get locations
Fetches information for all locations currently present in the DCS.
Example:
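```bash
harpctl get locations
```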
harpctl get node
Fetches node information stored in the DCS for the specified node.
Example:
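```bash
# node1 is a placeholder node name
harpctl get node node1
```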
harpctl get nodes
Fetches node information stored in the DCS for all nodes in the cluster.
Example:
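```bash
harpctl get nodes
```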
harpctl get proxy
Fetches proxy information stored in the DCS for the specified proxy. Specify global to see proxy defaults for this cluster.
Example:
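```bash
# proxy1 is a placeholder proxy name
harpctl get proxy proxy1
```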
harpctl get proxies
Fetches proxy information stored in the DCS for all proxies in the cluster. Additionally, lists the global pseudo-proxy for default proxy settings.
Example:
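```bash
harpctl get proxies
```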
harpctl manage
If a cluster isn't in a managed state, instructs all HARP Manager services to resume monitoring Postgres and updating the consensus layer. Do this after maintenance is complete following HARP software updates or other significant changes that might affect the whole cluster.
Execute a manage command like this:
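Per the note that follows, management is currently scoped to the cluster level:

```bash
harpctl manage cluster
```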
Note
Currently you can enable or disable cluster management only at the cluster level. Later versions will also make it possible to do this for individual nodes or proxies.
harpctl promote
Promotes the next available node that meets leadership requirements to lead master in the current location. Since this is a requested event, it invokes a smooth handover where:
- The existing lead master releases the lead master lease, provided:
  - If CAMO is enabled, the promoted node must be up to date and CAMO ready, and the CAMO queue must have less than node.maximum_camo_lag bytes remaining to be applied.
  - Replication lag between the old lead master and the promoted node is less than node.maximum_lag.
- The promoted node is the only valid candidate to take the lead master lease and does so as soon as it is released by the current holder. All other nodes ignore the unset lead master lease.
- If CAMO is enabled, the promoted node temporarily disables client traffic until the CAMO queue is fully applied, even though it holds the lead master lease.
- HARP Proxy, if using PgBouncer, will PAUSE connections to allow ongoing transactions to complete. Once the lead master lease is claimed by the promoted node, it reconfigures PgBouncer for the new connection target and resumes database traffic. If HARP Proxy is using the builtin proxy, it terminates existing connections and creates new connections to the lead master as new connections are requested from the client.
Execute a promote command like this:
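The parentheses indicate that the --force flag is optional:

```bash
harpctl promote (--force)
```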
Provide the --force option to forcibly set a node to lead master, even if it doesn't meet the criteria for becoming lead master. This circumvents any verification of CAMO status or replication lag and causes an immediate transition to the promoted node. This is the only way to specify an exact node for promotion.
The node must be online and operational for this to succeed. Use this option with care.
harpctl set
Sets a specific attribute in the cluster to the supplied value. This is used to tweak configuration settings for a specific node, proxy, location, or the cluster rather than using apply. You can use this for the following object types:
- cluster — Sets cluster-related attributes.
- location — Sets specific location attributes.
- node — Sets specific node attributes.
- proxy — Sets specific proxy attributes.
harpctl set cluster
Sets cluster-related attributes only.
Example:
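A sketch; event_sync_timeout is assumed here for illustration as one cluster-level attribute, so substitute whichever attribute you need to change:

```bash
harpctl set cluster event_sync_timeout=200
```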
harpctl set node
Sets node-related attributes for the named node. Any options mentioned in Node directives are valid here.
Example:
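A sketch using maximum_lag, one of the node directives referenced earlier (a byte value); mynode is a placeholder node name:

```bash
harpctl set node mynode maximum_lag=1048576
```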
harpctl set proxy
Sets proxy-related attributes for the named proxy. Any options mentioned in the Proxy directives are valid here.
Properties set this way require a restart of the proxy before the new value takes effect.
Example:
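A sketch; proxy1 is a placeholder proxy name, and max_client_conn is assumed to be a valid attribute from the Proxy directives:

```bash
harpctl set proxy proxy1 max_client_conn=750
```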
Use global for cluster-wide proxy defaults:
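```bash
# default_pool_size is assumed to be a valid proxy attribute
harpctl set proxy global default_pool_size=10
```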
harpctl unfence
Removes the fenced attribute from the local or specified node. This removes all previously applied cluster exclusions from the node so that it can again receive traffic or hold the lead master lease. Postgres is also started if it isn't running.
Execute an unfence command like this:
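As with fence, the parentheses indicate that the node name is optional:

```bash
harpctl unfence (<node-name>)
```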
The node-name is optional. If you omit it, harpctl uses the name of the locally configured node.
harpctl unmanage
Instructs all HARP Manager services in the cluster to remain running but to stop actively monitoring Postgres or modifying the contents of the consensus layer. This means that an ordinary failover event, such as a node outage, doesn't result in a leadership migration. This is intended for system or HARP maintenance prior to making changes to HARP software or other significant changes to the cluster.
Execute an unmanage command like this:
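As with manage, the scope is currently the cluster:

```bash
harpctl unmanage cluster
```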
Note
Currently you can enable or disable cluster management only at the cluster level. Later versions will also make it possible to do this for individual nodes or proxies.