Installation and upgrades
OpenShift
For instructions on how to install Cloud Native PostgreSQL on Red Hat OpenShift Container Platform, please refer to the "OpenShift" section.
Installation on Kubernetes
Directly using the operator manifest
The operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via kubectl.
You can install the latest operator manifest for this minor release as follows:
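A minimal sketch, assuming the manifest is published under get.enterprisedb.io and using 1.x.y as a placeholder for the latest patch version of this minor release:

```sh
# Apply the operator manifest (replace 1.x.y with the actual patch version)
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.x.y.yaml
```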
You can verify that with:
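For instance, assuming the default namespace and deployment name described in "Details about the deployment" below:

```sh
# The operator deployment should be ready, with all replicas available
kubectl get deployment -n postgresql-operator-system postgresql-operator-controller-manager
```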
Using the cnp plugin for kubectl
You can use the cnp plugin to override the default configuration options that are in the static manifests.
For example, to generate the default latest manifest but restrict the watched namespaces to a single one, you could run:
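A sketch under the assumption that your plugin version provides the install generate subcommand and a --watch-namespace flag (names are taken from the plugin documentation and may vary; "sandbox" is a hypothetical namespace):

```sh
# Generate a manifest that watches only the "sandbox" namespace
kubectl cnp install generate --watch-namespace sandbox > operator.yaml

# Review the generated manifest, then apply it
kubectl apply -f operator.yaml
```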
Please refer to the "cnp plugin" documentation for a more comprehensive example.
Warning
If you are deploying EDB Postgres for Kubernetes on GKE and get an error (... failed to call webhook...), be aware that by default traffic between worker nodes and the control plane is blocked by the firewall, except for a few specific ports, as explained in the official docs and by this issue. You'll need to either change the targetPort in the webhook service to one of the allowed ones, or open the webhooks' port (9443) on the firewall.
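As an illustration of the firewall route, here is a hypothetical gcloud command; the rule name is arbitrary, and MASTER_CIDR and NODE_TAG must be replaced with your cluster's control plane CIDR and node network tag:

```sh
# Allow the GKE control plane to reach the webhook port (9443) on worker nodes
gcloud compute firewall-rules create allow-pg4k-webhook \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:9443 \
  --source-ranges MASTER_CIDR \
  --target-tags NODE_TAG
```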
Testing the latest development snapshot
If you want to test or evaluate the latest development snapshot of EDB Postgres for Kubernetes before the next official patch release, you can download the manifests from the cloudnative-pg/artifacts repository, which provides easy access to the current trunk (main) as well as to each supported release.
For example, you can install the latest snapshot of the operator for this minor release with:
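A sketch, assuming the artifacts repository keeps per-release manifests under a release-1.x branch (replace the branch with the one matching your minor release, or main for the current trunk):

```sh
# Apply the latest snapshot manifest for this minor release (placeholder branch)
kubectl apply -f \
  https://raw.githubusercontent.com/cloudnative-pg/artifacts/release-1.x/manifests/operator-manifest.yaml
```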
Important
Snapshots are not supported by EDB Postgres for Kubernetes and are not intended for production usage.
Using the Operator Lifecycle Manager (OLM)
OperatorHub is a community-sourced index of operators available via the Operator Lifecycle Manager, which is a package management system for operators.
You can install EDB Postgres for Kubernetes using the metadata available on the EDB Postgres for Kubernetes page of the OperatorHub.io website, following the installation steps listed on that page.
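OperatorHub.io pages typically provide a one-line installation command of the following form; the exact URL is the one listed on the EDB Postgres for Kubernetes page, and the package name below is an assumption:

```sh
# Install via OLM, using the manifest published on OperatorHub.io
kubectl create -f https://operatorhub.io/install/cloud-native-postgresql.yaml
```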
Details about the deployment
In Kubernetes, the operator is by default installed in the postgresql-operator-system namespace as a Kubernetes Deployment. The name of this deployment depends on the installation method. When installed through the manifest or the cnp plugin, it is called postgresql-operator-controller-manager by default. When installed via Helm, the default name is postgresql-operator-cloudnative-pg.
Note
With Helm you can customize the name of the deployment via the fullnameOverride field in the "values.yaml" file.
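For instance, a hypothetical Helm invocation (the repository and chart names are placeholders for the actual EDB Postgres for Kubernetes chart):

```sh
# Override the operator deployment name at install time
helm upgrade --install pg4k my-repo/cloud-native-postgresql \
  --namespace postgresql-operator-system \
  --set fullnameOverride=postgres-operator
```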
You can get more information using the describe command in kubectl:
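For example, with the default manifest-based installation names:

```sh
# Inspect the operator deployment, its labels, replicas, and events
kubectl describe deployment -n postgresql-operator-system \
  postgresql-operator-controller-manager
```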
As with any Deployment, it sits on top of a ReplicaSet and supports rolling upgrades. The default configuration of the EDB Postgres for Kubernetes operator comes with a Deployment of a single replica, which is suitable for most installations. In case the node where the pod is running is not reachable anymore, the pod will be rescheduled on another node.
If you require high availability at the operator level, you can specify multiple replicas in the Deployment configuration, given that the operator supports leader election. You can also take advantage of taints and tolerations to make sure that the operator does not run on the same nodes where the actual PostgreSQL clusters are running (this might even include the control plane for self-managed Kubernetes installations).
Operator configuration
You can change the default behavior of the operator by overriding some default options. For more information, please refer to the "Operator configuration" section.
Upgrades
Important
Please carefully read the release notes before performing an upgrade as some versions might require extra steps.
Upgrading EDB Postgres for Kubernetes operator is a two-step process:
- upgrade the controller and the related Kubernetes resources
- upgrade the instance manager running in every PostgreSQL pod
Unless otherwise stated in the release notes, the first step is normally done by applying the manifest of the newer version for plain Kubernetes installations, or by using the native package manager of the distribution in use (please follow the instructions in the above sections).
The second step is automatically executed after having updated the controller, by default triggering a rolling update of every deployed PostgreSQL instance to use the new instance manager. The rolling update procedure culminates with a switchover, which is controlled by the primaryUpdateStrategy option, by default set to unsupervised. When set to supervised, users need to complete the rolling update by manually promoting a new instance through the cnp plugin for kubectl.
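For reference, completing a supervised rolling update might look like the following, using the plugin's promote subcommand (the cluster and instance names here are hypothetical):

```sh
# Promote instance "cluster-example-2" of cluster "cluster-example"
kubectl cnp promote cluster-example cluster-example-2
```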
Rolling updates
This process is discussed in-depth on the Rolling Updates page.
Important
In case primaryUpdateStrategy is set to the default value of unsupervised, an upgrade of the operator will trigger a switchover on your PostgreSQL cluster, causing a (normally negligible) downtime.
Since version 1.10.0, the rolling update behavior can be replaced with in-place updates of the instance manager. These do not require a restart of the PostgreSQL instance and, as a result, avoid a switchover in the cluster. This behavior, which is disabled by default, is described below.
In-place updates of the instance manager
By default, EDB Postgres for Kubernetes issues a rolling update of the cluster every time the operator is updated. The new instance manager shipped with the operator is added to each PostgreSQL pod via an init container.
However, this behavior can be changed via configuration to enable in-place updates of the instance manager, which is the PID 1 process that keeps the container alive.
Internally, any instance manager from version 1.10 of EDB Postgres for Kubernetes supports injection of a new executable that will replace the existing one, once the integrity verification phase is completed, as well as graceful termination of all the internal processes. When the new instance manager restarts using the new binary, it adopts the already running postmaster.
As a result, the PostgreSQL process is unaffected by the update, removing the need to perform a switchover. The downside is that the Pod is changed after it starts, breaking the pure concept of immutability.
You can enable this feature by setting the ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES environment variable to 'true' in the operator configuration.
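A minimal sketch, assuming the operator reads its configuration from a ConfigMap named postgresql-operator-controller-manager-config in the operator's namespace, as described in the "Operator configuration" section (verify the exact name for your installation):

```sh
# Enable in-place updates of the instance manager (ConfigMap name is an assumption)
kubectl -n postgresql-operator-system create configmap \
  postgresql-operator-controller-manager-config \
  --from-literal=ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES=true

# Restart the operator so the new configuration is picked up
kubectl -n postgresql-operator-system rollout restart \
  deployment postgresql-operator-controller-manager
```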
The in-place upgrade process will not change the init container image inside the Pods. Therefore, the Pod definition will not reflect the current version of the operator.
Important
This feature requires that all pods (operators and operands) run on the same platform/architecture (for example, all linux/amd64).
Compatibility among versions
EDB Postgres for Kubernetes follows semantic versioning. Every release of the operator within the same API version is compatible with the previous one. The current API version is v1, corresponding to versions 1.x.y of the operator.
In addition to new features, new versions of the operator contain bug fixes and stability enhancements. Because of this, we strongly encourage users to upgrade to the latest version of the operator, as each version is released in order to maintain the most secure and stable Postgres environment.
EDB Postgres for Kubernetes currently releases new versions of the operator at least monthly. If you are unable to apply updates as each version becomes available, we recommend upgrading through each version in sequential order to catch up periodically, rather than skipping versions.
Important
In 2022, EDB plans an LTS release of EDB Postgres for Kubernetes for environments where frequent online updates are not possible.
The release notes page contains a detailed list of the changes introduced in every released version of EDB Postgres for Kubernetes, and it must be read before upgrading to a newer version of the software.
Most versions are directly upgradable; in that case, applying the newer manifest for plain Kubernetes installations, or using the native package manager of the chosen distribution, is enough.
When versions are not directly upgradable, the old version needs to be removed before installing the new one. This won't affect user data but only the operator itself.