Want to see what it takes to get the EDB Postgres for Kubernetes Operator up and running? This section demonstrates the following:
- Installing the EDB Postgres for Kubernetes Operator
- Deploying a three-node PostgreSQL cluster
- Installing and using the kubectl-cnp plugin
- Testing failover to verify the resilience of the cluster
It will take roughly 5-10 minutes to work through.
This demo is interactive. You can follow along right in your browser. Once the environment initializes, you'll see a terminal open at the bottom of the screen.
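If you're working outside the hosted environment, you'll need k3d (and kubectl) installed first. One way to get k3d, assuming Docker is already running on a Linux or macOS host, is its official install script:
# Download and run the k3d install script (assumes Docker is available)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# Confirm the binary is on your PATH
k3d version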
Once k3d is ready, we need to start a cluster:
k3d cluster create
Output
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0002] Starting Node 'k3d-k3s-default-tools'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1'
INFO[0007] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6'
INFO[0010] Using the k3d-tools node to gather environment information
INFO[0011] HostIP: using network gateway 172.17.0.1 address
INFO[0011] Starting cluster 'k3s-default'
INFO[0011] Starting servers...
INFO[0011] Starting Node 'k3d-k3s-default-server-0'
INFO[0016] All agents already running.
INFO[0016] Starting helpers...
INFO[0016] Starting Node 'k3d-k3s-default-serverlb'
INFO[0023] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0025] Cluster 'k3s-default' created successfully!
INFO[0025] You can now use it like this:
kubectl cluster-info
This will create the Kubernetes cluster, and you will be ready to use it.
Verify that it works with the following command:
kubectl get nodes
Output
NAME STATUS ROLES AGE VERSION
k3d-k3s-default-server-0 Ready control-plane,master 32s v1.24.4+k3s1
You will see one node called k3d-k3s-default-server-0. If the status isn't yet "Ready", wait a few seconds and run the command above again.
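If you'd rather not poll by hand, kubectl can also block until the node reports Ready (a minimal sketch; the two-minute timeout is an arbitrary choice):
# Wait for every node to report the Ready condition
kubectl wait --for=condition=Ready node --all --timeout=120s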
Install EDB Postgres for Kubernetes
Now that the Kubernetes cluster is running, you can proceed with EDB Postgres for Kubernetes installation as described in the "Installation and upgrades" section:
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.18.0.yaml
Output
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
configmap/postgresql-operator-default-monitoring created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
And then verify that it was successfully installed:
kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
Output
NAME READY UP-TO-DATE AVAILABLE AGE
postgresql-operator-controller-manager 1/1 1 1 52s
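An alternative that blocks until the operator deployment finishes rolling out (a minimal sketch):
# Wait for the operator deployment to become available
kubectl rollout status deployment -n postgresql-operator-system postgresql-operator-controller-manager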
Deploy a PostgreSQL cluster
As with any other deployment in Kubernetes, to deploy a PostgreSQL cluster you need to apply a configuration file that defines your desired Cluster.
The cluster-example.yaml sample file defines a simple Cluster using the default storage class to allocate disk space:
cat <<EOF > cluster-example.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised
  storage:
    size: 1Gi
EOF
There's more
For more detailed information about the available options, please refer to the "API Reference" section.
To create the three-node PostgreSQL cluster, run the following command:
kubectl apply -f cluster-example.yaml
Output
cluster.postgresql.k8s.enterprisedb.io/cluster-example created
You can check that the pods are being created with the get pods command. It'll take a bit to initialize, so if you run it immediately after applying the cluster configuration, you'll see the status as Init: or PodInitializing:
kubectl get pods
Output
NAME READY STATUS RESTARTS AGE
cluster-example-1-initdb-sdr25 0/1 PodInitializing 0 20s
...give it a minute, and then check on it again:
kubectl get pods
Output
NAME READY STATUS RESTARTS AGE
cluster-example-1 1/1 Running 0 47s
cluster-example-2 1/1 Running 0 24s
cluster-example-3 1/1 Running 0 8s
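Rather than re-running get pods, you can also make kubectl block until the three instance pods are ready (a minimal sketch using the pod names from the output above; the timeout is an arbitrary choice):
# Wait up to three minutes for all three instances to become Ready
kubectl wait --for=condition=Ready pod cluster-example-1 cluster-example-2 cluster-example-3 --timeout=180s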
Now we can check the status of the cluster:
kubectl get cluster cluster-example -o yaml
Output
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
creationTimestamp: "2022-09-06T21:18:53Z"
generation: 1
name: cluster-example
namespace: default
resourceVersion: "2037"
uid: e6d88753-e5d5-414c-a7ec-35c6c27f5a9a
spec:
affinity:
podAntiAffinityType: preferred
topologyKey: ""
bootstrap:
initdb:
database: app
encoding: UTF8
localeCType: C
localeCollate: C
owner: app
enableSuperuserAccess: true
imageName: quay.io/enterprisedb/postgresql:15.0
instances: 3
logLevel: info
maxSyncReplicas: 0
minSyncReplicas: 0
monitoring:
customQueriesConfigMap:
- key: queries
name: postgresql-operator-default-monitoring
disableDefaultQueries: false
enablePodMonitor: false
postgresGID: 26
postgresUID: 26
postgresql:
parameters:
archive_mode: "on"
archive_timeout: 5min
dynamic_shared_memory_type: posix
log_destination: csvlog
log_directory: /controller/log
log_filename: postgres
log_rotation_age: "0"
log_rotation_size: "0"
log_truncate_on_rotation: "false"
logging_collector: "on"
max_parallel_workers: "32"
max_replication_slots: "32"
max_worker_processes: "32"
shared_memory_type: mmap
shared_preload_libraries: ""
wal_keep_size: 512MB
wal_receiver_timeout: 5s
wal_sender_timeout: 5s
syncReplicaElectionConstraint:
enabled: false
primaryUpdateMethod: switchover
primaryUpdateStrategy: unsupervised
resources: {}
startDelay: 30
stopDelay: 30
storage:
resizeInUseVolumes: true
size: 1Gi
switchoverDelay: 40000000
status:
certificates:
clientCASecret: cluster-example-ca
expirations:
cluster-example-ca: 2022-12-05 21:13:54 +0000 UTC
cluster-example-replication: 2022-12-05 21:13:54 +0000 UTC
cluster-example-server: 2022-12-05 21:13:54 +0000 UTC
replicationTLSSecret: cluster-example-replication
serverAltDNSNames:
- cluster-example-rw
- cluster-example-rw.default
- cluster-example-rw.default.svc
- cluster-example-r
- cluster-example-r.default
- cluster-example-r.default.svc
- cluster-example-ro
- cluster-example-ro.default
- cluster-example-ro.default.svc
serverCASecret: cluster-example-ca
serverTLSSecret: cluster-example-server
cloudNativePostgresqlCommitHash: ad578cb1
cloudNativePostgresqlOperatorHash: 9f5db5e0e804fb51c6962140c0a447766bf2dd4d96dfa8d8529b8542754a23a4
conditions:
- lastTransitionTime: "2022-09-06T21:20:12Z"
message: Cluster is Ready
reason: ClusterIsReady
status: "True"
type: Ready
configMapResourceVersion:
metrics:
postgresql-operator-default-monitoring: "810"
currentPrimary: cluster-example-1
currentPrimaryTimestamp: "2022-09-06T21:19:31.040336Z"
healthyPVC:
- cluster-example-1
- cluster-example-2
- cluster-example-3
instances: 3
instancesReportedState:
cluster-example-1:
isPrimary: true
timeLineID: 1
cluster-example-2:
isPrimary: false
timeLineID: 1
cluster-example-3:
isPrimary: false
timeLineID: 1
instancesStatus:
healthy:
- cluster-example-1
- cluster-example-2
- cluster-example-3
latestGeneratedNode: 3
licenseStatus:
isImplicit: true
isTrial: true
licenseExpiration: "2022-10-06T21:18:53Z"
licenseStatus: Implicit trial license
repositoryAccess: false
valid: true
phase: Cluster in healthy state
poolerIntegrations:
pgBouncerIntegration: {}
pvcCount: 3
readService: cluster-example-r
readyInstances: 3
secretsResourceVersion:
applicationSecretVersion: "778"
clientCaSecretVersion: "774"
replicationSecretVersion: "776"
serverCaSecretVersion: "774"
serverSecretVersion: "775"
superuserSecretVersion: "777"
targetPrimary: cluster-example-1
targetPrimaryTimestamp: "2022-09-06T21:18:54.556099Z"
timelineID: 1
topology:
instances:
cluster-example-1: {}
cluster-example-2: {}
cluster-example-3: {}
successfullyExtracted: true
writeService: cluster-example-rw
Important
The immutable infrastructure paradigm requires that you always point to a specific version of the container image. Never use tags like latest or 13 in a production environment, as they might lead to unpredictable scenarios in terms of update policies and version consistency in the cluster.
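To see exactly which image a cluster is running, you can query the imageName field shown in the spec above (a minimal sketch):
# Print the image the operator resolved for this cluster
kubectl get cluster cluster-example -o jsonpath='{.spec.imageName}'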
Install the kubectl-cnp plugin
EDB Postgres for Kubernetes provides a plugin for kubectl to manage a cluster in Kubernetes, along with a script to install it:
curl -sSfL \
https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
sudo sh -s -- -b /usr/local/bin
Output
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
EnterpriseDB/kubectl-cnp info found version: 1.18.0 for v1.18.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
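You can confirm that kubectl has picked it up using kubectl's built-in plugin listing:
# List all kubectl plugins found on the PATH; kubectl-cnp should appear
kubectl plugin list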
The cnp command is now available in kubectl:
kubectl cnp status cluster-example
Output
Cluster Summary
Name: cluster-example
Namespace: default
System ID: 7140379538380623889
PostgreSQL Image: quay.io/enterprisedb/postgresql:15.0
Primary instance: cluster-example-1
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Current Write LSN: 0/5000060 (Timeline: 1 - WAL File: 000000010000000000000005)
Certificates Status
Certificate Name Expiration Date Days Left Until Expiration
---------------- --------------- --------------------------
cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99
cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99
cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99
Continuous Backup status
Not configured
Streaming Replication status
Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority
---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- -------------
cluster-example-2 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0
cluster-example-3 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0
Instances status
Name Database Size Current LSN Replication role Status QoS Manager Version Node
---- ------------- ----------- ---------------- ------ --- --------------- ----
cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0
cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0
cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0
Testing failover
As our status checks show, we're running two replicas. If something happens to the primary instance of PostgreSQL, the cluster will fail over to one of them. Let's demonstrate this by killing the primary pod:
kubectl delete pod --wait=false cluster-example-1
Output
pod "cluster-example-1" deleted This simulates a hard shutdown of the server - a scenario where something has gone wrong.
Now if we check the status...
kubectl cnp status cluster-example
Output
Cluster Summary
Switchover in progress
Name: cluster-example
Namespace: default
System ID: 7140379538380623889
PostgreSQL Image: quay.io/enterprisedb/postgresql:15.0
Primary instance: cluster-example-1 (switching to cluster-example-2)
Status: Failing over Failing over from cluster-example-1 to cluster-example-2
Instances: 3
Ready instances: 2
Current Write LSN: 0/6000F58 (Timeline: 2 - WAL File: 000000020000000000000006)
Certificates Status
Certificate Name Expiration Date Days Left Until Expiration
---------------- --------------- --------------------------
cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99
cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99
cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99
Continuous Backup status
Not configured
Streaming Replication status
Not available yet
Instances status
Name Database Size Current LSN Replication role Status QoS Manager Version Node
---- ------------- ----------- ---------------- ------ --- --------------- ----
cluster-example-2 33 MB 0/6000F58 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0
cluster-example-3 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.18.0 k3d-k3s-default-server-0
...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:
kubectl cnp status cluster-example
Output
Cluster Summary
Name: cluster-example
Namespace: default
System ID: 7140379538380623889
PostgreSQL Image: quay.io/enterprisedb/postgresql:15.0
Primary instance: cluster-example-2
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Current Write LSN: 0/6004CD8 (Timeline: 2 - WAL File: 000000020000000000000006)
Certificates Status
Certificate Name Expiration Date Days Left Until Expiration
---------------- --------------- --------------------------
cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99
cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99
cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99
Continuous Backup status
Not configured
Streaming Replication status
Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority
---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- -------------
cluster-example-1 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0
cluster-example-3 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0
Instances status
Name Database Size Current LSN Replication role Status QoS Manager Version Node
---- ------------- ----------- ---------------- ------ --- --------------- ----
cluster-example-2 33 MB 0/6004CD8 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0
cluster-example-1 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0
cluster-example-3 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0
Further reading
This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more possible, and certainly much more that is prudent to do before you ever deploy in a production environment!
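When you're done experimenting, you can tear down the demo environment (this deletes the k3d cluster created at the start, which uses the default name):
# Remove the k3d cluster and its containers; "k3s-default" is k3d's default cluster name
k3d cluster delete k3s-default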