Mirror of https://github.com/cloudnative-pg/plugin-barman-cloud.git
Synced 2026-03-09 20:22:20 +01:00

Compare commits: 33 commits, 4f83d5a60b...d769b2cde8
Commits (newest first):

d769b2cde8, 08ab561429, 5001fe7831, c1d46ac604, e91a126c9d, 5cad545385,
d1ca8ed02b, 5cb779ed34, 71bd4d808d, fa4de0dd0f, 4d9d4dce49, 77800474c9,
be649e9dd8, e2099c6d89, 378c76a526, 064eac2199, 2c8d0aa8c4, a8b214c460,
604fb9c430, fa546eae05, ad8a1767a7, e5eb03e181, e943923f8f, 4a637d7c58,
24fbc01a33, 5bc006b035, 4f5b407c0f, b3bcf6d9c1, 757ca11304, 31acf7ce0f,
95a26f5236, 2c134eafe4, 0153abba82
`.github/workflows/barman-base-image.yml` (vendored), 38 lines deleted:

```diff
@@ -1,38 +0,0 @@
-name: Barman Base Image
-on:
-  workflow_dispatch:
-  schedule:
-    - cron: "0 0 * * 0"
-  push:
-    branches:
-      - main
-    paths:
-      - 'containers/sidecar-requirements.txt'
-
-permissions: read-all
-
-jobs:
-  build:
-    runs-on: ubuntu-latest
-    permissions:
-      packages: write
-      contents: write
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v6
-      - name: Install QEMU static binaries
-        uses: docker/setup-qemu-action@v3
-      - name: Install Task
-        uses: arduino/setup-task@v2
-      - name: Install Dagger
-        env:
-          # renovate: datasource=github-tags depName=dagger/dagger versioning=semver
-          DAGGER_VERSION: 0.19.8
-        run: |
-          curl -L https://dl.dagger.io/dagger/install.sh | BIN_DIR=$HOME/.local/bin sh
-      - name: Publish a barman-base
-        env:
-          REGISTRY_USER: ${{ github.actor }}
-          REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
-        run: |
-          task publish-barman-base
```
`.github/workflows/ci.yml` (vendored), 2 lines changed:

```diff
@@ -44,7 +44,7 @@ jobs:
       - name: Install Dagger
         env:
           # renovate: datasource=github-tags depName=dagger/dagger versioning=semver
-          DAGGER_VERSION: 0.19.8
+          DAGGER_VERSION: 0.19.10
        run: |
           curl -L https://dl.dagger.io/dagger/install.sh | BIN_DIR=$HOME/.local/bin sh
       - name: Run CI task
```
`.github/workflows/release-please.yml` (vendored), 2 lines changed:

```diff
@@ -31,7 +31,7 @@ jobs:
       - name: Install Dagger
         env:
           # renovate: datasource=github-tags depName=dagger/dagger versioning=semver
-          DAGGER_VERSION: 0.19.8
+          DAGGER_VERSION: 0.19.10
        run: |
           curl -L https://dl.dagger.io/dagger/install.sh | BIN_DIR=$HOME/.local/bin sh
       - name: Create image and manifest
```
`.github/workflows/release-publish.yml` (vendored), 2 lines changed:

```diff
@@ -21,7 +21,7 @@ jobs:
       - name: Install Dagger
         env:
           # renovate: datasource=github-tags depName=dagger/dagger versioning=semver
-          DAGGER_VERSION: 0.19.8
+          DAGGER_VERSION: 0.19.10
        run: |
           curl -L https://dl.dagger.io/dagger/install.sh | BIN_DIR=$HOME/.local/bin sh
       - name: Create image and manifest
```
```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.10.0"
+  ".": "0.11.0"
 }
```
```diff
@@ -1,3 +1,4 @@
 AKS
+AccessDenied
 AdditionalContainerArgs
 Akamai
@@ -5,6 +6,7 @@ Azurite
 BarmanObjectStore
+BarmanObjectStoreConfiguration
 BarmanObjectStores
 CLI
 CNCF
 CRD
 CloudNativePG
@@ -38,6 +40,7 @@ PITR
 PoR
 PostgreSQL
+Postgres
 PowerShell
 README
 RPO
 RTO
@@ -45,6 +48,7 @@ RecoveryWindow
 ResourceRequirements
 RetentionPolicy
+SAS
 SDK
 SFO
 SPDX
 SPDX
```
`CHANGELOG.md`, 17 lines added:

```diff
@@ -1,5 +1,22 @@
 # Changelog
 
+## [0.11.0](https://github.com/cloudnative-pg/plugin-barman-cloud/compare/v0.10.0...v0.11.0) (2026-01-30)
+
+
+### Features
+
+* Add support for DefaultAzureCredential authentication mechanism ([#681](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/681)) ([2c134ea](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/2c134eafe456ee77bbd46187040aa5041e5643ab))
+* **deps:** Update barman-cloud to v3.17.0 ([#702](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/702)) ([fa546ea](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/fa546eae0581a191abb625904b95d85a65d3ab08))
+
+
+### Bug Fixes
+
+* **azure:** Update barman-cloud with Azure validation fix ([#710](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/710)) ([0153abb](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/0153abba82437fdb9fa47094c83aaa532ce45f67)), closes [#705](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/705)
+* **deps:** Update all non-major go dependencies ([#719](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/719)) ([4a637d7](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/4a637d7c58aad9dae70303af05e2a5fd95526d63))
+* **deps:** Update k8s.io/utils digest to 914a6e7 ([#715](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/715)) ([b3bcf6d](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/b3bcf6d9c1295a3acbe38124c70de18e5db85cf1))
+* **deps:** Update module sigs.k8s.io/controller-runtime to v0.23.1 ([#748](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/748)) ([71bd4d8](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/71bd4d808dbd6d62f27b9405f3ba89a49ba42c09))
+* Resolve WAL archiving performance and memory issues ([#746](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/746)) ([378c76a](https://github.com/cloudnative-pg/plugin-barman-cloud/commit/378c76a5268907aca43104f16e2acd641903df75)), closes [#735](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/735)
+
 ## [0.10.0](https://github.com/cloudnative-pg/plugin-barman-cloud/compare/v0.9.0...v0.10.0) (2025-12-30)
```
`Taskfile.yml`, 52 lines changed:

```diff
@@ -19,9 +19,9 @@ tasks:
     desc: Run golangci-lint
     env:
       # renovate: datasource=git-refs depName=golangci-lint lookupName=https://github.com/sagikazarmark/daggerverse currentValue=main
-      DAGGER_GOLANGCI_LINT_SHA: 6133ad18e131b891d4723b8e25d69f5de077b472
+      DAGGER_GOLANGCI_LINT_SHA: 5dcc7e4c4cd5ed230046955f42e27f2166545155
       # renovate: datasource=docker depName=golangci/golangci-lint versioning=semver
-      GOLANGCI_LINT_VERSION: v2.7.2
+      GOLANGCI_LINT_VERSION: v2.8.0
     cmds:
       - >
         GITHUB_REF= dagger -sc "github.com/sagikazarmark/daggerverse/golangci-lint@${DAGGER_GOLANGCI_LINT_SHA}
@@ -85,9 +85,13 @@ tasks:
     env:
       # renovate: datasource=git-refs depName=crd-gen-refs lookupName=https://github.com/cloudnative-pg/daggerverse currentValue=main
       DAGGER_CRDGENREF_SHA: ee59e34a99940e45f87a16177b1d640975b05b74
+      # renovate: datasource=go depName=github.com/elastic/crd-ref-docs
+      CRDREFDOCS_VERSION: v0.3.0
     cmds:
       - >
-        GITHUB_REF= dagger -s call -m github.com/cloudnative-pg/daggerverse/crd-ref-docs@${DAGGER_CRDGENREF_SHA} generate
+        GITHUB_REF= dagger -s call -m github.com/cloudnative-pg/daggerverse/crd-ref-docs@${DAGGER_CRDGENREF_SHA}
+        --version ${CRDREFDOCS_VERSION}
+        generate
         --src .
         --source-path api/v1
         --config-file hack/crd-gen-refs/config.yaml
@@ -125,11 +129,11 @@ tasks:
     desc: Run go test
     env:
       # renovate: datasource=docker depName=golang versioning=semver
-      GOLANG_IMAGE_VERSION: 1.25.5
+      GOLANG_IMAGE_VERSION: 1.25.6
       # renovate: datasource=git-refs depname=kubernetes packageName=https://github.com/kubernetes/kubernetes versioning=semver
       K8S_VERSION: 1.31.0
       # renovate: datasource=git-refs depName=controller-runtime packageName=https://github.com/kubernetes-sigs/controller-runtime versioning=semver
-      SETUP_ENVTEST_VERSION: 0.22.4
+      SETUP_ENVTEST_VERSION: 0.23.1
     cmds:
       - >
         GITHUB_REF= dagger -s call -m ./dagger/gotest
@@ -202,7 +206,7 @@ tasks:
       - start-build-network
     vars:
       # renovate: datasource=github-tags depName=dagger/dagger versioning=semver
-      DAGGER_VERSION: 0.19.8
+      DAGGER_VERSION: 0.19.10
       DAGGER_ENGINE_IMAGE: registry.dagger.io/engine:v{{ .DAGGER_VERSION }}
     cmds:
       - >
@@ -302,7 +306,7 @@ tasks:
       - start-kind-cluster
     vars:
       # renovate: datasource=docker depName=golang versioning=semver
-      GOLANG_IMAGE_VERSION: 1.25.5
+      GOLANG_IMAGE_VERSION: 1.25.6
       KUBECONFIG_PATH:
         sh: mktemp -t kubeconfig-XXXXX
     env:
@@ -321,7 +325,7 @@ tasks:
      - build-images
     vars:
       # renovate: datasource=docker depName=golang versioning=semver
-      GOLANG_IMAGE_VERSION: 1.25.5
+      GOLANG_IMAGE_VERSION: 1.25.6
     env:
       _EXPERIMENTAL_DAGGER_RUNNER_HOST: docker-container://{{ .DAGGER_ENGINE_CONTAINER_NAME }}
     cmds:
@@ -377,34 +381,6 @@ tasks:
         build --dir . --file containers/Dockerfile.sidecar --platform linux/amd64 --platform linux/arm64
         publish --ref {{.SIDECAR_IMAGE_NAME}} --tags {{.IMAGE_VERSION}}
 
-  publish-barman-base:
-    desc: Build and publish a barman-cloud base container image
-    vars:
-      BARMAN_BASE_IMAGE_NAME: ghcr.io/{{.GITHUB_REPOSITORY}}-base{{if not (hasPrefix "refs/heads/main" .GITHUB_REF)}}-testing{{end}}
-      BARMAN_VERSION:
-        sh: grep "^barman" containers/sidecar-requirements.in | sed -E 's/.*==([^ ]+)/\1/'
-      BUILD_DATE:
-        sh: date +"%Y%m%d%H%M"
-    requires:
-      # We expect this to run in a GitHub workflow, so we put a few GitHub-specific vars here
-      # to prevent running this task locally by accident.
-      vars:
-        - CI
-        - GITHUB_REPOSITORY
-        - GITHUB_REF
-        - GITHUB_REF_NAME
-        - REGISTRY_USER
-        - REGISTRY_PASSWORD
-    env:
-      # renovate: datasource=git-refs depName=docker lookupName=https://github.com/purpleclay/daggerverse currentValue=main
-      DAGGER_DOCKER_SHA: ee12c1a4a2630e194ec20c5a9959183e3a78c192
-    cmds:
-      - >
-        dagger call -m github.com/purpleclay/daggerverse/docker@${DAGGER_DOCKER_SHA}
-        --registry ghcr.io --username $REGISTRY_USER --password env:REGISTRY_PASSWORD
-        build --dir . --file containers/Dockerfile.barmanbase --platform linux/amd64 --platform linux/arm64
-        publish --ref {{.BARMAN_BASE_IMAGE_NAME}} --tags "{{.BARMAN_VERSION}}-{{.BUILD_DATE}}"
-
   controller-gen:
     desc: Run controller-gen
     run: once
@@ -482,7 +458,7 @@ tasks:
       IMAGE_VERSION: '{{regexReplaceAll "(\\d+)/merge" .GITHUB_REF_NAME "pr-${1}"}}'
     env:
       # renovate: datasource=git-refs depName=kustomize lookupName=https://github.com/sagikazarmark/daggerverse currentValue=main
-      DAGGER_KUSTOMIZE_SHA: 6133ad18e131b891d4723b8e25d69f5de077b472
+      DAGGER_KUSTOMIZE_SHA: 5dcc7e4c4cd5ed230046955f42e27f2166545155
     cmds:
       - >
         dagger -s call -m https://github.com/sagikazarmark/daggerverse/kustomize@${DAGGER_KUSTOMIZE_SHA}
@@ -512,7 +488,7 @@ tasks:
       - GITHUB_TOKEN
     env:
       # renovate: datasource=git-refs depName=gh lookupName=https://github.com/sagikazarmark/daggerverse
-      DAGGER_GH_SHA: 6133ad18e131b891d4723b8e25d69f5de077b472
+      DAGGER_GH_SHA: 5dcc7e4c4cd5ed230046955f42e27f2166545155
     preconditions:
       - sh: "[[ {{.GITHUB_REF}} =~ 'refs/tags/v.*' ]]"
         msg: not a tag, failing
```
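The removed `publish-barman-base` task derived its image tag from the barman version pinned in `containers/sidecar-requirements.in`, using the `grep`/`sed` pipeline shown above. The extraction can be exercised standalone; the temporary file below is illustrative:

```shell
# Reproduce the BARMAN_VERSION extraction used by the removed publish-barman-base task.
req="$(mktemp)"
printf 'barman[azure,cloud,google,snappy,zstandard,lz4]==3.17.0\nsetuptools==80.10.2\n' > "$req"
grep "^barman" "$req" | sed -E 's/.*==([^ ]+)/\1/'   # prints: 3.17.0
rm -f "$req"
```

The greedy `.*==` in the sed expression anchors on the last `==`, so the extras list in `barman[...]` never leaks into the captured version.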
```diff
@@ -108,6 +108,11 @@ spec:
                   - key
                   - name
                   type: object
+                useDefaultAzureCredentials:
+                  description: |-
+                    Use the default Azure authentication flow, which includes DefaultAzureCredential.
+                    This allows authentication using environment variables and managed identities.
+                  type: boolean
               type: object
             data:
               description: |-
```
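The new `useDefaultAzureCredentials` boolean slots into the Azure credentials block of the plugin's object-store configuration. A minimal sketch of how it might be used, assuming the plugin's `barmancloud.cnpg.io/v1` `ObjectStore` kind; the resource name and destination path are placeholders:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: azure-store                # placeholder name
spec:
  configuration:
    # placeholder container URL
    destinationPath: https://myaccount.blob.core.windows.net/backups
    azureCredentials:
      # New in 0.11.0: authenticate via DefaultAzureCredential
      # (environment variables, workload identity, managed identity)
      # instead of an explicit connection string or SAS token.
      useDefaultAzureCredentials: true
```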
@ -1,7 +0,0 @@
|
||||
FROM python:3.13-slim-bookworm
|
||||
COPY containers/sidecar-requirements.txt .
|
||||
RUN apt-get update && \
|
||||
apt-get install -y postgresql-common build-essential && \
|
||||
/usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y && \
|
||||
apt-get install -y libpq-dev && \
|
||||
pip install -r sidecar-requirements.txt
|
||||
```diff
@@ -1,5 +1,5 @@
 # Build the manager binary
-FROM --platform=$BUILDPLATFORM golang:1.25.5 AS gobuilder
+FROM --platform=$BUILDPLATFORM golang:1.25.6 AS gobuilder
 ARG TARGETOS
 ARG TARGETARCH
 
```
```diff
@@ -5,12 +5,12 @@
 # Both components are built before going into a distroless container
 
 # Build the manager binary
-FROM --platform=$BUILDPLATFORM golang:1.25.5 AS gobuilder
+FROM --platform=$BUILDPLATFORM golang:1.25.6 AS gobuilder
 ARG TARGETOS
 ARG TARGETARCH
 
 WORKDIR /workspace
 # Copy the Go Modules manifests
 COPY ../go.mod go.mod
 COPY ../go.sum go.sum
 # cache deps before building and copying source so that we don't need to re-download as much
@@ -20,35 +20,73 @@ RUN go mod download
 ENV GOCACHE=/root/.cache/go-build
 ENV GOMODCACHE=/go/pkg/mod
 
 # Copy the go source
 COPY ../cmd/manager/main.go cmd/manager/main.go
 COPY ../api/ api/
 COPY ../internal/ internal/
 
-# Build
-# the GOARCH has not a default value to allow the binary be built according to the host where the command
-# was called. For example, if we call make docker-build in a local env which has the Apple Silicon M1 SO
-# the docker BUILDPLATFORM arg will be linux/arm64 when for Apple x86 it will be linux/amd64. Therefore,
-# by leaving it empty we can ensure that the container and binary shipped on it will have the same platform.
+# Build Go binary for target platform (TARGETOS/TARGETARCH)
+# Docker BuildKit sets these based on --platform flag or defaults to the build host platform
 RUN --mount=type=cache,target=/go/pkg/mod --mount=type=cache,target=/root/.cache/go-build \
     CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/manager/main.go
 
-# Use plugin-barman-cloud-base to get the dependencies.
-# pip will build everything inside /usr, so we copy every file into a new
-# destination that will then be copied into the distroless container
-FROM ghcr.io/cloudnative-pg/plugin-barman-cloud-base:3.16.2-202512221525 AS pythonbuilder
-# Prepare a new /usr/ directory with the files we'll need in the final image
-RUN mkdir /new-usr/ && \
-    cp -r --parents /usr/local/lib/ /usr/lib/*-linux-gnu/ /usr/local/bin/ \
-    /new-usr/
+# Build Python virtualenv with all dependencies
+FROM debian:trixie-slim AS pythonbuilder
+WORKDIR /build
 
-# Joint process
-# Now we put everything that was build from the origin into our
-# distroless container
-FROM gcr.io/distroless/python3-debian12:nonroot
+# Install postgresql-common and setup pgdg repository first
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends postgresql-common && \
+    /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
+
+# Install build dependencies
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends \
+        python3 \
+        python3-venv \
+        python3-dev \
+        build-essential \
+        libpq-dev \
+        liblz4-dev \
+        libsnappy-dev
+
+COPY containers/sidecar-requirements.txt .
+
+# Create virtualenv and install dependencies
+RUN python3 -m venv /venv && \
+    /venv/bin/pip install --upgrade pip setuptools wheel && \
+    /venv/bin/pip install --no-cache-dir -r sidecar-requirements.txt
+
+# Download and extract runtime library packages and their dependencies
+# Using apt-cache to automatically resolve dependencies, filtering out packages
+# already present in the distroless base image.
+# Distroless package list from: https://github.com/GoogleContainerTools/distroless/blob/main/base/config.bzl
+# and https://github.com/GoogleContainerTools/distroless/blob/main/python3/config.bzl
+RUN mkdir -p /dependencies /build/downloads && \
+    cd /build/downloads && \
+    DISTROLESS_PACKAGES="libc6 libssl3t64 libzstd1 zlib1g libgcc-s1 libstdc++6 \
+        libbz2-1.0 libdb5.3t64 libexpat1 liblzma5 libsqlite3-0 libuuid1 \
+        libncursesw6 libtinfo6 libcom-err2 libcrypt1 libgssapi-krb5-2 \
+        libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libnsl2 \
+        libreadline8t64 libtirpc3t64 libffi8 libpython3.13-minimal \
+        libpython3.13-stdlib python3.13-minimal python3.13-venv" && \
+    apt-cache depends --recurse --no-recommends --no-suggests \
+        --no-conflicts --no-breaks --no-replaces --no-enhances \
+        $DISTROLESS_PACKAGES 2>/dev/null | grep "^\w" | sort -u > /tmp/distroless.txt && \
+    apt-cache depends --recurse --no-recommends --no-suggests \
+        --no-conflicts --no-breaks --no-replaces --no-enhances \
+        libpq5 liblz4-1 libsnappy1v5 2>/dev/null | grep "^\w" | sort -u | \
+        grep -v -F -x -f /tmp/distroless.txt > /tmp/packages.txt && \
+    apt-get download $(cat /tmp/packages.txt) && \
+    for deb in *.deb; do \
+        dpkg -x "$deb" /dependencies; \
+    done
+
+# Final sidecar image using distroless base for minimal size and fewer packages
+FROM gcr.io/distroless/python3-debian13:nonroot
 
 ENV SUMMARY="CloudNativePG Barman plugin" \
-    DESCRIPTION="Container image that provides the barman-cloud sidecar"
+    DESCRIPTION="Container image that provides the barman-cloud sidecar" \
+    PATH="/venv/bin:$PATH"
 
 LABEL summary="$SUMMARY" \
     description="$DESCRIPTION" \
@@ -60,7 +98,13 @@ LABEL summary="$SUMMARY" \
     version="" \
     release="1"
 
-COPY --from=pythonbuilder /new-usr/* /usr/
+COPY --from=pythonbuilder /venv /venv
+COPY --from=pythonbuilder /dependencies/usr/lib /usr/lib
 COPY --from=gobuilder /workspace/manager /manager
 
+# Compile all Python bytecode as root to avoid runtime compilation
+USER 0:0
+RUN ["/venv/bin/python3", "-c", "import sysconfig, compileall; compileall.compile_dir(sysconfig.get_path('stdlib'), quiet=1); compileall.compile_dir('/venv', quiet=1)"]
+
 USER 26:26
 ENTRYPOINT ["/manager"]
```
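The final `RUN` above precompiles the stdlib and the virtualenv so the read-only distroless filesystem never needs to write `.pyc` files at runtime. `compileall` behaves the same way outside the image; a small sketch on a throwaway module tree (the temporary directory is illustrative):

```python
import compileall
import pathlib
import tempfile

# Mimic the Dockerfile's bytecode precompilation on a tiny module tree.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "mod.py").write_text("ANSWER = 42\n")

ok = compileall.compile_dir(tmp, quiet=1)  # returns 1 when everything compiled
pyc_files = list(pathlib.Path(tmp).glob("__pycache__/*.pyc"))
print(ok, len(pyc_files))  # → 1 1
```

After this step the interpreter loads the cached `.pyc` directly, which is why the Dockerfile runs it as root (`USER 0:0`) before dropping back to the unprivileged `26:26` user.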
```diff
@@ -1,3 +1,3 @@
-barman[azure,cloud,google,snappy,zstandard,lz4]==3.16.2
-setuptools==80.9.0
+barman[azure,cloud,google,snappy,zstandard,lz4]==3.17.0
+setuptools==80.10.2
 zipp>=3.19.1 # not directly required, pinned by Snyk to avoid a vulnerability
```
@ -4,9 +4,9 @@
|
||||
#
|
||||
# pip-compile --allow-unsafe --generate-hashes --output-file=sidecar-requirements.txt --strip-extras sidecar-requirements.in
|
||||
#
|
||||
azure-core==1.37.0 \
|
||||
--hash=sha256:7064f2c11e4b97f340e8e8c6d923b822978be3016e46b7bc4aa4b337cfb48aee \
|
||||
--hash=sha256:b3abe2c59e7d6bb18b38c275a5029ff80f98990e7c90a5e646249a56630fcc19
|
||||
azure-core==1.38.0 \
|
||||
--hash=sha256:8194d2682245a3e4e3151a667c686464c3786fed7918b394d035bdcd61bb5993 \
|
||||
--hash=sha256:ab0c9b2cd71fecb1842d52c965c95285d3cfb38902f6766e4a471f1cd8905335
|
||||
# via
|
||||
# azure-identity
|
||||
# azure-storage-blob
|
||||
@ -14,31 +14,27 @@ azure-identity==1.25.1 \
|
||||
--hash=sha256:87ca8328883de6036443e1c37b40e8dc8fb74898240f61071e09d2e369361456 \
|
||||
--hash=sha256:e9edd720af03dff020223cd269fa3a61e8f345ea75443858273bcb44844ab651
|
||||
# via barman
|
||||
azure-storage-blob==12.27.1 \
|
||||
--hash=sha256:65d1e25a4628b7b6acd20ff7902d8da5b4fde8e46e19c8f6d213a3abc3ece272 \
|
||||
--hash=sha256:a1596cc4daf5dac9be115fcb5db67245eae894cf40e4248243754261f7b674a6
|
||||
azure-storage-blob==12.28.0 \
|
||||
--hash=sha256:00fb1db28bf6a7b7ecaa48e3b1d5c83bfadacc5a678b77826081304bd87d6461 \
|
||||
--hash=sha256:e7d98ea108258d29aa0efbfd591b2e2075fa1722a2fae8699f0b3c9de11eff41
|
||||
# via barman
|
||||
barman==3.16.2 \
|
||||
--hash=sha256:0549f451a1b928647c75c5a2977526233ad7a976bb83e9a4379c33ce61443515 \
|
||||
--hash=sha256:ab0c6f4f5cfc0cc12b087335bdd5def2edbca32bc1bf553cc5a9e78cd83df43a
|
||||
barman==3.17.0 \
|
||||
--hash=sha256:07b033da14e72f103de44261c31bd0c3169bbb2e4de3481c6bb3510e9870d38e \
|
||||
--hash=sha256:d6618990a6dbb31af3286d746a278a038534b7e3cc617c2b379ef7ebdeb7ed5a
|
||||
# via -r sidecar-requirements.in
|
||||
boto3==1.42.14 \
|
||||
--hash=sha256:a5d005667b480c844ed3f814a59f199ce249d0f5669532a17d06200c0a93119c \
|
||||
--hash=sha256:bfcc665227bb4432a235cb4adb47719438d6472e5ccbf7f09512046c3f749670
|
||||
boto3==1.42.39 \
|
||||
--hash=sha256:d03f82363314759eff7f84a27b9e6428125f89d8119e4588e8c2c1d79892c956 \
|
||||
--hash=sha256:d9d6ce11df309707b490d2f5f785b761cfddfd6d1f665385b78c9d8ed097184b
|
||||
# via barman
|
||||
botocore==1.42.14 \
|
||||
--hash=sha256:cf5bebb580803c6cfd9886902ca24834b42ecaa808da14fb8cd35ad523c9f621 \
|
||||
--hash=sha256:efe89adfafa00101390ec2c371d453b3359d5f9690261bc3bd70131e0d453e8e
|
||||
botocore==1.42.39 \
|
||||
--hash=sha256:0f00355050821e91a5fe6d932f7bf220f337249b752899e3e4cf6ed54326249e \
|
||||
--hash=sha256:9e0d0fed9226449cc26fcf2bbffc0392ac698dd8378e8395ce54f3ec13f81d58
|
||||
# via
|
||||
# boto3
|
||||
# s3transfer
|
||||
cachetools==6.2.4 \
|
||||
--hash=sha256:69a7a52634fed8b8bf6e24a050fb60bff1c9bd8f6d24572b99c32d4e71e62a51 \
|
||||
--hash=sha256:82c5c05585e70b6ba2d3ae09ea60b79548872185d2f24ae1f2709d37299fd607
|
||||
# via google-auth
|
||||
certifi==2025.11.12 \
|
||||
--hash=sha256:97de8790030bbd5c2d96b7ec782fc2f7820ef8dba6db909ccf95449f2d062d4b \
|
||||
--hash=sha256:d8ab5478f2ecd78af242878415affce761ca6bc54a22a27e026d7c25357c3316
|
||||
certifi==2026.1.4 \
|
||||
--hash=sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c \
|
||||
--hash=sha256:ac726dd470482006e014ad384921ed6438c457018f4b3d204aea4281258b2120
|
||||
# via requests
|
||||
cffi==2.0.0 \
|
||||
--hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
|
||||
@ -378,75 +374,71 @@ cramjam==2.11.0 \
|
||||
# via
|
||||
# barman
|
||||
# python-snappy
|
||||
cryptography==46.0.3 \
|
||||
--hash=sha256:00a5e7e87938e5ff9ff5447ab086a5706a957137e6e433841e9d24f38a065217 \
|
||||
--hash=sha256:01ca9ff2885f3acc98c29f1860552e37f6d7c7d013d7334ff2a9de43a449315d \
|
||||
--hash=sha256:09859af8466b69bc3c27bdf4f5d84a665e0f7ab5088412e9e2ec49758eca5cbc \
|
||||
--hash=sha256:0abf1ffd6e57c67e92af68330d05760b7b7efb243aab8377e583284dbab72c71 \
|
||||
--hash=sha256:1000713389b75c449a6e979ffc7dcc8ac90b437048766cef052d4d30b8220971 \
|
||||
--hash=sha256:109d4ddfadf17e8e7779c39f9b18111a09efb969a301a31e987416a0191ed93a \
|
||||
--hash=sha256:10b01676fc208c3e6feeb25a8b83d81767e8059e1fe86e1dc62d10a3018fa926 \
|
||||
--hash=sha256:10ca84c4668d066a9878890047f03546f3ae0a6b8b39b697457b7757aaf18dbc \
|
||||
--hash=sha256:15ab9b093e8f09daab0f2159bb7e47532596075139dd74365da52ecc9cb46c5d \
|
||||
--hash=sha256:191bb60a7be5e6f54e30ba16fdfae78ad3a342a0599eb4193ba88e3f3d6e185b \
|
||||
--hash=sha256:22d7e97932f511d6b0b04f2bfd818d73dcd5928db509460aaf48384778eb6d20 \
|
||||
--hash=sha256:23b1a8f26e43f47ceb6d6a43115f33a5a37d57df4ea0ca295b780ae8546e8044 \
|
||||
--hash=sha256:36e627112085bb3b81b19fed209c05ce2a52ee8b15d161b7c643a7d5a88491f3 \
|
||||
--hash=sha256:39b6755623145ad5eff1dab323f4eae2a32a77a7abef2c5089a04a3d04366715 \
|
||||
--hash=sha256:3b51b8ca4f1c6453d8829e1eb7299499ca7f313900dd4d89a24b8b87c0a780d4 \
|
||||
--hash=sha256:402b58fc32614f00980b66d6e56a5b4118e6cb362ae8f3fda141ba4689bd4506 \
|
||||
--hash=sha256:416260257577718c05135c55958b674000baef9a1c7d9e8f306ec60d71db850f \
|
||||
--hash=sha256:46acf53b40ea38f9c6c229599a4a13f0d46a6c3fa9ef19fc1a124d62e338dfa0 \
|
||||
--hash=sha256:4b7387121ac7d15e550f5cb4a43aef2559ed759c35df7336c402bb8275ac9683 \
|
||||
--hash=sha256:50fc3343ac490c6b08c0cf0d704e881d0d660be923fd3076db3e932007e726e3 \
|
||||
--hash=sha256:516ea134e703e9fe26bcd1277a4b59ad30586ea90c365a87781d7887a646fe21 \
|
||||
--hash=sha256:549e234ff32571b1f4076ac269fcce7a808d3bf98b76c8dd560e42dbc66d7d91 \
|
||||
--hash=sha256:5d7f93296ee28f68447397bf5198428c9aeeab45705a55d53a6343455dcb2c3c \
|
||||
--hash=sha256:5ecfccd2329e37e9b7112a888e76d9feca2347f12f37918facbb893d7bb88ee8 \
|
||||
--hash=sha256:6276eb85ef938dc035d59b87c8a7dc559a232f954962520137529d77b18ff1df \
|
||||
--hash=sha256:6b5063083824e5509fdba180721d55909ffacccc8adbec85268b48439423d78c \
|
||||
--hash=sha256:6eae65d4c3d33da080cff9c4ab1f711b15c1d9760809dad6ea763f3812d254cb \
|
||||
--hash=sha256:6f61efb26e76c45c4a227835ddeae96d83624fb0d29eb5df5b96e14ed1a0afb7 \
|
||||
--hash=sha256:71e842ec9bc7abf543b47cf86b9a743baa95f4677d22baa4c7d5c69e49e9bc04 \
|
||||
--hash=sha256:760f83faa07f8b64e9c33fc963d790a2edb24efb479e3520c14a45741cd9b2db \
|
||||
--hash=sha256:78a97cf6a8839a48c49271cdcbd5cf37ca2c1d6b7fdd86cc864f302b5e9bf459 \
|
||||
--hash=sha256:7ce938a99998ed3c8aa7e7272dca1a610401ede816d36d0693907d863b10d9ea \
|
||||
--hash=sha256:8a6e050cb6164d3f830453754094c086ff2d0b2f3a897a1d9820f6139a1f0914 \
|
||||
--hash=sha256:9394673a9f4de09e28b5356e7fff97d778f8abad85c9d5ac4a4b7e25a0de7717 \
|
||||
--hash=sha256:94cd0549accc38d1494e1f8de71eca837d0509d0d44bf11d158524b0e12cebf9 \
|
||||
--hash=sha256:a04bee9ab6a4da801eb9b51f1b708a1b5b5c9eb48c03f74198464c66f0d344ac \
|
||||
--hash=sha256:a23582810fedb8c0bc47524558fb6c56aac3fc252cb306072fd2815da2a47c32 \
|
||||
--hash=sha256:a2c0cd47381a3229c403062f764160d57d4d175e022c1df84e168c6251a22eec \
|
||||
--hash=sha256:a8b17438104fed022ce745b362294d9ce35b4c2e45c1d958ad4a4b019285f4a1 \
|
||||
--hash=sha256:a9a3008438615669153eb86b26b61e09993921ebdd75385ddd748702c5adfddb \
|
||||
--hash=sha256:b02cf04496f6576afffef5ddd04a0cb7d49cf6be16a9059d793a30b035f6b6ac \
|
||||
--hash=sha256:b419ae593c86b87014b9be7396b385491ad7f320bde96826d0dd174459e54665 \
|
||||
--hash=sha256:c0a7bb1a68a5d3471880e264621346c48665b3bf1c3759d682fc0864c540bd9e \
|
||||
--hash=sha256:c70cc23f12726be8f8bc72e41d5065d77e4515efae3690326764ea1b07845cfb \
|
||||
--hash=sha256:c8daeb2d2174beb4575b77482320303f3d39b8e81153da4f0fb08eb5fe86a6c5 \
|
||||
--hash=sha256:cb3d760a6117f621261d662bccc8ef5bc32ca673e037c83fbe565324f5c46936 \
|
||||
--hash=sha256:d55f3dffadd674514ad19451161118fd010988540cee43d8bc20675e775925de \
|
||||
--hash=sha256:d89c3468de4cdc4f08a57e214384d0471911a3830fcdaf7a8cc587e42a866372 \
|
||||
--hash=sha256:db391fa7c66df6762ee3f00c95a89e6d428f4d60e7abc8328f4fe155b5ac6e54 \
|
||||
--hash=sha256:dfb781ff7eaa91a6f7fd41776ec37c5853c795d3b358d4896fdbb5df168af422 \
|
||||
--hash=sha256:e5bf0ed4490068a2e72ac03d786693adeb909981cc596425d09032d372bcc849 \
|
||||
--hash=sha256:e7aec276d68421f9574040c26e2a7c3771060bc0cff408bae1dcb19d3ab1e63c \
|
||||
--hash=sha256:ef639cb3372f69ec44915fafcd6698b6cc78fbe0c2ea41be867f6ed612811963 \
|
||||
--hash=sha256:f260d0d41e9b4da1ed1e0f1ce571f97fe370b152ab18778e9e8f67d6af432018
|
||||
cryptography==46.0.4 \
|
||||
--hash=sha256:01df4f50f314fbe7009f54046e908d1754f19d0c6d3070df1e6268c5a4af09fa \
|
||||
--hash=sha256:0563655cb3c6d05fb2afe693340bc050c30f9f34e15763361cf08e94749401fc \
|
||||
--hash=sha256:078e5f06bd2fa5aea5a324f2a09f914b1484f1d0c2a4d6a8a28c74e72f65f2da \
|
||||
--hash=sha256:0a9ad24359fee86f131836a9ac3bffc9329e956624a2d379b613f8f8abaf5255 \
|
||||
--hash=sha256:2067461c80271f422ee7bdbe79b9b4be54a5162e90345f86a23445a0cf3fd8a2 \
|
||||
--hash=sha256:281526e865ed4166009e235afadf3a4c4cba6056f99336a99efba65336fd5485 \
|
||||
--hash=sha256:2d08bc22efd73e8854b0b7caff402d735b354862f1145d7be3b9c0f740fef6a0 \
|
||||
--hash=sha256:3c268a3490df22270955966ba236d6bc4a8f9b6e4ffddb78aac535f1a5ea471d \
|
||||
--hash=sha256:3d425eacbc9aceafd2cb429e42f4e5d5633c6f873f5e567077043ef1b9bbf616 \
|
||||
--hash=sha256:44cc0675b27cadb71bdbb96099cca1fa051cd11d2ade09e5cd3a2edb929ed947 \
|
||||
--hash=sha256:47bcd19517e6389132f76e2d5303ded6cf3f78903da2158a671be8de024f4cd0 \
|
||||
--hash=sha256:485e2b65d25ec0d901bca7bcae0f53b00133bf3173916d8e421f6fddde103908 \
|
||||
--hash=sha256:5aa3e463596b0087b3da0dbe2b2487e9fc261d25da85754e30e3b40637d61f81 \
|
||||
--hash=sha256:5f14fba5bf6f4390d7ff8f086c566454bff0411f6d8aa7af79c88b6f9267aecc \
|
||||
--hash=sha256:62217ba44bf81b30abaeda1488686a04a702a261e26f87db51ff61d9d3510abd \
|
||||
--hash=sha256:6225d3ebe26a55dbc8ead5ad1265c0403552a63336499564675b29eb3184c09b \
|
||||
--hash=sha256:6bb5157bf6a350e5b28aee23beb2d84ae6f5be390b2f8ee7ea179cda077e1019 \
|
||||
--hash=sha256:728fedc529efc1439eb6107b677f7f7558adab4553ef8669f0d02d42d7b959a7 \
|
||||
--hash=sha256:766330cce7416c92b5e90c3bb71b1b79521760cdcfc3a6a1a182d4c9fab23d2b \
|
||||
--hash=sha256:812815182f6a0c1d49a37893a303b44eaac827d7f0d582cecfc81b6427f22973 \
|
||||
--hash=sha256:829c2b12bbc5428ab02d6b7f7e9bbfd53e33efd6672d21341f2177470171ad8b \
|
||||
--hash=sha256:82a62483daf20b8134f6e92898da70d04d0ef9a75829d732ea1018678185f4f5 \
|
||||
--hash=sha256:8a15fb869670efa8f83cbffbc8753c1abf236883225aed74cd179b720ac9ec80 \
|
||||
--hash=sha256:8bf75b0259e87fa70bddc0b8b4078b76e7fd512fd9afae6c1193bcf440a4dbef \
|
||||
--hash=sha256:91627ebf691d1ea3976a031b61fb7bac1ccd745afa03602275dda443e11c8de0 \
|
||||
--hash=sha256:93d8291da8d71024379ab2cb0b5c57915300155ad42e07f76bea6ad838d7e59b \
|
||||
--hash=sha256:9b34d8ba84454641a6bf4d6762d15847ecbd85c1316c0a7984e6e4e9f748ec2e \
|
||||
    --hash=sha256:9b4d17bc7bd7cdd98e3af40b441feaea4c68225e2eb2341026c84511ad246c0c \
    --hash=sha256:9c2da296c8d3415b93e6053f5a728649a87a48ce084a9aaf51d6e46c87c7f2d2 \
    --hash=sha256:a05177ff6296644ef2876fce50518dffb5bcdf903c85250974fc8bc85d54c0af \
    --hash=sha256:a90e43e3ef65e6dcf969dfe3bb40cbf5aef0d523dff95bfa24256be172a845f4 \
    --hash=sha256:a9556ba711f7c23f77b151d5798f3ac44a13455cc68db7697a1096e6d0563cab \
    --hash=sha256:b1de0ebf7587f28f9190b9cb526e901bf448c9e6a99655d2b07fff60e8212a82 \
    --hash=sha256:be8c01a7d5a55f9a47d1888162b76c8f49d62b234d88f0ff91a9fbebe32ffbc3 \
    --hash=sha256:bfd019f60f8abc2ed1b9be4ddc21cfef059c841d86d710bb69909a688cbb8f59 \
    --hash=sha256:c236a44acfb610e70f6b3e1c3ca20ff24459659231ef2f8c48e879e2d32b73da \
    --hash=sha256:c411f16275b0dea722d76544a61d6421e2cc829ad76eec79280dbdc9ddf50061 \
    --hash=sha256:c92010b58a51196a5f41c3795190203ac52edfd5dc3ff99149b4659eba9d2085 \
    --hash=sha256:d5a45ddc256f492ce42a4e35879c5e5528c09cd9ad12420828c972951d8e016b \
    --hash=sha256:daa392191f626d50f1b136c9b4cf08af69ca8279d110ea24f5c2700054d2e263 \
    --hash=sha256:dc1272e25ef673efe72f2096e92ae39dea1a1a450dd44918b15351f72c5a168e \
    --hash=sha256:dce1e4f068f03008da7fa51cc7abc6ddc5e5de3e3d1550334eaf8393982a5829 \
    --hash=sha256:dd5aba870a2c40f87a3af043e0dee7d9eb02d4aff88a797b48f2b43eff8c3ab4 \
    --hash=sha256:de0f5f4ec8711ebc555f54735d4c673fc34b65c44283895f1a08c2b49d2fd99c \
    --hash=sha256:df4a817fa7138dd0c96c8c8c20f04b8aaa1fac3bbf610913dcad8ea82e1bfd3f \
    --hash=sha256:e07ea39c5b048e085f15923511d8121e4a9dc45cee4e3b970ca4f0d338f23095 \
    --hash=sha256:eeeb2e33d8dbcccc34d64651f00a98cb41b2dc69cef866771a5717e6734dfa32 \
    --hash=sha256:fa0900b9ef9c49728887d1576fd8d9e7e3ea872fa9b25ef9b64888adc434e976 \
    --hash=sha256:fdc3daab53b212472f1524d070735b2f0c214239df131903bae1d598016fa822
    # via
    #   azure-identity
    #   azure-storage-blob
    #   google-auth
    #   msal
    #   pyjwt
google-api-core==2.28.1 \
    --hash=sha256:2b405df02d68e68ce0fbc138559e6036559e685159d148ae5861013dc201baf8 \
    --hash=sha256:4021b0f8ceb77a6fb4de6fde4502cecab45062e66ff4f2895169e0b35bc9466c
google-api-core==2.29.0 \
    --hash=sha256:84181be0f8e6b04006df75ddfe728f24489f0af57c96a529ff7cf45bc28797f7 \
    --hash=sha256:d30bc60980daa36e314b5d5a3e5958b0200cb44ca8fa1be2b614e932b75a3ea9
    # via
    #   google-cloud-core
    #   google-cloud-storage
google-auth==2.45.0 \
    --hash=sha256:82344e86dc00410ef5382d99be677c6043d72e502b625aa4f4afa0bdacca0f36 \
    --hash=sha256:90d3f41b6b72ea72dd9811e765699ee491ab24139f34ebf1ca2b9cc0c38708f3
google-auth==2.48.0 \
    --hash=sha256:2e2a537873d449434252a9632c28bfc268b0adb1e53f9fb62afc5333a975903f \
    --hash=sha256:4f7e706b0cd3208a3d940a19a822c37a476ddba5450156c3e6624a71f7c841ce
    # via
    #   google-api-core
    #   google-cloud-core
@@ -455,9 +447,9 @@ google-cloud-core==2.5.0 \
    --hash=sha256:67d977b41ae6c7211ee830c7912e41003ea8194bff15ae7d72fd6f51e57acabc \
    --hash=sha256:7c1b7ef5c92311717bd05301aa1a91ffbc565673d3b0b4163a52d8413a186963
    # via google-cloud-storage
google-cloud-storage==3.7.0 \
    --hash=sha256:469bc9540936e02f8a4bfd1619e9dca1e42dec48f95e4204d783b36476a15093 \
    --hash=sha256:9ce59c65f4d6e372effcecc0456680a8d73cef4f2dc9212a0704799cb3d69237
google-cloud-storage==3.8.0 \
    --hash=sha256:78cfeae7cac2ca9441d0d0271c2eb4ebfa21aa4c6944dd0ccac0389e81d955a7 \
    --hash=sha256:cc67952dce84ebc9d44970e24647a58260630b7b64d72360cedaf422d6727f28
    # via barman
google-crc32c==1.8.0 \
    --hash=sha256:014a7e68d623e9a4222d663931febc3033c5c7c9730785727de2a81f87d5bab8 \
@@ -512,9 +504,9 @@ isodate==0.7.2 \
    --hash=sha256:28009937d8031054830160fce6d409ed342816b543597cece116d966c6d99e15 \
    --hash=sha256:4cd1aa0f43ca76f4a6c6c0292a85f40b35ec2e43e315b59f06e6d32171a953e6
    # via azure-storage-blob
jmespath==1.0.1 \
    --hash=sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980 \
    --hash=sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe
jmespath==1.1.0 \
    --hash=sha256:472c87d80f36026ae83c6ddd0f1d05d4e510134ed462851fd5f754c8c3cbb88d \
    --hash=sha256:a5663118de4908c91729bea0acadca56526eb2698e83de10cd116ae0f4e97c64
    # via
    #   boto3
    #   botocore
@@ -591,17 +583,17 @@ proto-plus==1.27.0 \
    --hash=sha256:1baa7f81cf0f8acb8bc1f6d085008ba4171eaf669629d1b6d1673b21ed1c0a82 \
    --hash=sha256:873af56dd0d7e91836aee871e5799e1c6f1bda86ac9a983e0bb9f0c266a568c4
    # via google-api-core
protobuf==6.33.2 \
    --hash=sha256:1f8017c48c07ec5859106533b682260ba3d7c5567b1ca1f24297ce03384d1b4f \
    --hash=sha256:2981c58f582f44b6b13173e12bb8656711189c2a70250845f264b877f00b1913 \
    --hash=sha256:56dc370c91fbb8ac85bc13582c9e373569668a290aa2e66a590c2a0d35ddb9e4 \
    --hash=sha256:7109dcc38a680d033ffb8bf896727423528db9163be1b6a02d6a49606dcadbfe \
    --hash=sha256:7636aad9bb01768870266de5dc009de2d1b936771b38a793f73cbbf279c91c5c \
    --hash=sha256:87eb388bd2d0f78febd8f4c8779c79247b26a5befad525008e49a6955787ff3d \
    --hash=sha256:8cd7640aee0b7828b6d03ae518b5b4806fdfc1afe8de82f79c3454f8aef29872 \
    --hash=sha256:b5d3b5625192214066d99b2b605f5783483575656784de223f00a8d00754fc0e \
    --hash=sha256:d9b19771ca75935b3a4422957bc518b0cecb978b31d1dd12037b088f6bcc0e43 \
    --hash=sha256:fc2a0e8b05b180e5fc0dd1559fe8ebdae21a27e81ac77728fb6c42b12c7419b4
protobuf==6.33.5 \
    --hash=sha256:3093804752167bcab3998bec9f1048baae6e29505adaf1afd14a37bddede533c \
    --hash=sha256:69915a973dd0f60f31a08b8318b73eab2bd6a392c79184b3612226b0a3f8ec02 \
    --hash=sha256:6ddcac2a081f8b7b9642c09406bc6a4290128fce5f471cddd165960bb9119e5c \
    --hash=sha256:8afa18e1d6d20af15b417e728e9f60f3aa108ee76f23c3b2c07a2c3b546d3afd \
    --hash=sha256:8f04fa32763dcdb4973d537d6b54e615cc61108c7cb38fe59310c3192d29510a \
    --hash=sha256:9b71e0281f36f179d00cbcb119cb19dec4d14a81393e5ea220f64b286173e190 \
    --hash=sha256:a3157e62729aafb8df6da2c03aa5c0937c7266c626ce11a278b6eb7963c4e37c \
    --hash=sha256:a5cb85982d95d906df1e2210e58f8e4f1e3cdc088e52c921a041f9c9a0386de5 \
    --hash=sha256:cbf16ba3350fb7b889fca858fb215967792dc125b35c7976ca4818bee3521cf0 \
    --hash=sha256:d71b040839446bac0f4d162e758bea99c8251161dae9d0983a3b88dee345153b
    # via
    #   google-api-core
    #   googleapis-common-protos
@@ -615,9 +607,9 @@ psycopg2==2.9.11 \
    --hash=sha256:e03e4a6dbe87ff81540b434f2e5dc2bddad10296db5eea7bdc995bf5f4162938 \
    --hash=sha256:f10a48acba5fe6e312b891f290b4d2ca595fc9a06850fe53320beac353575578
    # via barman
pyasn1==0.6.1 \
    --hash=sha256:0d632f46f2ba09143da3a8afe9e33fb6f92fa2320ab7e886e2d0f7672af84629 \
    --hash=sha256:6f580d2bdd84365380830acf45550f2511469f673cb4a5ae3857a3170128b034
pyasn1==0.6.2 \
    --hash=sha256:1eb26d860996a18e9b6ed05e7aae0e9fc21619fcee6af91cca9bad4fbea224bf \
    --hash=sha256:9b59a2b25ba7e4f8197db7686c09fb33e658b98339fadb826e9512629017833b
    # via
    #   pyasn1-modules
    #   rsa
@@ -625,13 +617,13 @@ pyasn1-modules==0.4.2 \
    --hash=sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a \
    --hash=sha256:677091de870a80aae844b1ca6134f54652fa2c8c5a52aa396440ac3106e941e6
    # via google-auth
pycparser==2.23 \
    --hash=sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2 \
    --hash=sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934
pycparser==3.0 \
    --hash=sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29 \
    --hash=sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992
    # via cffi
pyjwt==2.10.1 \
    --hash=sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953 \
    --hash=sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb
pyjwt==2.11.0 \
    --hash=sha256:35f95c1f0fbe5d5ba6e43f00271c275f7a1a4db1dab27bf708073b75318ea623 \
    --hash=sha256:94a6bde30eb5c8e04fee991062b534071fd1439ef58d2adc9ccb823e7bcd0469
    # via
    #   msal
    #   pyjwt
@@ -672,9 +664,9 @@ typing-extensions==4.15.0 \
    #   azure-core
    #   azure-identity
    #   azure-storage-blob
urllib3==2.6.2 \
    --hash=sha256:016f9c98bb7e98085cb2b4b17b87d2c702975664e4f060c6532e64d1c1a5e797 \
    --hash=sha256:ec21cddfe7724fc7cb4ba4bea7aa8e2ef36f607a4bab81aa6ce42a13dc3f03dd
urllib3==2.6.3 \
    --hash=sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed \
    --hash=sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4
    # via
    #   botocore
    #   requests
@@ -785,9 +777,9 @@ zstandard==0.25.0 \
    # via barman

# The following packages are considered to be unsafe in a requirements file:
setuptools==80.9.0 \
    --hash=sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922 \
    --hash=sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c
setuptools==80.10.2 \
    --hash=sha256:8b0e9d10c784bf7d262c4e5ec5d4ec94127ce206e8738f29a437945fbc219b70 \
    --hash=sha256:95b30ddfb717250edb492926c92b5221f7ef3fbcc2b07579bcd4a27da21d0173
    # via
    #   -r sidecar-requirements.in
    #   barman
28
go.mod
@@ -2,18 +2,18 @@ module github.com/cloudnative-pg/plugin-barman-cloud

go 1.25.0

toolchain go1.25.5
toolchain go1.25.6

require (
	github.com/cert-manager/cert-manager v1.19.2
	github.com/cloudnative-pg/api v1.28.0
	github.com/cloudnative-pg/barman-cloud v0.4.0
	github.com/cloudnative-pg/barman-cloud v0.4.1-0.20260108104508-ced266c145f5
	github.com/cloudnative-pg/cloudnative-pg v1.28.0
	github.com/cloudnative-pg/cnpg-i v0.3.1
	github.com/cloudnative-pg/cnpg-i-machinery v0.4.2
	github.com/cloudnative-pg/machinery v0.3.3
	github.com/onsi/ginkgo/v2 v2.27.3
	github.com/onsi/gomega v1.38.3
	github.com/onsi/ginkgo/v2 v2.28.1
	github.com/onsi/gomega v1.39.1
	github.com/spf13/cobra v1.10.2
	github.com/spf13/viper v1.21.0
	google.golang.org/grpc v1.78.0
@@ -22,8 +22,8 @@ require (
	k8s.io/apiextensions-apiserver v0.35.0
	k8s.io/apimachinery v0.35.0
	k8s.io/client-go v0.35.0
	k8s.io/utils v0.0.0-20251222233032-718f0e51e6d2
	sigs.k8s.io/controller-runtime v0.22.4
	k8s.io/utils v0.0.0-20260108192941-914a6e750570
	sigs.k8s.io/controller-runtime v0.23.1
	sigs.k8s.io/kustomize/api v0.21.0
	sigs.k8s.io/kustomize/kyaml v0.21.0
)
@@ -66,7 +66,7 @@ require (
	github.com/google/cel-go v0.26.0 // indirect
	github.com/google/gnostic-models v0.7.1 // indirect
	github.com/google/go-cmp v0.7.0 // indirect
	github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect
	github.com/google/pprof v0.0.0-20260115054156-294ebfa9ad83 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
	github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.3 // indirect
@@ -113,15 +113,15 @@ require (
	go.yaml.in/yaml/v2 v2.4.3 // indirect
	go.yaml.in/yaml/v3 v3.0.4 // indirect
	golang.org/x/exp v0.0.0-20250718183923-645b1fa84792 // indirect
	golang.org/x/mod v0.30.0 // indirect
	golang.org/x/net v0.48.0 // indirect
	golang.org/x/mod v0.32.0 // indirect
	golang.org/x/net v0.49.0 // indirect
	golang.org/x/oauth2 v0.34.0 // indirect
	golang.org/x/sync v0.19.0 // indirect
	golang.org/x/sys v0.39.0 // indirect
	golang.org/x/term v0.38.0 // indirect
	golang.org/x/text v0.32.0 // indirect
	golang.org/x/sys v0.40.0 // indirect
	golang.org/x/term v0.39.0 // indirect
	golang.org/x/text v0.33.0 // indirect
	golang.org/x/time v0.14.0 // indirect
	golang.org/x/tools v0.39.0 // indirect
	golang.org/x/tools v0.41.0 // indirect
	gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect
	google.golang.org/genproto/googleapis/api v0.0.0-20251029180050-ab9386a59fda // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20251029180050-ab9386a59fda // indirect
@@ -136,6 +136,6 @@ require (
	sigs.k8s.io/gateway-api v1.4.0 // indirect
	sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
	sigs.k8s.io/randfill v1.0.0 // indirect
	sigs.k8s.io/structured-merge-diff/v6 v6.3.1 // indirect
	sigs.k8s.io/structured-merge-diff/v6 v6.3.2-0.20260122202528-d9cc6641c482 // indirect
	sigs.k8s.io/yaml v1.6.0 // indirect
)
52
go.sum
@@ -18,8 +18,8 @@ github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UF
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudnative-pg/api v1.28.0 h1:xElzHliO0eKkVQafkfMhDJo0aIRCmB1ItEt+SGh6B58=
github.com/cloudnative-pg/api v1.28.0/go.mod h1:puXJBOsEaJd8JLgvCtxgl2TO/ZANap/z7bPepKRUgrk=
github.com/cloudnative-pg/barman-cloud v0.4.0 h1:V4ajM5yDWq2m+TxmnDtCBGmfMXAxbXr9k7lfR4jM+eE=
github.com/cloudnative-pg/barman-cloud v0.4.0/go.mod h1:AWdyNP2jvMO1c7eOOwT8kT+QGyK5O7lEBZX12LEZ1Ic=
github.com/cloudnative-pg/barman-cloud v0.4.1-0.20260108104508-ced266c145f5 h1:wPB7VTNgTv6t9sl4QYOBakmVTqHnOdKUht7Q3aL+uns=
github.com/cloudnative-pg/barman-cloud v0.4.1-0.20260108104508-ced266c145f5/go.mod h1:qD0NtJOllNQbRB0MaleuHsZjFYaXtXfdg0HbFTbuHn0=
github.com/cloudnative-pg/cloudnative-pg v1.28.0 h1:vkv0a0ewDSfJOPJrsyUr4uczsxheReAWf/k171V0Dm0=
github.com/cloudnative-pg/cloudnative-pg v1.28.0/go.mod h1:209fkRR6m0vXUVQ9Q498eAPQqN2UlXECbXXtpGsZz3I=
github.com/cloudnative-pg/cnpg-i v0.3.1 h1:fKj8NoToWI11HUL2UWYJBpkVzmaTvbs3kDMo7wQF8RU=
@@ -117,8 +117,8 @@ github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/pprof v0.0.0-20260115054156-294ebfa9ad83 h1:z2ogiKUYzX5Is6zr/vP9vJGqPwcdqsWjOt+V8J7+bTc=
github.com/google/pprof v0.0.0-20260115054156-294ebfa9ad83/go.mod h1:MxpfABSjhmINe3F1It9d+8exIHFvUqtLIRCdOGNXqiI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
@@ -163,10 +163,10 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/ginkgo/v2 v2.27.3 h1:ICsZJ8JoYafeXFFlFAG75a7CxMsJHwgKwtO+82SE9L8=
github.com/onsi/ginkgo/v2 v2.27.3/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM=
github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4=
github.com/onsi/ginkgo/v2 v2.28.1 h1:S4hj+HbZp40fNKuLUQOYLDgZLwNUVn19N3Atb98NCyI=
github.com/onsi/ginkgo/v2 v2.28.1/go.mod h1:CLtbVInNckU3/+gC8LzkGUb9oF+e8W8TdUsxPwvdOgE=
github.com/onsi/gomega v1.39.1 h1:1IJLAad4zjPn2PsnhH70V4DKRFlrCzGBNrNaru+Vf28=
github.com/onsi/gomega v1.39.1/go.mod h1:hL6yVALoTOxeWudERyfppUcZXjMwIMLnuSfruD2lcfg=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -269,24 +269,24 @@ go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/exp v0.0.0-20250718183923-645b1fa84792 h1:R9PFI6EUdfVKgwKjZef7QIwGcBKu86OEFpJ9nUEP2l4=
golang.org/x/exp v0.0.0-20250718183923-645b1fa84792/go.mod h1:A+z0yzpGtvnG90cToK5n2tu8UJVP2XUATh+r+sfOOOc=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0=
gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
@@ -326,12 +326,12 @@ k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20251125145642-4e65d59e963e h1:iW9ChlU0cU16w8MpVYjXk12dqQ4BPFBEgif+ap7/hqQ=
k8s.io/kube-openapi v0.0.0-20251125145642-4e65d59e963e/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
k8s.io/utils v0.0.0-20251222233032-718f0e51e6d2 h1:OfgiEo21hGiwx1oJUU5MpEaeOEg6coWndBkZF/lkFuE=
k8s.io/utils v0.0.0-20251222233032-718f0e51e6d2/go.mod h1:xDxuJ0whA3d0I4mf/C4ppKHxXynQ+fxnkmQH0vTHnuk=
k8s.io/utils v0.0.0-20260108192941-914a6e750570 h1:JT4W8lsdrGENg9W+YwwdLJxklIuKWdRm+BC+xt33FOY=
k8s.io/utils v0.0.0-20260108192941-914a6e750570/go.mod h1:xDxuJ0whA3d0I4mf/C4ppKHxXynQ+fxnkmQH0vTHnuk=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.33.0 h1:qPrZsv1cwQiFeieFlRqT627fVZ+tyfou/+S5S0H5ua0=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.33.0/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A=
sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8=
sigs.k8s.io/controller-runtime v0.23.1 h1:TjJSM80Nf43Mg21+RCy3J70aj/W6KyvDtOlpKf+PupE=
sigs.k8s.io/controller-runtime v0.23.1/go.mod h1:B6COOxKptp+YaUT5q4l6LqUJTRpizbgf9KSRNdQGns0=
sigs.k8s.io/gateway-api v1.4.0 h1:ZwlNM6zOHq0h3WUX2gfByPs2yAEsy/EenYJB78jpQfQ=
sigs.k8s.io/gateway-api v1.4.0/go.mod h1:AR5RSqciWP98OPckEjOjh2XJhAe2Na4LHyXD2FUY7Qk=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
@@ -342,7 +342,7 @@ sigs.k8s.io/kustomize/kyaml v0.21.0 h1:7mQAf3dUwf0wBerWJd8rXhVcnkk5Tvn/q91cGkaP6
sigs.k8s.io/kustomize/kyaml v0.21.0/go.mod h1:hmxADesM3yUN2vbA5z1/YTBnzLJ1dajdqpQonwBL1FQ=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v6 v6.3.1 h1:JrhdFMqOd/+3ByqlP2I45kTOZmTRLBUm5pvRjeheg7E=
sigs.k8s.io/structured-merge-diff/v6 v6.3.1/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/structured-merge-diff/v6 v6.3.2-0.20260122202528-d9cc6641c482 h1:2WOzJpHUBVrrkDjU4KBT8n5LDcj824eX0I5UKcgeRUs=
sigs.k8s.io/structured-merge-diff/v6 v6.3.2-0.20260122202528-d9cc6641c482/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
@@ -1,9 +1,6 @@
processor:
  ignoreGroupVersions:
    - "GVK"
  customMarkers:
    - name: "optional"
      target: field
  ignoreFields:
    # - "status$"
    - "TypeMeta$"
@@ -31,7 +31,7 @@ _Appears in:_
{{ end -}}

{{ range $type.Members -}}
| `{{ .Name }}` _{{ markdownRenderType .Type }}_ | {{ template "type_members" . }} | {{ if not .Markers.optional -}}True{{- end }} | {{ markdownRenderDefault .Default }} | {{ range .Validation -}} {{ markdownRenderFieldDoc . }} <br />{{ end }} |
| `{{ .Name }}` _{{ markdownRenderType .Type }}_ | {{ template "type_members" . }} | {{ if not .Markers.optional -}}True{{- end }} | {{ markdownRenderDefault .Default }} | {{ range .Validation -}}{{- $v := markdownRenderFieldDoc . }}{{- if and $v (ne $v "Optional: \\{\\}") -}} {{ $v }} <br />{{ end }}{{- end }} |
{{ end -}}

{{ end -}}
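The template change above filters the rendered validation docs so that the empty `Optional: \{\}` marker is no longer emitted into the reference tables. A minimal, self-contained sketch of that filtering idea follows; the `renderDocs` helper and its sample data are made up for illustration and are not the crd-ref-docs API:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderDocs mimics the fixed template: every validation entry is rendered
// followed by "<br />", but empty entries and the literal "Optional: \{\}"
// marker are skipped inside the range.
func renderDocs(docs []string) string {
	tmpl := template.Must(template.New("row").Parse(
		`{{ range . }}{{ if and . (ne . "Optional: \\{\\}") }}{{ . }} <br />{{ end }}{{ end }}`))
	var b strings.Builder
	if err := tmpl.Execute(&b, docs); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	// The middle entry is filtered out, the other two survive.
	fmt.Println(renderDocs([]string{"Minimum: 1", `Optional: \{\}`, "Format: int32"}))
}
```

The guard mirrors the `{{ if and $v (ne $v "Optional: \\{\\}") }}` condition in the diff: `and` rejects empty strings, and `ne` rejects the specific noise marker.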
@@ -100,6 +100,7 @@ func Start(ctx context.Context) error {

	if err := mgr.Add(&CatalogMaintenanceRunnable{
		Client: customCacheClient,
		//nolint:staticcheck // SA1019: old API required for RBAC compatibility
		Recorder: mgr.GetEventRecorderFor("policy-runnable"),
		ClusterKey: types.NamespacedName{
			Namespace: namespace,
@@ -43,7 +43,7 @@ const (
// Data is the metadata of this plugin.
var Data = identity.GetPluginMetadataResponse{
	Name: PluginName,
	Version: "0.10.0", // x-release-please-version
	Version: "0.11.0", // x-release-please-version
	DisplayName: "BarmanCloudInstance",
	ProjectUrl: "https://github.com/cloudnative-pg/plugin-barman-cloud",
	RepositoryUrl: "https://github.com/cloudnative-pg/plugin-barman-cloud",
@@ -353,30 +353,31 @@ func reconcilePodSpec(
	sidecarTemplate corev1.Container,
	config sidecarConfiguration,
) error {
	envs := []corev1.EnvVar{
		{
	envs := make([]corev1.EnvVar, 0, 5+len(config.env))
	envs = append(envs,
		corev1.EnvVar{
			Name: "NAMESPACE",
			Value: cluster.Namespace,
		},
		{
		corev1.EnvVar{
			Name: "CLUSTER_NAME",
			Value: cluster.Name,
		},
		{
		corev1.EnvVar{
			// TODO: should we really use this one?
			// should we mount an emptyDir volume just for that?
			Name: "SPOOL_DIRECTORY",
			Value: "/controller/wal-restore-spool",
		},
		{
		corev1.EnvVar{
			Name: "CUSTOM_CNPG_GROUP",
			Value: cluster.GetObjectKind().GroupVersionKind().Group,
		},
		{
		corev1.EnvVar{
			Name: "CUSTOM_CNPG_VERSION",
			Value: cluster.GetObjectKind().GroupVersionKind().Version,
		},
	}
	)

	envs = append(envs, config.env...)
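The rewrite above switches from a slice literal to `make` with an explicit capacity of `5+len(config.env)`, so the fixed entries and the caller-supplied extras can be appended without any reallocation. A minimal sketch of the same pattern, using string stand-ins rather than `corev1.EnvVar` and an invented `buildEnv` name:

```go
package main

import "fmt"

// buildEnv pre-sizes the slice for the five fixed entries plus the extras,
// then appends both groups; with the capacity known up front, neither
// append triggers a reallocation.
func buildEnv(extra []string) []string {
	envs := make([]string, 0, 5+len(extra))
	envs = append(envs,
		"NAMESPACE",
		"CLUSTER_NAME",
		"SPOOL_DIRECTORY",
		"CUSTOM_CNPG_GROUP",
		"CUSTOM_CNPG_VERSION",
	)
	envs = append(envs, extra...)
	return envs
}

func main() {
	envs := buildEnv([]string{"EXTRA_ONE"})
	// Length and capacity match exactly: no over-allocation, no growth.
	fmt.Println(len(envs), cap(envs))
}
```

This is the usual micro-optimization for slices whose final size is known before the first append.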
@@ -37,6 +37,9 @@ func CollectSecretNamesFromCredentials(barmanCredentials *barmanapi.BarmanCreden
		)
	}
	if barmanCredentials.Azure != nil {
		// When using default Azure credentials or managed identity, no secrets are required
		if !barmanCredentials.Azure.UseDefaultAzureCredentials &&
			!barmanCredentials.Azure.InheritFromAzureAD {
			references = append(
				references,
				barmanCredentials.Azure.ConnectionString,
@@ -45,6 +48,7 @@ func CollectSecretNamesFromCredentials(barmanCredentials *barmanapi.BarmanCreden
				barmanCredentials.Azure.StorageSasToken,
			)
		}
	}
	if barmanCredentials.Google != nil {
		references = append(
			references,
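The fix above wraps the Azure secret references in a guard: when `UseDefaultAzureCredentials` or `InheritFromAzureAD` is set, authentication comes from the environment or a managed identity, so no secret references should be collected. A simplified sketch of that control flow, with made-up stand-in types rather than the real barman-cloud API:

```go
package main

import "fmt"

// SecretRef and AzureCredentials are illustrative stand-ins carrying only
// the fields needed to show the guard.
type SecretRef struct{ Name string }

type AzureCredentials struct {
	UseDefaultAzureCredentials bool
	InheritFromAzureAD         bool
	ConnectionString           *SecretRef
	StorageAccount             *SecretRef
}

// collectAzureSecretNames skips every configured reference when the default
// credential chain or Azure AD inheritance is in use, mirroring the diff.
func collectAzureSecretNames(azure *AzureCredentials) []string {
	var names []string
	if azure == nil {
		return names
	}
	if azure.UseDefaultAzureCredentials || azure.InheritFromAzureAD {
		return names // managed identity: nothing to mount
	}
	for _, ref := range []*SecretRef{azure.ConnectionString, azure.StorageAccount} {
		if ref != nil {
			names = append(names, ref.Name)
		}
	}
	return names
}

func main() {
	explicit := &AzureCredentials{ConnectionString: &SecretRef{Name: "azure-secret"}}
	managed := &AzureCredentials{UseDefaultAzureCredentials: true, ConnectionString: &SecretRef{Name: "azure-secret"}}
	fmt.Println(collectAzureSecretNames(explicit), collectAzureSecretNames(managed))
}
```

The nil-reference filtering at the end matches the behavior exercised by the "should skip nil secret references" test added in this change set.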
227
internal/cnpgi/operator/specs/secrets_test.go
Normal file
@ -0,0 +1,227 @@
|
||||
/*
|
||||
Copyright © contributors to CloudNativePG, established as
|
||||
CloudNativePG a Series of LF Projects, LLC.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
||||
SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
package specs
|
||||
|
||||
import (
|
||||
barmanapi "github.com/cloudnative-pg/barman-cloud/pkg/api"
|
||||
machineryapi "github.com/cloudnative-pg/machinery/pkg/api"
|
||||
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
var _ = Describe("CollectSecretNamesFromCredentials", func() {
|
||||
Context("when collecting secrets from AWS credentials", func() {
|
||||
It("should return secret names from S3 credentials", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
AWS: &barmanapi.S3Credentials{
|
||||
AccessKeyIDReference: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "aws-secret",
|
||||
},
|
||||
Key: "access-key-id",
|
||||
},
|
||||
SecretAccessKeyReference: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "aws-secret",
|
||||
},
|
||||
Key: "secret-access-key",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(ContainElement("aws-secret"))
|
||||
})
|
||||
|
||||
It("should handle nil AWS credentials", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
|
||||
Context("when collecting secrets from Azure credentials", func() {
|
||||
It("should return secret names when using explicit credentials", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
Azure: &barmanapi.AzureCredentials{
|
||||
ConnectionString: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "azure-secret",
|
||||
},
|
||||
Key: "connection-string",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(ContainElement("azure-secret"))
|
||||
})
|
||||
|
||||
It("should return empty list when using UseDefaultAzureCredentials", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
Azure: &barmanapi.AzureCredentials{
|
||||
UseDefaultAzureCredentials: true,
|
||||
ConnectionString: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "azure-secret",
|
||||
},
|
||||
Key: "connection-string",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return empty list when using InheritFromAzureAD", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
Azure: &barmanapi.AzureCredentials{
|
||||
InheritFromAzureAD: true,
|
||||
},
|
||||
}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return secret names for storage account and key", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
Azure: &barmanapi.AzureCredentials{
|
||||
StorageAccount: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "azure-storage",
|
||||
},
|
||||
Key: "account-name",
|
||||
},
|
||||
StorageKey: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "azure-storage",
|
||||
},
|
||||
Key: "account-key",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(ContainElement("azure-storage"))
|
||||
})
|
||||
})
|
||||
|
||||
Context("when collecting secrets from Google credentials", func() {
|
||||
It("should return secret names from Google credentials", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
Google: &barmanapi.GoogleCredentials{
|
||||
ApplicationCredentials: &machineryapi.SecretKeySelector{
|
||||
LocalObjectReference: machineryapi.LocalObjectReference{
|
||||
Name: "google-secret",
|
||||
},
|
||||
Key: "credentials.json",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
secrets := CollectSecretNamesFromCredentials(credentials)
|
||||
Expect(secrets).To(ContainElement("google-secret"))
|
||||
})
|
||||
})
|
||||
|
||||
Context("when collecting secrets from multiple cloud providers", func() {
|
||||
It("should return secret names from all providers", func() {
|
||||
credentials := &barmanapi.BarmanCredentials{
|
||||
AWS: &barmanapi.S3Credentials{
|
||||
					AccessKeyIDReference: &machineryapi.SecretKeySelector{
						LocalObjectReference: machineryapi.LocalObjectReference{
							Name: "aws-secret",
						},
						Key: "access-key-id",
					},
				},
				Azure: &barmanapi.AzureCredentials{
					ConnectionString: &machineryapi.SecretKeySelector{
						LocalObjectReference: machineryapi.LocalObjectReference{
							Name: "azure-secret",
						},
						Key: "connection-string",
					},
				},
				Google: &barmanapi.GoogleCredentials{
					ApplicationCredentials: &machineryapi.SecretKeySelector{
						LocalObjectReference: machineryapi.LocalObjectReference{
							Name: "google-secret",
						},
						Key: "credentials.json",
					},
				},
			}

			secrets := CollectSecretNamesFromCredentials(credentials)
			Expect(secrets).To(ContainElements("aws-secret", "azure-secret", "google-secret"))
		})

		It("should skip Azure secrets when using UseDefaultAzureCredentials with other providers", func() {
			credentials := &barmanapi.BarmanCredentials{
				AWS: &barmanapi.S3Credentials{
					AccessKeyIDReference: &machineryapi.SecretKeySelector{
						LocalObjectReference: machineryapi.LocalObjectReference{
							Name: "aws-secret",
						},
						Key: "access-key-id",
					},
				},
				Azure: &barmanapi.AzureCredentials{
					UseDefaultAzureCredentials: true,
					ConnectionString: &machineryapi.SecretKeySelector{
						LocalObjectReference: machineryapi.LocalObjectReference{
							Name: "azure-secret",
						},
						Key: "connection-string",
					},
				},
			}

			secrets := CollectSecretNamesFromCredentials(credentials)
			Expect(secrets).To(ContainElement("aws-secret"))
			Expect(secrets).NotTo(ContainElement("azure-secret"))
		})
	})

	Context("when handling nil references", func() {
		It("should skip nil secret references", func() {
			credentials := &barmanapi.BarmanCredentials{
				AWS: &barmanapi.S3Credentials{
					AccessKeyIDReference: &machineryapi.SecretKeySelector{
						LocalObjectReference: machineryapi.LocalObjectReference{
							Name: "aws-secret",
						},
						Key: "access-key-id",
					},
					SecretAccessKeyReference: nil,
				},
			}

			secrets := CollectSecretNamesFromCredentials(credentials)
			Expect(secrets).To(ContainElement("aws-secret"))
			Expect(len(secrets)).To(Equal(1))
		})
	})
})
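The ginkgo tests above exercise `CollectSecretNamesFromCredentials`. As a hypothetical simplification (not the repository's actual implementation, which covers all providers and reference fields), the collection logic they describe can be sketched in plain Go:

```go
package main

import "fmt"

// Simplified stand-ins for the machinery/barman API types used in the tests.
type SecretKeySelector struct {
	Name string
	Key  string
}

type S3Credentials struct {
	AccessKeyIDReference     *SecretKeySelector
	SecretAccessKeyReference *SecretKeySelector
}

type AzureCredentials struct {
	UseDefaultAzureCredentials bool
	ConnectionString           *SecretKeySelector
}

type BarmanCredentials struct {
	AWS   *S3Credentials
	Azure *AzureCredentials
}

// collectSecretNames gathers secret names from the credential references,
// skipping nil references and skipping Azure secrets entirely when the
// default-credentials flow is enabled, as the tests above require.
func collectSecretNames(c *BarmanCredentials) []string {
	var names []string
	add := func(s *SecretKeySelector) {
		if s != nil {
			names = append(names, s.Name)
		}
	}
	if c.AWS != nil {
		add(c.AWS.AccessKeyIDReference)
		add(c.AWS.SecretAccessKeyReference)
	}
	if c.Azure != nil && !c.Azure.UseDefaultAzureCredentials {
		add(c.Azure.ConnectionString)
	}
	return names
}

func main() {
	c := &BarmanCredentials{
		AWS:   &S3Credentials{AccessKeyIDReference: &SecretKeySelector{Name: "aws-secret", Key: "access-key-id"}},
		Azure: &AzureCredentials{UseDefaultAzureCredentials: true, ConnectionString: &SecretKeySelector{Name: "azure-secret"}},
	}
	fmt.Println(collectSecretNames(c)) // prints [aws-secret]
}
```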
32	internal/cnpgi/operator/specs/suite_test.go	Normal file
@@ -0,0 +1,32 @@
|
||||
/*
|
||||
Copyright © contributors to CloudNativePG, established as
|
||||
CloudNativePG a Series of LF Projects, LLC.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
||||
SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
package specs
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSpecs(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
RunSpecs(t, "Specs Suite")
|
||||
}
|
||||
@@ -107,6 +107,11 @@ spec:
                     - key
                     - name
                   type: object
+                useDefaultAzureCredentials:
+                  description: |-
+                    Use the default Azure authentication flow, which includes DefaultAzureCredential.
+                    This allows authentication using environment variables and managed identities.
+                  type: boolean
               type: object
           data:
             description: |-
@@ -11,6 +11,17 @@
   ],
   rebaseWhen: 'never',
   prConcurrentLimit: 5,
+  // Override default ignorePaths to scan test/e2e for emulator image dependencies
+  // Removed: '**/test/**'
+  ignorePaths: [
+    '**/node_modules/**',
+    '**/bower_components/**',
+    '**/vendor/**',
+    '**/examples/**',
+    '**/__tests__/**',
+    '**/tests/**',
+    '**/__fixtures__/**',
+  ],
   lockFileMaintenance: {
     enabled: true,
   },
@@ -28,7 +39,7 @@
     {
       customType: 'regex',
       managerFilePatterns: [
-        '/(^Taskfile\\.yml$)/',
+        '/(^|/)Taskfile\\.yml$/',
       ],
       matchStrings: [
         '# renovate: datasource=(?<datasource>[a-z-.]+?) depName=(?<depName>[^\\s]+?)(?: (?:lookupName|packageName)=(?<packageName>[^\\s]+?))?(?: versioning=(?<versioning>[^\\s]+?))?(?: extractVersion=(?<extractVersion>[^\\s]+?))?(?: currentValue=(?<currentValue>[^\\s]+?))?\\s+[A-Za-z0-9_]+?_SHA\\s*:\\s*["\']?(?<currentDigest>[a-f0-9]+?)["\']?\\s',
@@ -38,7 +49,16 @@
     {
       customType: 'regex',
       managerFilePatterns: [
-        '/(^docs/config\\.yaml$)/',
+        '/\\.go$/',
       ],
       matchStrings: [
+        '//\\s*renovate:\\s*datasource=(?<datasource>[a-z-.]+?)\\s+depName=(?<depName>[^\\s]+?)(?:\\s+versioning=(?<versioning>[^\\s]+?))?\\s*\\n\\s*//\\s*Version:\\s*(?<currentValue>[^\\s]+?)\\s*\\n\\s*Image:\\s*"[^@]+@(?<currentDigest>sha256:[a-f0-9]+)"',
+      ],
+    },
+    {
+      customType: 'regex',
+      managerFilePatterns: [
+        '/(^|/)docs/config\\.yaml$/',
+      ],
+      matchStrings: [
         '# renovate: datasource=(?<datasource>[a-z-.]+?) depName=(?<depName>[^\\s]+?)(?: (?:lookupName|packageName)=(?<packageName>[^\\s]+?))?(?: versioning=(?<versioning>[^\\s]+?))?(?: extractVersion=(?<extractVersion>[^\\s]+?))?\\s+kubernetesVersion:\\s*["\']?(?<currentValue>.+?)["\']?\\s',
@@ -59,12 +79,6 @@
       enabled: false,
     },
   packageRules: [
-    {
-      matchPackageNames: [
-        'ghcr.io/cloudnative-pg/plugin-barman-cloud-base',
-      ],
-      versioning: 'loose',
-    },
     {
       matchDatasources: [
         'go',
@@ -71,8 +71,15 @@ func newAzuriteDeployment(namespace, name string) *appsv1.Deployment {
 			Containers: []corev1.Container{
 				{
 					Name: name,
-					// TODO: renovate the image
-					Image: "mcr.microsoft.com/azure-storage/azurite",
+					// renovate: datasource=docker depName=mcr.microsoft.com/azure-storage/azurite versioning=docker
+					// Version: 3.35.0
+					Image: "mcr.microsoft.com/azure-storage/azurite@sha256:647c63a91102a9d8e8000aab803436e1fc85fbb285e7ce830a82ee5d6661cf37",
+					Args: []string{
+						"azurite-blob",
+						"--blobHost",
+						"0.0.0.0",
+						"--skipApiVersionCheck",
+					},
 					Ports: []corev1.ContainerPort{
 						{
 							ContainerPort: 10000,
@@ -71,7 +71,9 @@ func newGCSDeployment(namespace, name string) *appsv1.Deployment {
 			Containers: []corev1.Container{
 				{
 					Name: name,
-					Image: "fsouza/fake-gcs-server:latest",
+					// renovate: datasource=docker depName=fsouza/fake-gcs-server versioning=docker
+					// Version: 1.53.0
+					Image: "fsouza/fake-gcs-server@sha256:73d85653b92da21a93759b8c319e32916de748261161413c205d6b6762c21929",
 					Ports: []corev1.ContainerPort{
 						{
 							ContainerPort: 4443,
@@ -71,8 +71,9 @@ func newMinioDeployment(namespace, name string) *appsv1.Deployment {
 			Containers: []corev1.Container{
 				{
 					Name: name,
-					// TODO: renovate the image
-					Image: "minio/minio:latest",
+					// renovate: datasource=docker depName=minio/minio versioning=docker
+					// Version: RELEASE.2025-09-07T16-13-09Z
+					Image: "minio/minio@sha256:14cea493d9a34af32f524e538b8346cf79f3321eff8e708c1e2960462bd8936e",
 					Args: []string{"server", "/data"},
 					Ports: []corev1.ContainerPort{
 						{
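The three hunks above replace mutable `:latest` tags with digest-pinned references annotated for Renovate. The property being enforced, that an image reference is pinned by immutable digest rather than by tag, can be sketched with a small helper (illustrative, not part of the repository):

```go
package main

import (
	"fmt"
	"strings"
)

// isDigestPinned reports whether an image reference is pinned by a sha256
// digest, which is what the e2e deployment hunks above migrate to.
func isDigestPinned(ref string) bool {
	return strings.Contains(ref, "@sha256:")
}

func main() {
	fmt.Println(isDigestPinned("minio/minio:latest"))
	fmt.Println(isDigestPinned("minio/minio@sha256:14cea493d9a34af32f524e538b8346cf79f3321eff8e708c1e2960462bd8936e"))
	// prints:
	// false
	// true
}
```

Digest pinning makes the e2e environment reproducible while the `// renovate:` comment lets Renovate bump the digest automatically when a new upstream version is released.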
@@ -29,6 +29,16 @@ the specific object storage provider you are using.

 The following sections detail the setup for each.

+:::note Authentication Methods
+The Barman Cloud Plugin does not independently test all authentication methods
+supported by `barman-cloud`. The plugin's responsibility is limited to passing
+the provided credentials to `barman-cloud`, which then handles authentication
+according to its own implementation. Users should refer to the
+[Barman Cloud documentation](https://docs.pgbarman.org/release/latest/) to
+verify that their chosen authentication method is supported and properly
+configured.
+:::
+
 ---

 ## AWS S3
@@ -103,7 +113,7 @@ spec:

 ### S3 Lifecycle Policy

-Barman Cloud uploads backup files to S3 but does not modify or delete them afterward.
+Barman Cloud uploads backup files to S3 but does not modify them afterward.
 To enhance data durability and protect against accidental or malicious loss,
 it's recommended to implement the following best practices:

@@ -119,7 +129,6 @@
 These strategies help you safeguard backups without requiring broad delete
 permissions, ensuring both security and compliance with minimal operational
 overhead.

-
 ### S3-Compatible Storage Providers

 You can use S3-compatible services like **MinIO**, **Linode (Akamai) Object Storage**,
@@ -230,14 +239,18 @@ is Microsoft’s cloud-based object storage solution.

 Barman Cloud supports the following authentication methods:

 - [Connection String](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string)
-- Storage Account Name + [Access Key](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
-- Storage Account Name + [SAS Token](https://learn.microsoft.com/en-us/azure/storage/blobs/sas-service-create)
-- [Azure AD Workload Identity](https://azure.github.io/azure-workload-identity/docs/introduction.html)
+- Storage Account Name + [Storage Account Access Key](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
+- Storage Account Name + [Storage Account SAS Token](https://learn.microsoft.com/en-us/azure/storage/blobs/sas-service-create)
+- [Azure AD Managed Identity](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview)
+- [Default Azure Credentials](https://learn.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet)

-### Azure AD Workload Identity
+### Azure AD Managed Identity

-This method avoids storing credentials in Kubernetes via the
-`.spec.configuration.inheritFromAzureAD` option:
+This method avoids storing credentials in Kubernetes by using the
+[Azure Managed Identities](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview) authentication mechanism.
+It can be enabled by setting the `inheritFromAzureAD` option to `true`.
+Managed Identity can be configured for the AKS cluster by following
+the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/use-managed-identity?pivots=system-assigned).

 ```yaml
 apiVersion: barmancloud.cnpg.io/v1
@@ -252,6 +265,36 @@ spec:
   [...]
 ```

+### Default Azure Credentials
+
+The `useDefaultAzureCredentials` option enables the default Azure credentials
+flow, which uses [`DefaultAzureCredential`](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential)
+to automatically discover and use available credentials in the following order:
+
+1. **Environment Variables** — `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, and `AZURE_TENANT_ID` for Service Principal authentication
+2. **Managed Identity** — Uses the managed identity assigned to the pod
+3. **Azure CLI** — Uses credentials from the Azure CLI if available
+4. **Azure PowerShell** — Uses credentials from Azure PowerShell if available
+
+This approach is particularly useful for getting started with development and
+testing; it allows the SDK to attempt multiple authentication mechanisms
+seamlessly across different environments. However, this is not recommended for
+production. Please refer to the
+[official Azure guidance](https://learn.microsoft.com/en-us/dotnet/azure/sdk/authentication/credential-chains?tabs=dac#usage-guidance-for-defaultazurecredential)
+for a comprehensive understanding of `DefaultAzureCredential`.
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+  name: azure-store
+spec:
+  configuration:
+    destinationPath: "<destination path here>"
+    azureCredentials:
+      useDefaultAzureCredentials: true
+  [...]
+```
+
 ### Access Key, SAS Token, or Connection String

 Store credentials in a Kubernetes secret:
@@ -206,7 +206,7 @@ When a backup fails, follow these steps in order:
        plugins:
          - name: barman-cloud.cloudnative-pg.io
            parameters:
-             barmanObjectStore: <your-objectstore-name>
+             barmanObjectName: <your-objectstore-name>
      ```

    c. **Check plugin deployment is running**:
@@ -406,7 +406,7 @@ For detailed PITR configuration and WAL management, see the

 3. **Adjust provider-specific settings (endpoint, path style, etc.)**
    - See [Object Store Configuration](object_stores.md) for provider-specific settings
-   - Ensure `endpointURL` and `s3UsePathStyle` match your storage type
+   - Ensure `endpointURL` is set correctly for your storage provider
    - Verify network policies allow egress to your storage provider

 ## Diagnostic Commands
@@ -103,7 +103,7 @@ spec:

 ### S3 Lifecycle Policy

-Barman Cloud uploads backup files to S3 but does not modify or delete them afterward.
+Barman Cloud uploads backup files to S3 but does not modify them afterward.
 To enhance data durability and protect against accidental or malicious loss,
 it's recommended to implement the following best practices:

@@ -206,7 +206,7 @@ When a backup fails, follow these steps in order:
        plugins:
          - name: barman-cloud.cloudnative-pg.io
            parameters:
-             barmanObjectStore: <your-objectstore-name>
+             barmanObjectName: <your-objectstore-name>
      ```

    c. **Check plugin deployment is running**:
@@ -406,7 +406,7 @@ For detailed PITR configuration and WAL management, see the

 3. **Adjust provider-specific settings (endpoint, path style, etc.)**
    - See [Object Store Configuration](object_stores.md) for provider-specific settings
-   - Ensure `endpointURL` and `s3UsePathStyle` match your storage type
+   - Ensure `endpointURL` matches your storage type
    - Verify network policies allow egress to your storage provider

 ## Diagnostic Commands
43	web/versioned_docs/version-0.11.0/compression.md	Normal file
@@ -0,0 +1,43 @@
---
sidebar_position: 80
---

# Compression

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

By default, backups and WAL files are archived **uncompressed**. However, the
Barman Cloud Plugin supports multiple compression algorithms via
`barman-cloud-backup` and `barman-cloud-wal-archive`, allowing you to optimize
for space, speed, or a balance of both.

### Supported Compression Algorithms

- `bzip2`
- `gzip`
- `lz4` (WAL only)
- `snappy`
- `xz` (WAL only)
- `zstd` (WAL only)

Compression settings for base backups and WAL archives are configured
independently. For implementation details, refer to the corresponding API
definitions:

- [`DataBackupConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration)
- [`WALBackupConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#WalBackupConfiguration)

:::important
Compression impacts both performance and storage efficiency. Choose the right
algorithm based on your recovery time objectives (RTO), storage capacity, and
network throughput.
:::

## Compression Benchmark (on MinIO)

| Compression | Backup Time (ms) | Restore Time (ms) | Uncompressed Size (MB) | Compressed Size (MB) | Ratio |
| ----------- | ---------------- | ----------------- | ---------------------- | -------------------- | ----- |
| None        | 10,927           | 7,553             | 395                    | 395                  | 1.0:1 |
| bzip2       | 25,404           | 13,886            | 395                    | 67                   | 5.9:1 |
| gzip        | 116,281          | 3,077             | 395                    | 91                   | 4.3:1 |
| snappy      | 8,134            | 8,341             | 395                    | 166                  | 2.4:1 |
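As an illustration of how these algorithms are selected, a sketch of an `ObjectStore` with WAL and base-backup compression configured independently follows; the resource name and destination are placeholders, and the `compression` fields are assumed to follow the barman-cloud API referenced above:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-store   # placeholder name
spec:
  configuration:
    destinationPath: "s3://backups/"   # placeholder destination
    # credentials omitted for brevity
    wal:
      compression: snappy   # fast archiving of WAL segments
    data:
      compression: snappy   # base backup compression, set separately
```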
177	web/versioned_docs/version-0.11.0/concepts.md	Normal file
@@ -0,0 +1,177 @@
---
sidebar_position: 10
---

# Main Concepts

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

:::important
Before proceeding, make sure to review the following sections of the
CloudNativePG documentation:

- [**Backup**](https://cloudnative-pg.io/documentation/current/backup/)
- [**WAL Archiving**](https://cloudnative-pg.io/documentation/current/wal_archiving/)
- [**Recovery**](https://cloudnative-pg.io/documentation/current/recovery/)
:::

The **Barman Cloud Plugin** enables **hot (online) backups** of PostgreSQL
clusters in CloudNativePG through [`barman-cloud`](https://pgbarman.org),
supporting continuous physical backups and WAL archiving to an **object
store**—without interrupting write operations.

It also supports both **full recovery** and **Point-in-Time Recovery (PITR)**
of a PostgreSQL cluster.

## The Object Store

At the core is the [`ObjectStore` custom resource (CRD)](plugin-barman-cloud.v1.md#objectstorespec),
which acts as the interface between the PostgreSQL cluster and the target
object storage system. It allows you to configure:

- **Authentication and bucket location** via the `.spec.configuration` section
- **WAL archiving** settings—such as compression type, parallelism, and
  server-side encryption—under `.spec.configuration.wal`
- **Base backup options**—with similar settings for compression, concurrency,
  and encryption—under `.spec.configuration.data`
- **Retention policies** to manage the life-cycle of archived WALs and backups
  via `.spec.configuration.retentionPolicy`

WAL files are archived in the `wals` directory, while base backups are stored
as **tarballs** in the `base` directory, following the
[Barman Cloud convention](https://docs.pgbarman.org/cloud/latest/usage/#object-store-layout).

The plugin also offers advanced capabilities, including
[backup tagging](misc.md#backup-object-tagging) and
[extra options for backups and WAL archiving](misc.md#extra-options-for-backup-and-wal-archiving).

:::tip
For details, refer to the
[API reference for the `ObjectStore` resource](plugin-barman-cloud.v1.md#objectstorespec).
:::

## Integration with a CloudNativePG Cluster

CloudNativePG can delegate continuous backup and recovery responsibilities to
the **Barman Cloud Plugin** by configuring the `.spec.plugins` section of a
`Cluster` resource. This setup requires a corresponding `ObjectStore` resource
to be defined.

:::important
While it is technically possible to reuse the same `ObjectStore` for multiple
`Cluster` resources within the same namespace, it is strongly recommended to
dedicate one object store per PostgreSQL cluster to ensure data isolation and
operational clarity.
:::

The following example demonstrates how to configure a CloudNativePG cluster
named `cluster-example` to use a previously defined `ObjectStore` (also named
`cluster-example`) in the same namespace. Setting `isWALArchiver: true` enables
WAL archiving through the plugin:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  # Other cluster settings...
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: cluster-example
```

## Backup of a Postgres Cluster

Once the object store is defined and the `Cluster` is configured to use the
Barman Cloud Plugin, **WAL archiving is activated immediately** on the
PostgreSQL primary.

Physical base backups are seamlessly managed by CloudNativePG using the
`Backup` and `ScheduledBackup` resources, respectively for
[on-demand](https://cloudnative-pg.io/documentation/current/backup/#on-demand-backups)
and
[scheduled](https://cloudnative-pg.io/documentation/current/backup/#scheduled-backups)
backups.

To use the Barman Cloud Plugin, you must set the `method` to `plugin` and
configure the `pluginConfiguration` section as shown:

```yaml
[...]
spec:
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
[...]
```
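Put together, a complete on-demand `Backup` resource using this method might look like the following sketch (resource and cluster names are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: backup-example   # illustrative name
spec:
  cluster:
    name: cluster-example
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```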
With this configuration, CloudNativePG supports:

- Backups from both **primary** and **standby** instances
- Backups from **designated primaries** in a distributed topology using
  [replica clusters](https://cloudnative-pg.io/documentation/current/replica_cluster/)

:::tip
For details on how to back up from a standby, refer to the official documentation:
[Backup from a standby](https://cloudnative-pg.io/documentation/current/backup/#backup-from-a-standby).
:::

:::important
Both backup and WAL archiving operations are executed by sidecar containers
running in the same pod as the PostgreSQL `Cluster` primary instance—except
when backups are taken from a standby, in which case the sidecar runs alongside
the standby pod.
The sidecar containers use a [dedicated container image](images.md) that
includes only the supported version of Barman Cloud.
:::

## Recovery of a Postgres Cluster

In PostgreSQL, *recovery* refers to the process of starting a database instance
from an existing backup. The Barman Cloud Plugin integrates with CloudNativePG
to support both **full recovery** and **Point-in-Time Recovery (PITR)** from an
object store.

Recovery in this context is *not in-place*: it bootstraps a brand-new
PostgreSQL cluster from a backup and replays the necessary WAL files to reach
the desired recovery target.

To perform a recovery, define an *external cluster* that references the
appropriate `ObjectStore`, and use it as the source in the `bootstrap` section
of the target cluster:

```yaml
[...]
spec:
  [...]
  bootstrap:
    recovery:
      source: source
  externalClusters:
    - name: source
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: cluster-example
          serverName: cluster-example
[...]
```

The critical element here is the `externalClusters` section of the `Cluster`
resource, where the `plugin` stanza instructs CloudNativePG to use the Barman
Cloud Plugin to access the object store for recovery.

This same mechanism can be used for a variety of scenarios enabled by the
CloudNativePG API, including:

* **Full cluster recovery** from the latest backup
* **Point-in-Time Recovery (PITR)**
* Bootstrapping **replica clusters** in a distributed topology
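As an illustration of the PITR scenario, a recovery target can be expressed in the `bootstrap.recovery` stanza via `recoveryTarget` (the timestamp below is a placeholder):

```yaml
bootstrap:
  recovery:
    source: source
    recoveryTarget:
      # Replay archived WALs up to this point in time (placeholder value)
      targetTime: "2024-05-01 10:00:00+00"
```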

:::tip
For complete instructions and advanced use cases, refer to the official
[Recovery documentation](https://cloudnative-pg.io/documentation/current/recovery/).
:::
37	web/versioned_docs/version-0.11.0/images.md	Normal file
@@ -0,0 +1,37 @@
---
sidebar_position: 99
---

# Container Images

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

The Barman Cloud Plugin is distributed using two container images:

- One for deploying the plugin components
- One for the sidecar that runs alongside each PostgreSQL instance in a
  CloudNativePG `Cluster` using the plugin

## Plugin Container Image

The plugin image contains the logic required to operate the Barman Cloud Plugin
within your Kubernetes environment with CloudNativePG. It is published on the
GitHub Container Registry at `ghcr.io/cloudnative-pg/plugin-barman-cloud`.

This image is built from the
[`Dockerfile.plugin`](https://github.com/cloudnative-pg/plugin-barman-cloud/blob/main/containers/Dockerfile.plugin)
in the plugin repository.

## Sidecar Container Image

The sidecar image is used within each PostgreSQL pod in the cluster. It
includes the latest supported version of Barman Cloud and is responsible for
performing WAL archiving and backups on behalf of CloudNativePG.

It is available at `ghcr.io/cloudnative-pg/plugin-barman-cloud-sidecar` and is
built from the
[`Dockerfile.sidecar`](https://github.com/cloudnative-pg/plugin-barman-cloud/blob/main/containers/Dockerfile.sidecar).

These sidecar images are designed to work seamlessly with the
[`minimal` PostgreSQL container images](https://github.com/cloudnative-pg/postgres-containers?tab=readme-ov-file#minimal-images)
maintained by the CloudNativePG Community.
124	web/versioned_docs/version-0.11.0/installation.mdx	Normal file
@@ -0,0 +1,124 @@
---
sidebar_position: 20
---

# Installation

:::important
1. The plugin **must** be installed in the same namespace as the CloudNativePG
   operator (typically `cnpg-system`).

2. Keep in mind that the operator's **listening namespaces** may differ from its
   installation namespace. Double-check this to avoid configuration issues.
:::

## Verifying the Requirements

Before installing the plugin, make sure the [requirements](intro.md#requirements) are met.

### CloudNativePG Version

Ensure you're running a version of CloudNativePG that is compatible with the
plugin. If installed in the default `cnpg-system` namespace, you can verify the
version with:

```sh
kubectl get deployment -n cnpg-system cnpg-controller-manager \
  -o jsonpath="{.spec.template.spec.containers[*].image}"
```

Example output:

```output
ghcr.io/cloudnative-pg/cloudnative-pg:1.26.0
```

The version **must be 1.26 or newer**.
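The tag printed by the command above can be checked programmatically, for instance in a CI pre-flight step. The snippet below is a hypothetical helper, assuming a semver-style tag; the `IMAGE` value is a placeholder standing in for the command's output:

```shell
# Hypothetical pre-flight check: extract the tag from the image reference
# and verify that it is at least 1.26.
IMAGE="ghcr.io/cloudnative-pg/cloudnative-pg:1.26.0"   # placeholder for the kubectl output
VERSION="${IMAGE##*:}"          # strip everything up to the last ':'
MAJOR="${VERSION%%.*}"          # first dotted component
REST="${VERSION#*.}"
MINOR="${REST%%.*}"             # second dotted component
if [ "$MAJOR" -gt 1 ] || { [ "$MAJOR" -eq 1 ] && [ "$MINOR" -ge 26 ]; }; then
  echo "compatible"
else
  echo "too old: $VERSION"
fi
```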
### cert-manager

Use the [cmctl](https://cert-manager.io/docs/reference/cmctl/#installation)
tool to confirm that `cert-manager` is installed and available:

```sh
cmctl check api
```

Example output:

```output
The cert-manager API is ready
```

Both checks are required before proceeding with the installation.

## Installing the Barman Cloud Plugin

import { InstallationSnippet } from '@site/src/components/Installation';

### Directly using the manifest

Install the plugin using `kubectl` by applying the manifest for the latest
release:

<InstallationSnippet />

Example output:

```output
customresourcedefinition.apiextensions.k8s.io/objectstores.barmancloud.cnpg.io created
serviceaccount/plugin-barman-cloud created
role.rbac.authorization.k8s.io/leader-election-role created
clusterrole.rbac.authorization.k8s.io/metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/metrics-reader created
clusterrole.rbac.authorization.k8s.io/objectstore-editor-role created
clusterrole.rbac.authorization.k8s.io/objectstore-viewer-role created
clusterrole.rbac.authorization.k8s.io/plugin-barman-cloud created
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/metrics-auth-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/plugin-barman-cloud-binding created
secret/plugin-barman-cloud-8tfddg42gf created
service/barman-cloud created
deployment.apps/barman-cloud configured
certificate.cert-manager.io/barman-cloud-client created
certificate.cert-manager.io/barman-cloud-server created
issuer.cert-manager.io/selfsigned-issuer created
```

Finally, check that the deployment is up and running:

```sh
kubectl rollout status deployment \
  -n cnpg-system barman-cloud
```

Example output:

```output
deployment "barman-cloud" successfully rolled out
```

This confirms that the plugin is deployed and ready to use.

### Using the Helm Chart

The plugin can be installed using the provided [Helm chart](https://github.com/cloudnative-pg/charts/tree/main/charts/plugin-barman-cloud):

```sh
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install barman-cloud \
  --namespace cnpg-system \
  --create-namespace \
  --version 0.5.0 \
  cnpg/plugin-barman-cloud
```

## Testing the latest development snapshot

You can also test the latest development snapshot of the plugin with the
following command:

```sh
kubectl apply -f \
  https://raw.githubusercontent.com/cloudnative-pg/plugin-barman-cloud/refs/heads/main/manifest.yaml
```
86
web/versioned_docs/version-0.11.0/intro.md
Normal file
86
web/versioned_docs/version-0.11.0/intro.md
Normal file
@ -0,0 +1,86 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: "Introduction"
|
||||
---
|
||||
|
||||
# Barman Cloud Plugin
|
||||
|
||||
<!-- SPDX-License-Identifier: CC-BY-4.0 -->
|
||||
|
||||
The **Barman Cloud Plugin** for [CloudNativePG](https://cloudnative-pg.io/)
|
||||
enables online continuous physical backups of PostgreSQL clusters to object storage
|
||||
using the `barman-cloud` suite from the [Barman](https://docs.pgbarman.org/release/latest/)
|
||||
project.
|
||||
|
||||
:::important
|
||||
If you plan to migrate your existing CloudNativePG cluster to the new
|
||||
plugin-based approach using the Barman Cloud Plugin, see
|
||||
["Migrating from Built-in CloudNativePG Backup"](migration.md)
|
||||
for detailed instructions.
|
||||
:::
|
||||
|
||||
## Requirements
|
||||
|
||||
Before using the Barman Cloud Plugin, ensure that the following components are
|
||||
installed and properly configured:
|
||||
|
||||
- [CloudNativePG](https://cloudnative-pg.io) version 1.26 or later
|
||||
|
||||
- We strongly recommend version 1.27.0 or later, which includes improved
|
||||
error handling and status reporting for the plugin.
|
||||
- If you are running an earlier release, refer to the
|
||||
[upgrade guide](https://cloudnative-pg.io/documentation/current/installation_upgrade).
|
||||
|
||||
- [cert-manager](https://cert-manager.io/)
|
||||
|
||||
- The recommended way to enable secure TLS communication between the plugin
|
||||
and the operator.
|
||||
- Alternatively, you can provide your own certificate bundles. See the
|
||||
[CloudNativePG documentation on TLS configuration](https://cloudnative-pg.io/documentation/current/cnpg_i/#configuring-tls-certificates).
|
||||
|
||||
- [`kubectl-cnpg`](https://cloudnative-pg.io/documentation/current/kubectl-plugin/)
|
||||
plugin (optional but recommended)
|
||||
|
||||
- Simplifies debugging and monitoring with additional status and inspection
|
||||
commands.
|
||||
- Multiple installation options are available in the
|
||||
[installation guide](https://cloudnative-pg.io/documentation/current/kubectl-plugin/#install).
|
||||
|
||||
## Key Features

This plugin provides the following capabilities:

- Physical online backup of the data directory
- Physical restore of the data directory
- Write-Ahead Log (WAL) archiving
- WAL restore
- Full cluster recovery
- Point-in-Time Recovery (PITR)
- Seamless integration with replica clusters for bootstrap and WAL restore
  from the archive

:::important
The Barman Cloud Plugin is designed to **replace the in-tree object storage
support** previously provided via the `.spec.backup.barmanObjectStore` section
in the `Cluster` resource.
Backups created using the in-tree approach are fully supported and compatible
with this plugin.
:::

## Supported Object Storage Providers

The plugin works with all storage backends supported by `barman-cloud`,
including:

- **Amazon S3**
- **Google Cloud Storage**
- **Microsoft Azure Blob Storage**

In addition, the following S3-compatible and simulator solutions have been
tested and verified:

- [MinIO](https://min.io/) – an S3-compatible storage solution
- [Azurite](https://github.com/Azure/Azurite) – a simulator for Azure Blob Storage
- [fake-gcs-server](https://github.com/fsouza/fake-gcs-server) – a simulator for Google Cloud Storage

:::tip
For more details, refer to [Object Store Providers](object_stores.md).
:::

web/versioned_docs/version-0.11.0/migration.md

---
sidebar_position: 40
---

# Migrating from Built-in CloudNativePG Backup

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

The in-tree support for Barman Cloud in CloudNativePG is **deprecated starting
from version 1.26** and will be removed in a future release.

If you're currently relying on the built-in Barman Cloud integration, you can
migrate seamlessly to the new **plugin-based architecture** using the Barman
Cloud Plugin, without data loss. Follow these steps:

- [Install the Barman Cloud Plugin](installation.mdx)
- Create an `ObjectStore` resource by translating the contents of the
  `.spec.backup.barmanObjectStore` section from your existing `Cluster`
  definition
- Modify the `Cluster` resource in a single atomic change to switch from
  in-tree backup to the plugin
- Update any `ScheduledBackup` resources to use the plugin
- Update the `externalClusters` configuration, where applicable

:::tip
For a working example, refer to [this commit](https://github.com/cloudnative-pg/cnpg-playground/commit/596f30e252896edf8f734991c3538df87630f6f7)
from the [CloudNativePG Playground project](https://github.com/cloudnative-pg/cnpg-playground),
which demonstrates a full migration.
:::

---

## Step 1: Define the `ObjectStore`

Begin by creating an `ObjectStore` resource in the same namespace as your
PostgreSQL `Cluster`.

There is a **direct mapping** between the `.spec.backup.barmanObjectStore`
section in CloudNativePG and the `.spec.configuration` field in the
`ObjectStore` CR. The conversion is mostly mechanical, with one key difference:

:::warning
In the plugin architecture, retention policies are defined as part of the
`ObjectStore`. In contrast, the in-tree implementation defined them at the
`Cluster` level.
:::

If your `Cluster` used `.spec.backup.retentionPolicy`, move that configuration
to `.spec.retentionPolicy` in the `ObjectStore`.

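For instance, a cluster that previously declared
`.spec.backup.retentionPolicy: "30d"` would carry that setting in the
`ObjectStore` instead; a minimal sketch, where the store name, the retention
value, and the configuration contents are placeholders:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: my-store
spec:
  # Moved here from .spec.backup.retentionPolicy in the Cluster
  retentionPolicy: "30d"
  configuration:
    # destinationPath, credentials, etc., translated from barmanObjectStore
    destinationPath: s3://backups/
```
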
### Example

Here’s an excerpt from a traditional in-tree CloudNativePG backup configuration
taken from the CloudNativePG Playground project:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-eu
spec:
  # [...]
  backup:
    barmanObjectStore:
      destinationPath: s3://backups/
      endpointURL: http://minio-eu:9000
      s3Credentials:
        accessKeyId:
          name: minio-eu
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-eu
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
```

This configuration translates to the following `ObjectStore` resource for the
plugin:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-eu
spec:
  configuration:
    destinationPath: s3://backups/
    endpointURL: http://minio-eu:9000
    s3Credentials:
      accessKeyId:
        name: minio-eu
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: minio-eu
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
```

As you can see, the contents of `barmanObjectStore` have been copied directly
under the `configuration` field of the `ObjectStore` resource, using the same
secret references.

## Step 2: Update the `Cluster` for plugin WAL archiving

Once the `ObjectStore` resource is in place, update the `Cluster` resource as
follows, in a single atomic change:

- Remove the `.spec.backup.barmanObjectStore` section
- Remove `.spec.backup.retentionPolicy` if it was defined (as it now lives in
  the `ObjectStore`)
- Remove the entire `.spec.backup` section if it is now empty
- Add `barman-cloud.cloudnative-pg.io` to the `plugins` list, as described in
  [Configuring WAL archiving](usage.md#configuring-wal-archiving)

This will trigger a rolling update of the `Cluster`, switching continuous
backup from the in-tree implementation to the plugin-based approach.

### Example

The updated `pg-eu` cluster will have this configuration instead of the
previous `backup` section:

```yaml
plugins:
  - name: barman-cloud.cloudnative-pg.io
    isWALArchiver: true
    parameters:
      barmanObjectName: minio-eu
```

---

## Step 3: Update the `ScheduledBackup`

After switching the `Cluster` to use the plugin, update your `ScheduledBackup`
resources to match.

Set the backup `method` to `plugin` and reference the plugin name via
`pluginConfiguration`, as shown in ["Performing a base backup"](usage.md#performing-a-base-backup).

### Example

Original in-tree `ScheduledBackup`:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: pg-eu-backup
spec:
  cluster:
    name: pg-eu
  schedule: '0 0 0 * * *'
  backupOwnerReference: self
```

Updated version using the plugin:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: pg-eu-backup
spec:
  cluster:
    name: pg-eu
  schedule: '0 0 0 * * *'
  backupOwnerReference: self
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```

---

## Step 4: Update the `externalClusters` configuration

If your `Cluster` relies on one or more external clusters that use the in-tree
Barman Cloud integration, you need to update those configurations to use the
plugin-based architecture.

When a replica cluster fetches WAL files or base backups from an external
source that used the built-in backup method, follow these steps:

1. Create a corresponding `ObjectStore` resource for the external cluster, as
   shown in [Step 1](#step-1-define-the-objectstore)
2. Update the `externalClusters` section of your replica cluster to use the
   plugin instead of the in-tree `barmanObjectStore` field

### Example

Consider the original configuration using in-tree Barman Cloud:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-us
spec:
  # [...]
  externalClusters:
    - name: pg-eu
      barmanObjectStore:
        destinationPath: s3://backups/
        endpointURL: http://minio-eu:9000
        serverName: pg-eu
        s3Credentials:
          accessKeyId:
            name: minio-eu
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: minio-eu
            key: ACCESS_SECRET_KEY
        wal:
          compression: gzip
```

Create the `ObjectStore` resource for the external cluster:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-eu
spec:
  configuration:
    destinationPath: s3://backups/
    endpointURL: http://minio-eu:9000
    s3Credentials:
      accessKeyId:
        name: minio-eu
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: minio-eu
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
```

Update the external cluster configuration to use the plugin:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-us
spec:
  # [...]
  externalClusters:
    - name: pg-eu
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: minio-eu
          serverName: pg-eu
```

## Step 5: Verify your metrics

When migrating from the in-tree solution to the plugin-based approach, you need
to monitor a different set of metrics, as described in the
["Observability"](observability.md) section.

The table below summarizes the name changes between the old in-tree metrics and
the new plugin-based ones:

| Old metric name | New metric name |
| ------------------------------------------------ | ---------------------------------------------------------------- |
| `cnpg_collector_last_failed_backup_timestamp` | `barman_cloud_cloudnative_pg_io_last_failed_backup_timestamp` |
| `cnpg_collector_last_available_backup_timestamp` | `barman_cloud_cloudnative_pg_io_last_available_backup_timestamp` |
| `cnpg_collector_first_recoverability_point` | `barman_cloud_cloudnative_pg_io_first_recoverability_point` |

web/versioned_docs/version-0.11.0/misc.md

---
sidebar_position: 90
---

# Miscellaneous

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

## Backup Object Tagging

You can attach key-value metadata tags to backup artifacts—such as base
backups, WAL files, and history files—via the `.spec.configuration` section of
the `ObjectStore` resource.

- `tags`: applied to base backups and WAL files
- `historyTags`: applied to history files only

### Example

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: my-store
spec:
  configuration:
    [...]
    tags:
      backupRetentionPolicy: "expire"
    historyTags:
      backupRetentionPolicy: "keep"
    [...]
```

## Extra Options for Backup and WAL Archiving

You can pass additional command-line arguments to `barman-cloud-backup` and
`barman-cloud-wal-archive` using the `additionalCommandArgs` fields in the
`ObjectStore` configuration:

- `.spec.configuration.data.additionalCommandArgs`: for `barman-cloud-backup`
- `.spec.configuration.wal.archiveAdditionalCommandArgs`: for `barman-cloud-wal-archive`

Each field accepts a list of string arguments. If an argument is already
configured elsewhere in the plugin, the duplicate will be ignored.

### Example: Extra Backup Options

```yaml
kind: ObjectStore
metadata:
  name: my-store
spec:
  configuration:
    data:
      additionalCommandArgs:
        - "--min-chunk-size=5MB"
        - "--read-timeout=60"
```

### Example: Extra WAL Archive Options

```yaml
kind: ObjectStore
metadata:
  name: my-store
spec:
  configuration:
    wal:
      archiveAdditionalCommandArgs:
        - "--max-concurrency=1"
        - "--read-timeout=60"
```

For a complete list of supported options, refer to the
[official Barman Cloud documentation](https://docs.pgbarman.org/release/latest/).

## Enable the pprof debug server for the sidecar

You can enable the instance sidecar's pprof debug HTTP server by adding the
`--pprof-server=<address>` flag to the container's arguments via
`.spec.instanceSidecarConfiguration.additionalContainerArgs`.

Pass a bind address in the form `<host>:<port>` (for example, `0.0.0.0:6061`).
An empty value disables the server, which is the default.

### Example

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: my-store
spec:
  instanceSidecarConfiguration:
    additionalContainerArgs:
      - "--pprof-server=0.0.0.0:6061"
```

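Once enabled, you can reach the pprof endpoints by port-forwarding to an
instance pod. A sketch, assuming the flag shown above and a pod named
`pg-eu-1` (both placeholders for your own setup):

```shell
# Forward the pprof port of an instance pod to localhost
kubectl port-forward pod/pg-eu-1 6061:6061 &

# The standard Go pprof endpoints are then reachable, e.g.:
curl -s http://localhost:6061/debug/pprof/ | head
go tool pprof http://localhost:6061/debug/pprof/heap
```
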
web/versioned_docs/version-0.11.0/object_stores.md

---
sidebar_position: 50
---

# Object Store Providers

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

The Barman Cloud Plugin enables the storage of PostgreSQL cluster backup files
in any object storage service supported by the
[Barman Cloud infrastructure](https://docs.pgbarman.org/release/latest/).

Currently, Barman Cloud supports the following providers:

- [Amazon S3](#aws-s3)
- [Microsoft Azure Blob Storage](#azure-blob-storage)
- [Google Cloud Storage](#google-cloud-storage)

You may also use any S3- or Azure-compatible implementation of the above
services.

To configure object storage with Barman Cloud, you must define an
[`ObjectStore` object](plugin-barman-cloud.v1.md#objectstore), which
establishes the connection between your PostgreSQL cluster and the object
storage backend.

Configuration details — particularly around authentication — will vary
depending on the specific object storage provider you are using.
The following sections detail the setup for each.

:::note Authentication Methods
The Barman Cloud Plugin does not independently test all authentication methods
supported by `barman-cloud`. The plugin's responsibility is limited to passing
the provided credentials to `barman-cloud`, which then handles authentication
according to its own implementation. Users should refer to the
[Barman Cloud documentation](https://docs.pgbarman.org/release/latest/) to
verify that their chosen authentication method is supported and properly
configured.
:::

---

## AWS S3

[AWS Simple Storage Service (S3)](https://aws.amazon.com/s3/) is one of the
most widely adopted object storage solutions.

The Barman Cloud plugin for CloudNativePG integrates with S3 through two
primary authentication mechanisms:

- [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) —
  recommended for clusters running on EKS
- Access keys — using `ACCESS_KEY_ID` and `ACCESS_SECRET_KEY` credentials

### Access Keys

To authenticate using access keys, you’ll need:

- `ACCESS_KEY_ID`: the public key used to authenticate to S3
- `ACCESS_SECRET_KEY`: the corresponding secret key
- `ACCESS_SESSION_TOKEN`: (optional) a temporary session token, if required

These credentials must be stored securely in a Kubernetes secret:

```sh
kubectl create secret generic aws-creds \
  --from-literal=ACCESS_KEY_ID=<access key here> \
  --from-literal=ACCESS_SECRET_KEY=<secret key here>
  # --from-literal=ACCESS_SESSION_TOKEN=<session token here> # if required
```

The credentials will be encrypted at rest if your Kubernetes environment
supports it.

You can then reference the secret in your `ObjectStore` definition:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: aws-store
spec:
  configuration:
    destinationPath: "s3://BUCKET_NAME/path/to/folder"
    s3Credentials:
      accessKeyId:
        name: aws-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
    [...]
```

### IAM Role for Service Account (IRSA)

To use IRSA with EKS, configure the service account of the PostgreSQL cluster
with the appropriate annotation:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  [...]
spec:
  serviceAccountTemplate:
    metadata:
      annotations:
        eks.amazonaws.com/role-arn: arn:[...]
    [...]
```

### S3 Lifecycle Policy

Barman Cloud uploads backup files to S3 but does not modify them afterward.
To enhance data durability and protect against accidental or malicious loss,
it's recommended to implement the following best practices:

- Enable object versioning
- Enable object locking to prevent objects from being deleted or overwritten
  for a defined period or indefinitely (this provides an additional layer of
  protection against accidental deletion and ransomware attacks)
- Set lifecycle rules to expire current versions a few days after your Barman
  retention window
- Expire non-current versions after a longer period

These strategies help you safeguard backups without requiring broad delete
permissions, ensuring both security and compliance with minimal operational
overhead.

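The versioning and lifecycle bullets above can be sketched with the AWS CLI.
The bucket name and the expiration windows are placeholders; size the windows
so they exceed your Barman retention policy:

```shell
# Enable object versioning on the backup bucket
aws s3api put-bucket-versioning \
  --bucket my-backup-bucket \
  --versioning-configuration Status=Enabled

# Expire current versions 40 days after creation and purge
# non-current versions after a longer 180-day window
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 40},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 180}
    }]
  }'
```
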
### S3-Compatible Storage Providers

You can use S3-compatible services like **MinIO**, **Linode (Akamai) Object
Storage**, or **DigitalOcean Spaces** by specifying a custom `endpointURL`.

Example with Linode (Akamai) Object Storage (`us-east1`):

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: linode-store
spec:
  configuration:
    destinationPath: "s3://BUCKET_NAME/"
    endpointURL: "https://us-east1.linodeobjects.com"
    s3Credentials:
      [...]
  [...]
```

Recent changes to the [boto3 implementation](https://github.com/boto/boto3/issues/4392)
of [Amazon S3 Data Integrity Protections](https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html)
may lead to an `x-amz-content-sha256` error when using the Barman Cloud
Plugin.

If you encounter this issue (see [GitHub issue #393](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/393)),
you can work around it by setting the following environment variables in the
`ObjectStore` resource:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: linode-store
spec:
  instanceSidecarConfiguration:
    env:
      - name: AWS_REQUEST_CHECKSUM_CALCULATION
        value: when_required
      - name: AWS_RESPONSE_CHECKSUM_VALIDATION
        value: when_required
  [...]
```

These settings ensure that checksum calculations and validations are only
applied when explicitly required, avoiding compatibility issues with certain
S3-compatible storage providers.

Example with DigitalOcean Spaces (SFO3, path-style):

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: digitalocean-store
spec:
  configuration:
    destinationPath: "s3://BUCKET_NAME/path/to/folder"
    endpointURL: "https://sfo3.digitaloceanspaces.com"
    s3Credentials:
      [...]
  [...]
```

### Using Object Storage with a Private CA

For object storage services (e.g., MinIO) that use HTTPS with certificates
signed by a private CA, set the `endpointCA` field in the `ObjectStore`
definition. Unless you already have one, create a Kubernetes `Secret` with the
CA bundle:

```sh
kubectl create secret generic my-ca-secret --from-file=ca.crt
```

Then reference it:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-store
spec:
  configuration:
    endpointURL: <myEndpointURL>
    endpointCA:
      name: my-ca-secret
      key: ca.crt
    [...]
```

<!-- TODO: does this also apply to the plugin? -->
:::note
If you want `ConfigMaps` and `Secrets` to be **automatically** reloaded by
instances, you can add a label with the key `cnpg.io/reload` to the
`Secrets`/`ConfigMaps`. Otherwise, you will have to reload the instances using
the `kubectl cnpg reload` subcommand.
:::

---

## Azure Blob Storage

[Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
is Microsoft’s cloud-based object storage solution.

Barman Cloud supports the following authentication methods:

- [Connection String](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string)
- Storage Account Name + [Storage Account Access Key](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
- Storage Account Name + [Storage Account SAS Token](https://learn.microsoft.com/en-us/azure/storage/blobs/sas-service-create)
- [Azure AD Managed Identity](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview)
- [Default Azure Credentials](https://learn.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet)

### Azure AD Managed Identity

This method avoids storing credentials in Kubernetes by relying on the
[Azure Managed Identities](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview)
authentication mechanism. Enable it by setting the `inheritFromAzureAD` option
to `true`. A managed identity can be configured for the AKS cluster by
following the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/use-managed-identity?pivots=system-assigned).

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: azure-store
spec:
  configuration:
    destinationPath: "<destination path here>"
    azureCredentials:
      inheritFromAzureAD: true
    [...]
```

### Default Azure Credentials

The `useDefaultAzureCredentials` option enables the default Azure credentials
flow, which uses [`DefaultAzureCredential`](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential)
to automatically discover and use available credentials in the following order:

1. **Environment Variables** — `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, and
   `AZURE_TENANT_ID` for Service Principal authentication
2. **Managed Identity** — uses the managed identity assigned to the pod
3. **Azure CLI** — uses credentials from the Azure CLI, if available
4. **Azure PowerShell** — uses credentials from Azure PowerShell, if available

This approach is particularly useful for getting started with development and
testing, as it allows the SDK to attempt multiple authentication mechanisms
seamlessly across different environments. However, it is not recommended for
production. Please refer to the
[official Azure guidance](https://learn.microsoft.com/en-us/dotnet/azure/sdk/authentication/credential-chains?tabs=dac#usage-guidance-for-defaultazurecredential)
for a comprehensive understanding of `DefaultAzureCredential`.

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: azure-store
spec:
  configuration:
    destinationPath: "<destination path here>"
    azureCredentials:
      useDefaultAzureCredentials: true
    [...]
```

### Access Key, SAS Token, or Connection String

Store the credentials in a Kubernetes secret:

```sh
kubectl create secret generic azure-creds \
  --from-literal=AZURE_STORAGE_ACCOUNT=<storage account name> \
  --from-literal=AZURE_STORAGE_KEY=<storage account key> \
  --from-literal=AZURE_STORAGE_SAS_TOKEN=<SAS token> \
  --from-literal=AZURE_STORAGE_CONNECTION_STRING=<connection string>
```

Then reference only the keys required by your chosen authentication method in
your `ObjectStore`:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: azure-store
spec:
  configuration:
    destinationPath: "<destination path here>"
    azureCredentials:
      connectionString:
        name: azure-creds
        key: AZURE_STORAGE_CONNECTION_STRING
      storageAccount:
        name: azure-creds
        key: AZURE_STORAGE_ACCOUNT
      storageKey:
        name: azure-creds
        key: AZURE_STORAGE_KEY
      storageSasToken:
        name: azure-creds
        key: AZURE_STORAGE_SAS_TOKEN
    [...]
```

For Azure Blob Storage, the destination path format is:

```
<http|https>://<account-name>.<service-name>.core.windows.net/<container>/<blob>
```

### Azure-Compatible Providers

If you're using a different implementation (e.g., Azurite or another
emulator), the destination path format is:

```
<http|https>://<local-machine-address>:<port>/<account-name>/<container>/<blob>
```

---

## Google Cloud Storage

[Google Cloud Storage](https://cloud.google.com/storage/) is supported with two
authentication modes:

- **GKE Workload Identity** (recommended inside Google Kubernetes Engine)
- **Service Account JSON key** via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable

### GKE Workload Identity

Use [Workload Identity authentication](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
when running in GKE:

1. Set `googleCredentials.gkeEnvironment` to `true` in the `ObjectStore`
   resource
2. Annotate the `serviceAccountTemplate` in the `Cluster` resource with the GCP
   service account

For example, in the `ObjectStore` resource:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: google-store
spec:
  configuration:
    destinationPath: "gs://<bucket>/<folder>"
    googleCredentials:
      gkeEnvironment: true
```

And in the `Cluster` resource:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
spec:
  serviceAccountTemplate:
    metadata:
      annotations:
        iam.gke.io/gcp-service-account: [...].iam.gserviceaccount.com
```

### Service Account JSON Key

Follow Google’s [authentication setup](https://cloud.google.com/docs/authentication/getting-started),
then store the JSON key in a Kubernetes secret:

```sh
kubectl create secret generic backup-creds --from-file=gcsCredentials=gcs_credentials_file.json
```

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: google-store
spec:
  configuration:
    destinationPath: "gs://<bucket>/<folder>"
    googleCredentials:
      applicationCredentials:
        name: backup-creds
        key: gcsCredentials
    [...]
```

:::important
This authentication method generates a JSON file within the container
with all the credentials required to access your Google Cloud Storage
bucket. As a result, if someone gains access to the `Pod`, they will also have
write permissions to the bucket.
:::

---

## MinIO Object Store

In order to use the Tenant resource, you first need to deploy the
[MinIO operator](https://docs.min.io/community/minio-object-store/operations/deployments/installation.html).
For the latest MinIO documentation, please refer to the
[MinIO official documentation](https://docs.min.io/community/minio-object-store/).

MinIO Object Store's API is compatible with S3, and the default configuration
of the Tenant will create these services:

- `<tenant>-console` on port 9443 (with autocert) or 9090 (without autocert)
- `<tenant>-hl` on port 9000

Where `<tenant>` is the `metadata.name` you assigned to your Tenant resource.

:::note
The `<tenant>-console` service will only be available if you have enabled the
[MinIO Console](https://docs.min.io/community/minio-object-store/administration/minio-console.html).
:::

For example, the following Tenant:

```yml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: cnpg-backups
spec:
  [...]
```

would have services called `cnpg-backups-console` and `cnpg-backups-hl`,
respectively.

The `console` service is for managing the tenant, while the `hl` service
exposes the S3-compatible API. If your tenant is configured with
`requestAutoCert`, you will communicate with these services over HTTPS;
otherwise, you will use HTTP.

For authentication, you can use your username and password, or create an
access key. Whichever method you choose, it has to be stored as a secret:

```sh
kubectl create secret generic minio-creds \
  --from-literal=MINIO_ACCESS_KEY=<minio access key or username> \
  --from-literal=MINIO_SECRET_KEY=<minio secret key or password>
```

Finally, create the Barman `ObjectStore`:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-store
spec:
  configuration:
    destinationPath: s3://BUCKET_NAME/
    endpointURL: http://<tenant>-hl:9000
    s3Credentials:
      accessKeyId:
        name: minio-creds
        key: MINIO_ACCESS_KEY
      secretAccessKey:
        name: minio-creds
        key: MINIO_SECRET_KEY
    [...]
```

:::important
Verify the presence of archived WAL files on `s3://BUCKET_NAME/` before
proceeding with a backup.
:::
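One way to perform this check is with the MinIO client, reusing the endpoint
and credentials shown above (the alias name, placeholders, and the assumption
that the archive folder matches the cluster name are illustrative):

```shell
# Point the MinIO client at the tenant's S3 endpoint
mc alias set cnpg http://<tenant>-hl:9000 <minio access key> <minio secret key>

# List archived WAL segments for the cluster; Barman Cloud stores
# them under <serverName>/wals/ inside the bucket
mc ls --recursive cnpg/BUCKET_NAME/<cluster-name>/wals/
```
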

---

web/versioned_docs/version-0.11.0/observability.md

---
sidebar_position: 55
---

# Observability

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

The Barman Cloud Plugin exposes the following metrics through the native
Prometheus exporter of the instance manager:

- `barman_cloud_cloudnative_pg_io_last_failed_backup_timestamp`:
  the UNIX timestamp of the most recent failed backup.

- `barman_cloud_cloudnative_pg_io_last_available_backup_timestamp`:
  the UNIX timestamp of the most recent successfully available backup.

- `barman_cloud_cloudnative_pg_io_first_recoverability_point`:
  the UNIX timestamp representing the earliest point in time from which the
  cluster can be recovered.

These metrics supersede the previously available in-core metrics that used the
`cnpg_collector` prefix. The new metrics are exposed under the
`barman_cloud_cloudnative_pg_io` prefix instead.
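
These timestamps can drive alerting. As an illustration, the following is a
hedged sketch of a Prometheus alerting rule (not shipped with the plugin; the
group name, alert name, and 24-hour threshold are arbitrary choices) that fires
when the latest available backup is too old:

```yaml
groups:
  - name: barman-cloud-backups
    rules:
      - alert: BarmanCloudBackupTooOld
        # Fires when no successful backup is newer than 24 hours (86400 s)
        expr: time() - barman_cloud_cloudnative_pg_io_last_available_backup_timestamp > 86400
        for: 15m
        labels:
          severity: warning
```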
19 web/versioned_docs/version-0.11.0/parameters.md Normal file
@ -0,0 +1,19 @@
---
sidebar_position: 100
---

# Parameters

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

The following parameters are available for the Barman Cloud Plugin:

- `barmanObjectName`: references the `ObjectStore` resource to be used by the
  plugin.
- `serverName`: specifies the server name in the object store.

:::important
The `serverName` parameter in the `ObjectStore` resource is retained solely for
API compatibility with the in-tree `barmanObjectStore` and must always be left empty.
When needed, use the `serverName` plugin parameter in the Cluster configuration instead.
:::
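
For reference, a `Cluster` passes these parameters through its `plugins`
section. The following is a sketch: `minio-store`, the cluster name, and the
storage size are placeholders to adapt to your environment.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      parameters:
        barmanObjectName: minio-store
        # Optional: typically defaults to the cluster name when omitted
        serverName: cluster-example
  storage:
    size: 1Gi
```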
108 web/versioned_docs/version-0.11.0/plugin-barman-cloud.v1.md Normal file
@ -0,0 +1,108 @@
# API Reference

## Packages
- [barmancloud.cnpg.io/v1](#barmancloudcnpgiov1)


## barmancloud.cnpg.io/v1

Package v1 contains API Schema definitions for the barmancloud v1 API group

### Resource Types
- [ObjectStore](#objectstore)


#### InstanceSidecarConfiguration

InstanceSidecarConfiguration defines the configuration for the sidecar that runs in the instance pods.

_Appears in:_
- [ObjectStoreSpec](#objectstorespec)

| Field | Description | Required | Default | Validation |
| --- | --- | --- | --- | --- |
| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | The environment to be explicitly passed to the sidecar | | | |
| `retentionPolicyIntervalSeconds` _integer_ | The retentionCheckInterval defines the frequency at which the<br />system checks and enforces retention policies. | | 1800 | |
| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Resources define cpu/memory requests and limits for the sidecar that runs in the instance pods. | | | |
| `additionalContainerArgs` _string array_ | AdditionalContainerArgs is an optional list of command-line arguments<br />to be passed to the sidecar container when it starts.<br />The provided arguments are appended to the container’s default arguments. | | | |
| `logLevel` _string_ | The log level for PostgreSQL instances. Valid values are: `error`, `warning`, `info` (default), `debug`, `trace` | | info | Enum: [error warning info debug trace] <br /> |


#### ObjectStore

ObjectStore is the Schema for the objectstores API.

| Field | Description | Required | Default | Validation |
| --- | --- | --- | --- | --- |
| `apiVersion` _string_ | `barmancloud.cnpg.io/v1` | True | | |
| `kind` _string_ | `ObjectStore` | True | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | True | | |
| `spec` _[ObjectStoreSpec](#objectstorespec)_ | Specification of the desired behavior of the ObjectStore.<br />More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | True | | |
| `status` _[ObjectStoreStatus](#objectstorestatus)_ | Most recently observed status of the ObjectStore. This data may not be up to<br />date. Populated by the system. Read-only.<br />More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | | |


#### ObjectStoreSpec

ObjectStoreSpec defines the desired state of ObjectStore.

_Appears in:_
- [ObjectStore](#objectstore)

| Field | Description | Required | Default | Validation |
| --- | --- | --- | --- | --- |
| `configuration` _[BarmanObjectStoreConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration)_ | The configuration for the barman-cloud tool suite | True | | |
| `retentionPolicy` _string_ | RetentionPolicy is the retention policy to be used for backups<br />and WALs (i.e. '60d'). The retention policy is expressed in the form<br />of `XXu` where `XX` is a positive integer and `u` is in `[dwm]` -<br />days, weeks, months. | | | Pattern: `^[1-9][0-9]*[dwm]$` <br /> |
| `instanceSidecarConfiguration` _[InstanceSidecarConfiguration](#instancesidecarconfiguration)_ | The configuration for the sidecar that runs in the instance pods | | | |
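
Taken together, the fields above can be combined as follows. This is a sketch:
the endpoint, bucket, and secret names are placeholders, and the
`retentionPolicyIntervalSeconds` value is illustrative.

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: example-store
spec:
  configuration:
    destinationPath: s3://backups/
    endpointURL: https://s3.example.com
    s3Credentials:
      accessKeyId:
        name: s3-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: s3-creds
        key: SECRET_ACCESS_KEY
  retentionPolicy: "30d"
  instanceSidecarConfiguration:
    retentionPolicyIntervalSeconds: 3600
    logLevel: info
```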

#### ObjectStoreStatus

ObjectStoreStatus defines the observed state of ObjectStore.

_Appears in:_
- [ObjectStore](#objectstore)

| Field | Description | Required | Default | Validation |
| --- | --- | --- | --- | --- |
| `serverRecoveryWindow` _object (keys:string, values:[RecoveryWindow](#recoverywindow))_ | ServerRecoveryWindow maps each server to its recovery window | True | | |


#### RecoveryWindow

RecoveryWindow represents the time span between the first
recoverability point and the last successful backup of a PostgreSQL
server, defining the period during which data can be restored.

_Appears in:_
- [ObjectStoreStatus](#objectstorestatus)

| Field | Description | Required | Default | Validation |
| --- | --- | --- | --- | --- |
| `firstRecoverabilityPoint` _[Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#time-v1-meta)_ | The first recoverability point in a PostgreSQL server refers to<br />the earliest point in time to which the database can be<br />restored. | True | | |
| `lastSuccessfulBackupTime` _[Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#time-v1-meta)_ | The last successful backup time | True | | |
| `lastFailedBackupTime` _[Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#time-v1-meta)_ | The last failed backup time | True | | |
219 web/versioned_docs/version-0.11.0/resource-name-migration.md Normal file
@ -0,0 +1,219 @@
---
sidebar_position: 90
---

# Resource name migration guide

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

:::warning
Before proceeding with the migration process, please:
1. **Read this guide in its entirety** to understand what changes will be made
2. **Test in a non-production environment** first if possible
3. **Ensure you have proper backups** of your cluster configuration

This migration will delete old RBAC resources only after the
`plugin-barman-cloud` upgrade. While the operation is designed to be safe, you
should review and understand the changes before proceeding. The maintainers of
this project are not responsible for any issues that may arise during
migration.

**Note:** This guide assumes you are using the default `cnpg-system` namespace.
:::

## Overview

Starting from version **0.8.0**, the `plugin-barman-cloud` deployment manifests
use more specific, prefixed resource names to avoid conflicts with other
components deployed in the same Kubernetes cluster.

## What Changed

The following resources have been renamed to use proper prefixes.

### Cluster-scoped Resources

| Old Name                   | New Name                                 |
|----------------------------|------------------------------------------|
| `metrics-auth-role`        | `barman-plugin-metrics-auth-role`        |
| `metrics-auth-rolebinding` | `barman-plugin-metrics-auth-rolebinding` |
| `metrics-reader`           | `barman-plugin-metrics-reader`           |
| `objectstore-viewer-role`  | `barman-plugin-objectstore-viewer-role`  |
| `objectstore-editor-role`  | `barman-plugin-objectstore-editor-role`  |

### Namespace-scoped Resources

| Old Name                      | New Name                                    | Namespace     |
|-------------------------------|---------------------------------------------|---------------|
| `leader-election-role`        | `barman-plugin-leader-election-role`        | `cnpg-system` |
| `leader-election-rolebinding` | `barman-plugin-leader-election-rolebinding` | `cnpg-system` |

## Why This Change?

Using generic names for cluster-wide resources is discouraged as they may
conflict with other components deployed in the same cluster. The new names make
it clear that these resources belong to the Barman Cloud plugin and help avoid
naming collisions.

## Migration Instructions

This three-step migration process is straightforward and can be completed with
a few `kubectl` commands.

### Step 1: Upgrade plugin-barman-cloud

Please refer to the [Installation](installation.mdx) section to deploy the new
`plugin-barman-cloud` release.

### Step 2: Delete Old Cluster-scoped Resources

:::danger Verify Resources Before Deletion
**IMPORTANT**: The old resource names are generic and could potentially belong
to other components in your cluster.

**Before deleting each resource, verify it belongs to the Barman Cloud plugin
by checking:**
- For `objectstore-*` roles: Look for `barmancloud.cnpg.io` in the API groups
- For `metrics-*` roles: Check if they reference the `plugin-barman-cloud`
  ServiceAccount in `cnpg-system` namespace
- For other roles: Look for labels like `app.kubernetes.io/name: plugin-barman-cloud`

If a resource doesn't have these indicators, **DO NOT DELETE IT** as it may
belong to another application.

Carefully review the output of each verification command before proceeding with
the `delete`.
:::

:::tip Dry Run First
You can add `--dry-run=client` to any `kubectl delete` command to preview what
would be deleted without actually removing anything.
:::

**Only proceed if you've verified these resources belong to the Barman Cloud
plugin (see warning above).**

For each resource below, first verify it belongs to Barman Cloud, then delete
it:

```bash
# 1. Check metrics-auth-rolebinding FIRST (we'll check the role after)
# Look for references to plugin-barman-cloud ServiceAccount
kubectl describe clusterrolebinding metrics-auth-rolebinding
# If it references plugin-barman-cloud ServiceAccount in cnpg-system namespace,
# delete it:
kubectl delete clusterrolebinding metrics-auth-rolebinding

# 2. Check metrics-auth-role
# Look for references to authentication.k8s.io and authorization.k8s.io
kubectl describe clusterrole metrics-auth-role
# Verify it's not being used by any other rolebindings:
kubectl get clusterrolebinding -o json \
  | jq -r '.items[] | select(.roleRef.name=="metrics-auth-role") | .metadata.name'
# If the above returns nothing (role is not in use) and the role looks like the
# Barman Cloud one, delete it (see warnings section):
kubectl delete clusterrole metrics-auth-role

# 3. Check objectstore-viewer-role
# Look for the barmancloud.cnpg.io API group or
# the `app.kubernetes.io/name: plugin-barman-cloud` label
kubectl describe clusterrole objectstore-viewer-role
# If it shows barmancloud.cnpg.io in API groups, delete it:
kubectl delete clusterrole objectstore-viewer-role

# 4. Check objectstore-editor-role
# Look for the barmancloud.cnpg.io API group or
# the `app.kubernetes.io/name: plugin-barman-cloud` label
kubectl describe clusterrole objectstore-editor-role
# If it shows barmancloud.cnpg.io in API groups, delete it:
kubectl delete clusterrole objectstore-editor-role

# 5. Check metrics-reader (MOST DANGEROUS - very generic name)
# First, check if it's being used by any rolebindings OTHER than barman's:
kubectl get clusterrolebinding -o json \
  | jq -r '.items[] | select(.roleRef.name=="metrics-reader")
      | "\(.metadata.name) -> \(.subjects[0].name) in \(.subjects[0].namespace)"'
# If this shows ANY rolebindings, review them carefully. Only proceed if
# they're all Barman-related. Then check the role itself:
kubectl describe clusterrole metrics-reader
# If it ONLY has nonResourceURLs: /metrics and NO other rolebindings use it,
# delete it:
kubectl delete clusterrole metrics-reader
```
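
You can exercise the same `jq` filter locally against a hand-written sample of
`kubectl get clusterrolebinding -o json` output to confirm what it would match.
This is a sketch: the binding and subject names below are made up, and it only
requires `jq` to be installed.

```shell
# Run the filter used above against a minimal fabricated sample
sample='{"items":[
  {"metadata":{"name":"barman-metrics"},"roleRef":{"name":"metrics-reader"},
   "subjects":[{"name":"plugin-barman-cloud","namespace":"cnpg-system"}]},
  {"metadata":{"name":"other-binding"},"roleRef":{"name":"some-other-role"},
   "subjects":[{"name":"someone-else","namespace":"default"}]}
]}'
printf '%s' "$sample" \
  | jq -r '.items[] | select(.roleRef.name=="metrics-reader")
      | "\(.metadata.name) -> \(.subjects[0].name) in \(.subjects[0].namespace)"'
# Prints: barman-metrics -> plugin-barman-cloud in cnpg-system
```

Only the binding whose `roleRef` is `metrics-reader` is printed, which is
exactly the review list you need before deciding whether the role is safe to
delete.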

:::warning
The `metrics-reader` role is particularly dangerous to delete blindly. Many
monitoring systems use this exact name. Only delete it if:

1. You've verified it ONLY grants access to `/metrics`
2. No other rolebindings reference it (checked with the jq command above)
3. You're certain it was created by the Barman Cloud plugin

If you're unsure, it's safer to leave it and let the new
`barman-plugin-metrics-reader` role coexist with it.
:::

If any resource is not found during the `describe` command, that's okay - it
means it was never created or already deleted. Simply skip the delete command
for that resource.

### Step 3: Delete Old Namespace-scoped Resources

Delete the old namespace-scoped resources in the `cnpg-system` namespace:

```bash
# Delete the old leader-election resources
kubectl delete role leader-election-role -n cnpg-system
kubectl delete rolebinding leader-election-rolebinding -n cnpg-system
```

If any resource is not found, that's okay - it means it was never created or
already deleted.

## Impact

- **Permissions:** If you have custom RBAC rules or tools that reference the
  old resource names, they will need to be updated.
- **External Users:** If end users have been granted the
  `objectstore-viewer-role` or `objectstore-editor-role`, they will need to be
  re-granted the new role names (`barman-plugin-objectstore-viewer-role` and
  `barman-plugin-objectstore-editor-role`).

## Verification

After migration, verify that the new resources are created:

```bash
# Check cluster-scoped resources
kubectl get clusterrole | grep barman
kubectl get clusterrolebinding | grep barman

# Check namespace-scoped resources
kubectl get role,rolebinding -n cnpg-system | grep barman
```

You should see the new prefixed resource names.

## Troubleshooting

### Plugin Not Starting After Migration

If the plugin fails to start after migration, check:

1. **ServiceAccount permissions:** Ensure the `plugin-barman-cloud` ServiceAccount is bound to the new roles:

   ```bash
   kubectl get clusterrolebinding barman-plugin-metrics-auth-rolebinding -o yaml
   kubectl get rolebinding barman-plugin-leader-election-rolebinding -n cnpg-system -o yaml
   ```

2. **Role references:** Verify that the rolebindings reference the correct role names:

   ```bash
   kubectl describe rolebinding barman-plugin-leader-election-rolebinding -n cnpg-system
   kubectl describe clusterrolebinding barman-plugin-metrics-auth-rolebinding
   ```

## Support

If you encounter issues during migration, please open an issue on the [GitHub
repository](https://github.com/cloudnative-pg/plugin-barman-cloud/issues).
38 web/versioned_docs/version-0.11.0/retention.md Normal file
@ -0,0 +1,38 @@
---
sidebar_position: 60
---

# Retention Policies

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

The Barman Cloud Plugin supports **automated cleanup of obsolete backups** via
retention policies, configured in the `.spec.retentionPolicy` field of the
`ObjectStore` resource.

:::note
This feature uses the `barman-cloud-backup-delete` command with the
`--retention-policy "RECOVERY WINDOW OF {{ value }} {{ unit }}"` syntax.
:::

#### Example: 30-Day Retention Policy

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: my-store
spec:
  [...]
  retentionPolicy: "30d"
```

:::note
A **recovery window retention policy** ensures the cluster can be restored to
any point in time between the calculated *Point of Recoverability* (PoR) and
the latest WAL archive. The PoR is defined as `current time - recovery window`.
The **first valid backup** is the most recent backup completed before the PoR.
Backups older than that are marked as *obsolete* and deleted after the next
backup completes.
:::
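
The `retentionPolicy` value must match the pattern `^[1-9][0-9]*[dwm]$` (a
positive integer followed by a unit: days, weeks, or months). As a quick local
sanity check, a sketch using plain `grep` rather than anything shipped with the
plugin:

```shell
# Check candidate retention policies against the documented pattern
for p in 30d 4w 12m 0d 30; do
  if printf '%s\n' "$p" | grep -Eq '^[1-9][0-9]*[dwm]$'; then
    echo "$p: valid"
  else
    echo "$p: invalid"
  fi
done
# Prints: 30d, 4w, and 12m as valid; 0d and 30 as invalid
```

Note that `0d` is rejected (the integer must start with a non-zero digit) and a
bare number without a unit is rejected as well.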

591 web/versioned_docs/version-0.11.0/troubleshooting.md Normal file
@ -0,0 +1,591 @@
---
sidebar_position: 90
---

# Troubleshooting

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

This guide helps you diagnose and resolve common issues with the Barman Cloud
plugin.

:::important
We are continuously improving the integration between CloudNativePG and the
Barman Cloud plugin as it moves toward greater stability and maturity. For this
reason, we recommend using the latest available version of both components.
See the [*Requirements* section](intro.md#requirements) for details.
:::

:::note
The following commands assume you installed the CloudNativePG operator in
the default `cnpg-system` namespace. If you installed it in a different
namespace, adjust the commands accordingly.
:::

## Viewing Logs

To troubleshoot effectively, you’ll often need to review logs from multiple
sources:

```sh
# View operator logs (includes plugin interaction logs)
kubectl logs -n cnpg-system deployment/cnpg-controller-manager -f

# View plugin manager logs
kubectl logs -n cnpg-system deployment/barman-cloud -f

# View sidecar container logs (Barman Cloud operations)
kubectl logs -n <namespace> <cluster-pod-name> -c plugin-barman-cloud -f

# View all containers in a pod
kubectl logs -n <namespace> <cluster-pod-name> --all-containers=true

# View previous container logs (if container restarted)
kubectl logs -n <namespace> <cluster-pod-name> -c plugin-barman-cloud --previous
```

## Common Issues

### Plugin Installation Issues

#### Plugin pods not starting

**Symptoms:**

- Plugin pods stuck in `CrashLoopBackOff` or `Error`
- Plugin deployment not ready

**Possible causes and solutions:**

1. **Certificate issues**

   ```sh
   # Check if cert-manager is installed and running
   kubectl get pods -n cert-manager

   # Check if the plugin certificate is created
   kubectl get certificates -n cnpg-system
   ```

   If cert-manager is not installed, install it first:

   ```sh
   # Note: other installation methods for cert-manager are available
   kubectl apply -f \
     https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
   ```

   If you are using your own certificates without cert-manager, you will need
   to verify the entire certificate chain yourself.

2. **Image pull errors**

   ```sh
   # Check pod events for image pull errors
   kubectl describe pod -n cnpg-system -l app=barman-cloud
   ```

   Verify the image exists and you have proper credentials if using a private
   registry.

3. **Resource constraints**

   ```sh
   # Check node resources
   kubectl top nodes
   kubectl describe nodes
   ```

   Make sure your cluster has sufficient CPU and memory resources.

### Backup Failures

#### Quick Backup Troubleshooting Checklist

When a backup fails, follow these steps in order:

1. **Check backup status**:

   ```sh
   kubectl get backups.postgresql.cnpg.io -n <namespace>
   ```
2. **Get error details and target pod**:

   ```sh
   kubectl describe backups.postgresql.cnpg.io \
     -n <namespace> <backup-name>

   kubectl get backups.postgresql.cnpg.io \
     -n <namespace> <backup-name> \
     -o jsonpath='{.status.instanceID.podName}'
   ```
3. **Check the target pod’s sidecar logs**:

   ```sh
   TARGET_POD=$(kubectl get backups.postgresql.cnpg.io \
     -n <namespace> <backup-name> \
     -o jsonpath='{.status.instanceID.podName}')

   kubectl logs \
     -n <namespace> $TARGET_POD -c plugin-barman-cloud \
     --tail=100 | grep -E "ERROR|FATAL|panic"
   ```
4. **Check cluster events**:

   ```sh
   kubectl get events -n <namespace> \
     --field-selector involvedObject.name=<cluster-name> \
     --sort-by='.lastTimestamp'
   ```
5. **Verify plugin is running**:

   ```sh
   kubectl get pods \
     -n cnpg-system -l app=barman-cloud
   ```
6. **Check operator logs**:

   ```sh
   kubectl logs \
     -n cnpg-system deployment/cnpg-controller-manager \
     --tail=100 | grep -i "backup\|plugin"
   ```
7. **Check plugin manager logs**:

   ```sh
   kubectl logs \
     -n cnpg-system deployment/barman-cloud --tail=100
   ```

#### Backup job fails immediately

**Symptoms:**

- Backup pods terminate with error
- No backup files appear in object storage
- Backup shows `failed` phase with various error messages

**Common failure modes and solutions:**

1. **"requested plugin is not available" errors**

   ```
   requested plugin is not available: barman
   requested plugin is not available: barman-cloud
   requested plugin is not available: barman-cloud.cloudnative-pg.io
   ```

   **Cause:** The plugin name in the Cluster configuration doesn’t match the
   deployed plugin, or the plugin isn’t registered.

   **Solution:**

   a. **Check plugin registration:**

   ```sh
   # If you have the `cnpg` plugin installed (v1.27.0+)
   kubectl cnpg status -n <namespace> <cluster-name>
   ```

   Look for the "Plugins status" section:
   ```
   Plugins status
   Name                            Version  Status  Reported Operator Capabilities
   ----                            -------  ------  ------------------------------
   barman-cloud.cloudnative-pg.io  0.6.0    N/A     Reconciler Hooks, Lifecycle Service
   ```

   b. **Verify plugin name in `Cluster` spec**:

   ```yaml
   apiVersion: postgresql.cnpg.io/v1
   kind: Cluster
   spec:
     plugins:
       - name: barman-cloud.cloudnative-pg.io
         parameters:
           barmanObjectName: <your-objectstore-name>
   ```

   c. **Check plugin deployment is running**:

   ```sh
   kubectl get deployment -n cnpg-system barman-cloud
   ```

2. **"rpc error: code = Unknown desc = panic caught: assignment to entry in nil map" errors**

   **Cause:** Misconfiguration in the `ObjectStore` (e.g., typo or missing field).

   **Solution:**

   - Review sidecar logs for details
   - Verify `ObjectStore` configuration and secrets
   - Common issues include:
     - Missing or incorrect secret references
     - Typos in configuration parameters
     - Missing required environment variables in secrets

#### Backup performance issues

**Symptoms:**

- Backups take extremely long
- Backups timeout

**Plugin-specific considerations:**

1. **Check `ObjectStore` parallelism settings**
   - Adjust `maxParallel` in `ObjectStore` configuration
   - Monitor sidecar container resource usage during backups

2. **Verify plugin resource allocation**
   - Check if the sidecar container has sufficient CPU/memory
   - Review plugin container logs for resource-related warnings
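
For example, WAL transfer parallelism and sidecar resources can be tuned in the
`ObjectStore`. The following is a hedged sketch: the field placement follows the
barman-cloud configuration layout (`wal.maxParallel`), and the values are
illustrative, to be sized to your workload.

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: my-store
spec:
  configuration:
    [...]
    wal:
      # Number of WAL files transferred in parallel
      maxParallel: 8
  instanceSidecarConfiguration:
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
```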

:::tip
For Barman-specific features like compression, encryption, and performance
tuning, refer to the [Barman documentation](https://docs.pgbarman.org/latest/).
:::

### WAL Archiving Issues

#### WAL archiving stops

**Symptoms:**

- WAL files accumulate on the primary
- Cluster shows WAL archiving warnings
- Sidecar logs show WAL errors

**Debugging steps:**

1. **Check plugin sidecar logs for WAL archiving errors**
   ```sh
   # Check recent WAL archive operations in sidecar
   kubectl logs -n <namespace> <primary-pod> -c plugin-barman-cloud \
     --tail=50 | grep -i wal
   ```

2. **Check ObjectStore configuration for WAL settings**
   - Ensure ObjectStore has proper WAL retention settings
   - Verify credentials have permissions for WAL operations

### Restore Issues

#### Restore fails during recovery

**Symptoms:**

- New cluster stuck in recovery
- Plugin sidecar shows restore errors
- PostgreSQL won’t start

**Debugging steps:**

1. **Check plugin sidecar logs during restore**

   ```sh
   # Check the sidecar logs on the recovering cluster pods
   kubectl logs -n <namespace> <cluster-pod-name> \
     -c plugin-barman-cloud --tail=100

   # Look for restore-related errors
   kubectl logs -n <namespace> <cluster-pod-name> \
     -c plugin-barman-cloud | grep -E "restore|recovery|ERROR"
   ```

2. **Verify plugin can access backups**

   ```sh
   # Check if `ObjectStore` is properly configured for restore
   kubectl get objectstores.barmancloud.cnpg.io \
     -n <namespace> <objectstore-name> -o yaml

   # Check PostgreSQL recovery logs
   kubectl logs -n <namespace> <cluster-pod> \
     -c postgres | grep -i recovery
   ```

:::tip
For detailed Barman restore operations and troubleshooting, refer to the
[Barman documentation](https://docs.pgbarman.org/latest/barman-cloud-restore.html).
:::

#### Point-in-time recovery (PITR) configuration issues

**Symptoms:**

- PITR doesn’t reach target time
- WAL access errors
- Recovery halts early

**Debugging steps:**

1. **Verify PITR configuration in the `Cluster` spec**

   ```yaml
   apiVersion: postgresql.cnpg.io/v1
   kind: Cluster
   metadata:
     name: <cluster-restore-name>
   spec:
     storage:
       size: 1Gi

     bootstrap:
       recovery:
         source: origin
         recoveryTarget:
           targetTime: "2024-01-15T10:30:00Z"

     externalClusters:
       - name: origin
         plugin:
           enabled: true
           name: barman-cloud.cloudnative-pg.io
           parameters:
             barmanObjectName: <object-store-name>
             serverName: <source-cluster-name>
   ```

2. **Check sidecar logs for WAL-related errors**

   ```sh
   kubectl logs -n <namespace> <cluster-pod> \
     -c plugin-barman-cloud | grep -i wal
   ```

:::note
Timestamps without an explicit timezone suffix
(e.g., `2024-01-15 10:30:00`) are interpreted as UTC.
:::

:::warning
Always specify an explicit timezone in your timestamp to avoid ambiguity.
For example, use `2024-01-15T10:30:00Z` or `2024-01-15T10:30:00+02:00`
instead of `2024-01-15 10:30:00`.
:::
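
If your target time is known in a local timezone, GNU `date` can produce the
explicit UTC form for you. This is a sketch assuming GNU coreutils is
available; the `+02:00` offset is just an example.

```shell
# Convert a target time given with a +02:00 offset into an explicit UTC timestamp
date -u -d "2024-01-15T10:30:00+02:00" +"%Y-%m-%dT%H:%M:%SZ"
# Prints: 2024-01-15T08:30:00Z
```

The printed value can be pasted directly into `recoveryTarget.targetTime`.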
|
||||
|
||||
:::note
|
||||
For detailed PITR configuration and WAL management, see the
|
||||
[Barman PITR documentation](https://docs.pgbarman.org/latest/).
|
||||
:::
|
||||
|
||||
### Plugin Configuration Issues
|
||||
|
||||
#### Plugin cannot connect to object storage
|
||||
|
||||
**Symptoms:**
|
||||
|
||||
- Sidecar logs show connection errors
|
||||
- Backups fail with authentication or network errors
|
||||
- `ObjectStore` resource reports errors
|
||||
|
||||
**Solution:**
|
||||
|
||||
1. **Verify `ObjectStore` CRD configuration and secrets**
|
||||
|
||||
```sh
|
||||
# Check ObjectStore resource status
|
||||
kubectl get objectstores.barmancloud.cnpg.io \
|
||||
-n <namespace> <objectstore-name> -o yaml
|
||||
|
||||
# Verify the secret exists and has correct keys for your provider
|
||||
kubectl get secret -n <namespace> <secret-name> \
|
||||
-o jsonpath='{.data}' | jq 'keys'
|
||||
```
|
||||
|
||||
2. **Check sidecar logs for connectivity issues**
|
||||
```sh
|
||||
kubectl logs -n <namespace> <cluster-pod> \
|
||||
-c plugin-barman-cloud | grep -E "connect|timeout|SSL|cert"
|
||||
```
|
||||
|
||||
3. **Adjust provider-specific settings (endpoint, path style, etc.)**
|
||||
- See [Object Store Configuration](object_stores.md) for provider-specific settings
|
||||
- Ensure `endpointURL` is set correctly for your storage provider
|
||||
- Verify network policies allow egress to your storage provider
|
||||
|
||||
## Diagnostic Commands

### Using the `cnpg` plugin for `kubectl`

The `cnpg` plugin for `kubectl` provides extended debugging capabilities.
Keep it updated:

```sh
# Install or update the `cnpg` plugin
kubectl krew install cnpg
# Or use an alternative method: https://cloudnative-pg.io/documentation/current/kubectl-plugin/#install

# Check plugin status (requires CNPG 1.27.0+)
kubectl cnpg status <cluster-name> -n <namespace>

# View cluster status in detail
kubectl cnpg status <cluster-name> -n <namespace> --verbose
```

## Getting Help

If problems persist:

1. **Check the documentation**

   - [Installation Guide](installation.mdx)
   - [Object Store Configuration](object_stores.md) (for provider-specific settings)
   - [Usage Examples](usage.md)

2. **Gather diagnostic information**

   ```sh
   # Create a diagnostic bundle (⚠️ sanitize these before sharing!)
   kubectl get objectstores.barmancloud.cnpg.io -A -o yaml > /tmp/objectstores.yaml
   kubectl get clusters.postgresql.cnpg.io -A -o yaml > /tmp/clusters.yaml
   kubectl logs -n cnpg-system deployment/barman-cloud --tail=1000 > /tmp/plugin.log
   ```

3. **Community support**

   - CloudNativePG Slack: [#cloudnativepg-users](https://cloud-native.slack.com/messages/cloudnativepg-users)
   - GitHub Issues: [plugin-barman-cloud](https://github.com/cloudnative-pg/plugin-barman-cloud/issues)

4. **Include when reporting**

   - CloudNativePG version
   - Plugin version
   - Kubernetes version
   - Cloud provider and region
   - Relevant configuration (⚠️ sanitize/redact sensitive information)
   - Error messages and logs
   - Steps to reproduce

## Known Issues and Limitations

### Current Known Issues

1. **Migration compatibility**: After migrating from in-tree backup to the
   plugin, the `kubectl cnpg backup` command syntax has changed
   ([#353](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/353)):

   ```sh
   # Old command (in-tree, no longer works after migration)
   kubectl cnpg backup -n <namespace> <cluster-name> \
     --method=barmanObjectStore

   # New command (plugin-based)
   kubectl cnpg backup -n <namespace> <cluster-name> \
     --method=plugin --plugin-name=barman-cloud.cloudnative-pg.io
   ```

### Plugin Limitations

1. **Installation method**: Currently only supports manifest and Kustomize
   installation ([#351](https://github.com/cloudnative-pg/plugin-barman-cloud/issues/351) -
   Helm chart requested)

2. **Sidecar resource sharing**: The plugin sidecar container shares pod
   resources with PostgreSQL

3. **Plugin restart behavior**: Restarting the sidecar container requires
   restarting the entire PostgreSQL pod

## Recap of General Debugging Steps

### Check Backup Status and Identify the Target Instance

```sh
# List all backups and their status
kubectl get backups.postgresql.cnpg.io -n <namespace>

# Get detailed backup information including error messages and target instance
kubectl describe backups.postgresql.cnpg.io \
  -n <namespace> <backup-name>

# Extract the target pod name from a failed backup
kubectl get backups.postgresql.cnpg.io \
  -n <namespace> <backup-name> \
  -o jsonpath='{.status.instanceID.podName}'

# Get more details including the target pod, method, phase, and error
kubectl get backups.postgresql.cnpg.io \
  -n <namespace> <backup-name> \
  -o jsonpath='Pod: {.status.instanceID.podName}{"\n"}Method: {.status.method}{"\n"}Phase: {.status.phase}{"\n"}Error: {.status.error}{"\n"}'

# Check the cluster status for backup-related information
kubectl cnpg status <cluster-name> -n <namespace> --verbose
```

### Check Sidecar Logs on the Backup Target Pod

```sh
# Identify which pod was the backup target (from the previous step)
TARGET_POD=$(kubectl get backups.postgresql.cnpg.io \
  -n <namespace> <backup-name> \
  -o jsonpath='{.status.instanceID.podName}')
echo "Backup target pod: $TARGET_POD"

# Check the sidecar logs on the specific target pod
kubectl logs -n <namespace> $TARGET_POD \
  -c plugin-barman-cloud --tail=100

# Follow the logs in real time
kubectl logs -n <namespace> $TARGET_POD \
  -c plugin-barman-cloud -f

# Check for specific errors in the target pod around the backup time
kubectl logs -n <namespace> $TARGET_POD \
  -c plugin-barman-cloud --since=10m | grep -E "ERROR|FATAL|panic|failed"

# Alternative: List all cluster pods and their roles
kubectl get pods -n <namespace> -l cnpg.io/cluster=<cluster-name> \
  -o custom-columns=NAME:.metadata.name,ROLE:.metadata.labels.cnpg\\.io/instanceRole,INSTANCE:.metadata.labels.cnpg\\.io/instanceName

# Check sidecar logs on ALL cluster pods (if the target is unclear)
for pod in $(kubectl get pods -n <namespace> -l cnpg.io/cluster=<cluster-name> -o name); do
  echo "=== Checking $pod ==="
  kubectl logs -n <namespace> $pod -c plugin-barman-cloud \
    --tail=20 | grep -i error || echo "No errors found"
done
```

### Check Events for Backup-Related Issues

```sh
# Check events for the cluster
kubectl get events -n <namespace> \
  --field-selector involvedObject.name=<cluster-name>

# Check events for failed backups
kubectl get events -n <namespace> \
  --field-selector involvedObject.kind=Backup

# Get all recent events in the namespace
kubectl get events -n <namespace> --sort-by='.lastTimestamp' | tail -20
```

### Verify `ObjectStore` Configuration

```sh
# Check the ObjectStore resource
kubectl get objectstores.barmancloud.cnpg.io \
  -n <namespace> <objectstore-name> -o yaml

# Verify the secret exists and has the correct keys
kubectl get secret -n <namespace> <secret-name> -o yaml
# Alternatively
kubectl get secret -n <namespace> <secret-name> -o jsonpath='{.data}' | jq 'keys'
```

### Common Error Messages and Solutions

* **"AccessDenied" or "403 Forbidden"** — Check cloud credentials and bucket permissions.
* **"NoSuchBucket"** — Verify the bucket exists and the endpoint URL is correct.
* **"Connection timeout"** — Check network connectivity and firewall rules.
* **"SSL certificate problem"** — For self-signed certificates, verify the CA bundle configuration.

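To see which of these signatures dominates a captured sidecar log, a quick count can help. A minimal sketch; the log content below is hypothetical stand-in data, and in practice you would capture it with `kubectl logs ... -c plugin-barman-cloud`:

```shell
# Hypothetical excerpt standing in for real sidecar output
cat > /tmp/sidecar.log <<'EOF'
ERROR: AccessDenied while uploading base backup
ERROR: connection timeout to minio:9000
ERROR: connection timeout to minio:9000
EOF

# Count occurrences of the common failure signatures
grep -oE 'AccessDenied|NoSuchBucket|timeout|SSL certificate' /tmp/sidecar.log \
  | sort | uniq -c
```
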
16
web/versioned_docs/version-0.11.0/upgrades.mdx
Normal file
@@ -0,0 +1,16 @@
---
sidebar_position: 25
---

# Upgrades

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

You can upgrade the plugin simply by installing the new version. Unless
explicitly stated below or in the release notes, no special steps are required.

## Upgrading to version 0.8.x from previous versions

Version **0.8.0** introduces breaking changes to resource naming.
To complete the upgrade successfully, follow the instructions in the
["Resource name migration guide"](resource-name-migration.md).
283
web/versioned_docs/version-0.11.0/usage.md
Normal file
@@ -0,0 +1,283 @@
---
sidebar_position: 30
---

# Using the Barman Cloud Plugin

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

After [installing the plugin](installation.mdx) in the same namespace as the
CloudNativePG operator, enabling your PostgreSQL cluster to use the Barman
Cloud Plugin involves just a few steps:

- Defining the object store containing your WAL archive and base backups, using
  your preferred [provider](object_stores.md)
- Instructing the Postgres cluster to use the Barman Cloud Plugin

From that moment, you’ll be able to issue on-demand backups or define a backup
schedule, as well as rely on the object store for recovery operations.

The rest of this page details each step, using MinIO as the object store provider.

## Defining the `ObjectStore`

An `ObjectStore` resource must be created for each object store used in your
PostgreSQL architecture. Here's an example configuration using MinIO:

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-store
spec:
  configuration:
    destinationPath: s3://backups/
    endpointURL: http://minio:9000
    s3Credentials:
      accessKeyId:
        name: minio
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: minio
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
```

The `.spec.configuration` schema follows the same format as the
[in-tree barman-cloud support](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration).
Refer to [the CloudNativePG documentation](https://cloudnative-pg.io/documentation/preview/backup_barmanobjectstore/)
for additional details.

:::important
The `serverName` parameter in the `ObjectStore` resource is retained solely for
API compatibility with the in-tree `barmanObjectStore` and must always be left empty.
When needed, use the `serverName` plugin parameter in the Cluster configuration instead.
:::

## Configuring WAL Archiving

Once the `ObjectStore` is defined, you can configure your PostgreSQL cluster
to archive WALs by referencing the store in the `.spec.plugins` section:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  imagePullPolicy: Always
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: minio-store
  storage:
    size: 1Gi
```

This configuration enables both WAL archiving and data directory backups.

## Performing a Base Backup

Once WAL archiving is enabled, the cluster is ready for backups. Backups can be
created either declaratively (with YAML manifests) or imperatively (with the
`cnpg` plugin).

### Declarative approach (YAML manifest)

Create a backup resource by applying a YAML manifest:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: backup-example
spec:
  cluster:
    name: cluster-example
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```

### Imperative approach (using the `cnpg` plugin)

The quickest way to trigger an on-demand backup is with the `cnpg` plugin:

```bash
kubectl cnpg backup -n <namespace> <cluster-name> \
  --method=plugin \
  --plugin-name=barman-cloud.cloudnative-pg.io
```

:::note Migration from in-tree backups
If you are migrating from the in-tree backup system, note the change in syntax:

```bash
# Old command (in-tree backup)
kubectl cnpg backup -n <namespace> <cluster-name> --method=barmanObjectStore

# New command (plugin-based backup)
kubectl cnpg backup -n <namespace> <cluster-name> \
  --method=plugin \
  --plugin-name=barman-cloud.cloudnative-pg.io
```
:::

## Restoring a Cluster

To restore a cluster from an object store, create a new `Cluster` resource that
references the store containing the backup. Below is an example configuration:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3
  imagePullPolicy: IfNotPresent
  bootstrap:
    recovery:
      source: source
  externalClusters:
    - name: source
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: minio-store
          serverName: cluster-example
  storage:
    size: 1Gi
```

:::important
The above configuration does **not** enable WAL archiving for the restored cluster.
:::

To enable WAL archiving for the restored cluster, include the `.spec.plugins`
section alongside the `externalClusters.plugin` section, as shown below:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3
  imagePullPolicy: IfNotPresent
  bootstrap:
    recovery:
      source: source
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        # Backup Object Store (push, read-write)
        barmanObjectName: minio-store-bis
  externalClusters:
    - name: source
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          # Recovery Object Store (pull, read-only)
          barmanObjectName: minio-store
          serverName: cluster-example
  storage:
    size: 1Gi
```

The same object store may be used for both transaction log archiving and
restoring a cluster, or you can configure separate stores for these purposes.

## Configuring Replica Clusters

You can set up a distributed topology by combining the previously defined
configurations with the `.spec.replica` section. Below is an example of how to
define a replica cluster:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-dc-a
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised

  storage:
    storageClass: csi-hostpath-sc
    size: 1Gi

  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: minio-store-a

  replica:
    self: cluster-dc-a
    primary: cluster-dc-a
    source: cluster-dc-b

  externalClusters:
    - name: cluster-dc-a
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: minio-store-a

    - name: cluster-dc-b
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: minio-store-b
```

## Configuring the plugin instance sidecar

The Barman Cloud Plugin runs as a sidecar container next to each PostgreSQL
instance pod. It manages backup, WAL archiving, and restore processes.

Configuration comes from multiple `ObjectStore` resources:

1. The one referenced in the `.spec.plugins` section of the `Cluster`. This is
   the object store used for WAL archiving and base backups.
2. The one referenced in the external cluster used in the
   `.spec.replica.source` section of the `Cluster`. This is used by the
   log-shipping designated primary to get the WAL files.
3. The one referenced in the `.spec.bootstrap.recovery.source` section of the
   `Cluster`. This is used by the initial recovery job to create the cluster
   from an existing backup.

You can fine-tune sidecar behavior in the `.spec.instanceSidecarConfiguration`
of your `ObjectStore`. These settings apply to all PostgreSQL instances that use
this object store. Any updates take effect at the next `Cluster` reconciliation
and may trigger a rollout of the `Cluster`.

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-store
spec:
  configuration:
    # [...]
  instanceSidecarConfiguration:
    retentionPolicyIntervalSeconds: 1800
    resources:
      requests:
        memory: "XXX"
        cpu: "YYY"
      limits:
        memory: "XXX"
        cpu: "YYY"
```

:::note
If more than one `ObjectStore` applies, the `instanceSidecarConfiguration` of
the one set in `.spec.plugins` has priority.
:::

@@ -103,7 +103,7 @@ spec:
 
 ### S3 Lifecycle Policy
 
-Barman Cloud uploads backup files to S3 but does not modify or delete them afterward.
+Barman Cloud uploads backup files to S3 but does not modify them afterward.
 To enhance data durability and protect against accidental or malicious loss,
 it's recommended to implement the following best practices:

@@ -206,7 +206,7 @@ When a backup fails, follow these steps in order:
    plugins:
      - name: barman-cloud.cloudnative-pg.io
        parameters:
-         barmanObjectStore: <your-objectstore-name>
+         barmanObjectName: <your-objectstore-name>
    ```

   c. **Check plugin deployment is running**:

@@ -395,7 +395,7 @@ For detailed PITR configuration and WAL management, see the
 
 3. **Adjust provider-specific settings (endpoint, path style, etc.)**
    - See [Object Store Configuration](object_stores.md) for provider-specific settings
-   - Ensure `endpointURL` and `s3UsePathStyle` match your storage type
+   - Ensure `endpointURL` match your storage type
    - Verify network policies allow egress to your storage provider
 
 ## Diagnostic Commands

8
web/versioned_sidebars/version-0.11.0-sidebars.json
Normal file
@@ -0,0 +1,8 @@
{
  "docs": [
    {
      "type": "autogenerated",
      "dirName": "."
    }
  ]
}
@@ -1,4 +1,5 @@
 [
+  "0.11.0",
   "0.10.0",
   "0.9.0",
   "0.8.0",
1756
web/yarn.lock
File diff suppressed because it is too large