diff --git a/web/versioned_docs/version-0.4.1/compression.md b/web/versioned_docs/version-0.4.1/compression.md
new file mode 100644
index 0000000..2abbede
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/compression.md
@@ -0,0 +1,43 @@
+---
+sidebar_position: 80
+---
+
+# Compression
+
+
+
+By default, backups and WAL files are archived **uncompressed**. However, the
+Barman Cloud Plugin supports multiple compression algorithms via
+`barman-cloud-backup` and `barman-cloud-wal-archive`, allowing you to optimize
+for space, speed, or a balance of both.
+
+### Supported Compression Algorithms
+
+- `bzip2`
+- `gzip`
+- `lz4` (WAL only)
+- `snappy`
+- `xz` (WAL only)
+- `zstd` (WAL only)
+
+Compression settings for base backups and WAL archives are configured
+independently. For implementation details, refer to the corresponding API
+definitions:
+
+- [`DataBackupConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration)
+- [`WalBackupConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#WalBackupConfiguration)
+
+:::important
+Compression impacts both performance and storage efficiency. Choose the right
+algorithm based on your recovery time objectives (RTO), storage capacity, and
+network throughput.
+:::
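+
+As a sketch, the following `ObjectStore` excerpt (the resource name is
+illustrative) configures different algorithms for base backups and WAL files
+through the `data` and `wal` sections:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+  name: my-store
+spec:
+  configuration:
+    # [...]
+    data:
+      compression: gzip
+    wal:
+      compression: zstd
+```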
+
+## Compression Benchmark (on MinIO)
+
+| Compression | Backup Time (ms) | Restore Time (ms) | Uncompressed Size (MB) | Compressed Size (MB) | Ratio |
+| ----------- | ---------------- | ----------------- | ---------------------- | -------------------- | ----- |
+| None | 10,927 | 7,553 | 395 | 395 | 1.0:1 |
+| bzip2 | 25,404 | 13,886 | 395 | 67 | 5.9:1 |
+| gzip | 116,281 | 3,077 | 395 | 91 | 4.3:1 |
+| snappy | 8,134 | 8,341 | 395 | 166 | 2.4:1 |
diff --git a/web/versioned_docs/version-0.4.1/concepts.md b/web/versioned_docs/version-0.4.1/concepts.md
new file mode 100644
index 0000000..3832df3
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/concepts.md
@@ -0,0 +1,177 @@
+---
+sidebar_position: 10
+---
+
+# Main Concepts
+
+
+
+:::important
+Before proceeding, make sure to review the following sections of the
+CloudNativePG documentation:
+
+- [**Backup**](https://cloudnative-pg.io/documentation/current/backup/)
+- [**WAL Archiving**](https://cloudnative-pg.io/documentation/current/wal_archiving/)
+- [**Recovery**](https://cloudnative-pg.io/documentation/current/recovery/)
+:::
+
+The **Barman Cloud Plugin** enables **hot (online) backups** of PostgreSQL
+clusters in CloudNativePG through [`barman-cloud`](https://pgbarman.org),
+supporting continuous physical backups and WAL archiving to an **object
+store**—without interrupting write operations.
+
+It also supports both **full recovery** and **Point-in-Time Recovery (PITR)**
+of a PostgreSQL cluster.
+
+## The Object Store
+
+At the core is the [`ObjectStore` custom resource (CRD)](plugin-barman-cloud.v1.md#objectstorespec),
+which acts as the interface between the PostgreSQL cluster and the target
+object storage system. It allows you to configure:
+
+- **Authentication and bucket location** via the `.spec.configuration` section
+- **WAL archiving** settings—such as compression type, parallelism, and
+ server-side encryption—under `.spec.configuration.wal`
+- **Base backup options**—with similar settings for compression, concurrency,
+ and encryption—under `.spec.configuration.data`
+- **Retention policies** to manage the life-cycle of archived WALs and backups
+ via `.spec.configuration.retentionPolicy`
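+
+A minimal sketch tying these sections together (bucket, endpoint, secret
+names, and the retention window are illustrative placeholders):
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+  name: cluster-example
+spec:
+  configuration:
+    destinationPath: s3://backups/
+    endpointURL: http://minio:9000
+    s3Credentials:
+      accessKeyId:
+        name: minio-creds
+        key: ACCESS_KEY_ID
+      secretAccessKey:
+        name: minio-creds
+        key: ACCESS_SECRET_KEY
+    wal:
+      compression: gzip
+    data:
+      compression: gzip
+    retentionPolicy: "30d"
+```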
+
+WAL files are archived in the `wals` directory, while base backups are stored
+as **tarballs** in the `base` directory, following the
+[Barman Cloud convention](https://docs.pgbarman.org/cloud/latest/usage/#object-store-layout).
+
+The plugin also offers advanced capabilities, including
+[backup tagging](misc.md#backup-object-tagging) and
+[extra options for backups and WAL archiving](misc.md#extra-options-for-backup-and-wal-archiving).
+
+:::tip
+For details, refer to the
+[API reference for the `ObjectStore` resource](plugin-barman-cloud.v1.md#objectstorespec).
+:::
+
+## Integration with a CloudNativePG Cluster
+
+CloudNativePG can delegate continuous backup and recovery responsibilities to
+the **Barman Cloud Plugin** by configuring the `.spec.plugins` section of a
+`Cluster` resource. This setup requires a corresponding `ObjectStore` resource
+to be defined.
+
+:::important
+While it is technically possible to reuse the same `ObjectStore` for multiple
+`Cluster` resources within the same namespace, it is strongly recommended to
+dedicate one object store per PostgreSQL cluster to ensure data isolation and
+operational clarity.
+:::
+
+The following example demonstrates how to configure a CloudNativePG cluster
+named `cluster-example` to use a previously defined `ObjectStore` (also named
+`cluster-example`) in the same namespace. Setting `isWALArchiver: true` enables
+WAL archiving through the plugin:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ # Other cluster settings...
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ barmanObjectName: cluster-example
+```
+
+## Backup of a Postgres Cluster
+
+Once the object store is defined and the `Cluster` is configured to use the
+Barman Cloud Plugin, **WAL archiving is activated immediately** on the
+PostgreSQL primary.
+
+Physical base backups are seamlessly managed by CloudNativePG through the
+`Backup` and `ScheduledBackup` resources, used respectively for
+[on-demand](https://cloudnative-pg.io/documentation/current/backup/#on-demand-backups)
+and
+[scheduled](https://cloudnative-pg.io/documentation/current/backup/#scheduled-backups)
+backups.
+
+To use the Barman Cloud Plugin, you must set the `method` to `plugin` and
+configure the `pluginConfiguration` section as shown:
+
+```yaml
+[...]
+spec:
+ method: plugin
+ pluginConfiguration:
+ name: barman-cloud.cloudnative-pg.io
+ [...]
+```
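+
+For instance, a complete on-demand `Backup` resource targeting the
+`cluster-example` cluster defined earlier could look like this (the backup
+name is illustrative):
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Backup
+metadata:
+  name: backup-example
+spec:
+  cluster:
+    name: cluster-example
+  method: plugin
+  pluginConfiguration:
+    name: barman-cloud.cloudnative-pg.io
+```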
+
+With this configuration, CloudNativePG supports:
+
+- Backups from both **primary** and **standby** instances
+- Backups from **designated primaries** in a distributed topology using
+ [replica clusters](https://cloudnative-pg.io/documentation/current/replica_cluster/)
+
+:::tip
+For details on how to back up from a standby, refer to the official documentation:
+[Backup from a standby](https://cloudnative-pg.io/documentation/current/backup/#backup-from-a-standby).
+:::
+
+:::important
+Both backup and WAL archiving operations are executed by sidecar containers
+running in the same pod as the PostgreSQL `Cluster` primary instance—except
+when backups are taken from a standby, in which case the sidecar runs alongside
+the standby pod.
+The sidecar containers use a [dedicated container image](images.md) that
+includes only the supported version of Barman Cloud.
+:::
+
+## Recovery of a Postgres Cluster
+
+In PostgreSQL, *recovery* refers to the process of starting a database instance
+from an existing backup. The Barman Cloud Plugin integrates with CloudNativePG
+to support both **full recovery** and **Point-in-Time Recovery (PITR)** from an
+object store.
+
+Recovery in this context is *not in-place*: it bootstraps a brand-new
+PostgreSQL cluster from a backup and replays the necessary WAL files to reach
+the desired recovery target.
+
+To perform a recovery, define an *external cluster* that references the
+appropriate `ObjectStore`, and use it as the source in the `bootstrap` section
+of the target cluster:
+
+```yaml
+[...]
+spec:
+ [...]
+ bootstrap:
+ recovery:
+ source: source
+ externalClusters:
+ - name: source
+ plugin:
+ name: barman-cloud.cloudnative-pg.io
+ parameters:
+ barmanObjectName: cluster-example
+ serverName: cluster-example
+ [...]
+```
+
+The critical element here is the `externalClusters` section of the `Cluster`
+resource, where the `plugin` stanza instructs CloudNativePG to use the Barman
+Cloud Plugin to access the object store for recovery.
+
+This same mechanism can be used for a variety of scenarios enabled by the
+CloudNativePG API, including:
+
+* **Full cluster recovery** from the latest backup
+* **Point-in-Time Recovery (PITR)**
+* Bootstrapping **replica clusters** in a distributed topology
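+
+As an illustrative sketch of the PITR case, a `recoveryTarget` stanza (here
+with an assumed timestamp) can be added to the `recovery` section of the
+example above:
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      source: source
+      recoveryTarget:
+        targetTime: "2024-05-20 12:00:00.000000+00"
+  externalClusters:
+    - name: source
+      plugin:
+        name: barman-cloud.cloudnative-pg.io
+        parameters:
+          barmanObjectName: cluster-example
+          serverName: cluster-example
+```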
+
+:::tip
+For complete instructions and advanced use cases, refer to the official
+[Recovery documentation](https://cloudnative-pg.io/documentation/current/recovery/).
+:::
diff --git a/web/versioned_docs/version-0.4.1/images.md b/web/versioned_docs/version-0.4.1/images.md
new file mode 100644
index 0000000..f6c32d3
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/images.md
@@ -0,0 +1,37 @@
+---
+sidebar_position: 99
+---
+
+# Container Images
+
+
+
+The Barman Cloud Plugin is distributed using two container images:
+
+- One for deploying the plugin components
+- One for the sidecar that runs alongside each PostgreSQL instance in a
+ CloudNativePG `Cluster` using the plugin
+
+## Plugin Container Image
+
+The plugin image contains the logic required to operate the Barman Cloud Plugin
+within your Kubernetes environment with CloudNativePG. It is published on the
+GitHub Container Registry at `ghcr.io/cloudnative-pg/plugin-barman-cloud`.
+
+This image is built from the
+[`Dockerfile.plugin`](https://github.com/cloudnative-pg/plugin-barman-cloud/blob/main/containers/Dockerfile.plugin)
+in the plugin repository.
+
+## Sidecar Container Image
+
+The sidecar image is used within each PostgreSQL pod in the cluster. It
+includes the latest supported version of Barman Cloud and is responsible for
+performing WAL archiving and backups on behalf of CloudNativePG.
+
+It is available at `ghcr.io/cloudnative-pg/plugin-barman-cloud-sidecar` and is
+built from the
+[`Dockerfile.sidecar`](https://github.com/cloudnative-pg/plugin-barman-cloud/blob/main/containers/Dockerfile.sidecar).
+
+These sidecar images are designed to work seamlessly with the
+[`minimal` PostgreSQL container images](https://github.com/cloudnative-pg/postgres-containers?tab=readme-ov-file#minimal-images)
+maintained by the CloudNativePG Community.
diff --git a/web/versioned_docs/version-0.4.1/installation.mdx b/web/versioned_docs/version-0.4.1/installation.mdx
new file mode 100644
index 0000000..85bd41b
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/installation.mdx
@@ -0,0 +1,109 @@
+---
+sidebar_position: 20
+---
+
+# Installation
+
+:::important
+1. The plugin **must** be installed in the same namespace as the CloudNativePG
+ operator (typically `cnpg-system`).
+
+2. Keep in mind that the operator's **listening namespaces** may differ from its
+ installation namespace. Double-check this to avoid configuration issues.
+:::
+
+## Verifying the Requirements
+
+Before installing the plugin, make sure the [requirements](intro.md#requirements) are met.
+
+### CloudNativePG Version
+
+Ensure you're running a version of CloudNativePG that is compatible with the
+plugin. If installed in the default `cnpg-system` namespace, you can verify the
+version with:
+
+```sh
+kubectl get deployment -n cnpg-system cnpg-controller-manager -o yaml \
+ | grep ghcr.io/cloudnative-pg/cloudnative-pg
+```
+
+Example output:
+
+```output
+image: ghcr.io/cloudnative-pg/cloudnative-pg:1.26.0
+```
+
+The version **must be 1.26 or newer**.
+
+### cert-manager
+
+Use the [cmctl](https://cert-manager.io/docs/reference/cmctl/#installation)
+tool to confirm that `cert-manager` is installed and available:
+
+```sh
+cmctl check api
+```
+
+Example output:
+
+```output
+The cert-manager API is ready
+```
+
+Both checks are required before proceeding with the installation.
+
+## Installing the Barman Cloud Plugin
+
+import { InstallationSnippet } from '@site/src/components/Installation';
+
+Install the plugin using `kubectl` by applying the manifest for the latest
+release:
+
+
+<InstallationSnippet />
+
+Example output:
+
+```output
+customresourcedefinition.apiextensions.k8s.io/objectstores.barmancloud.cnpg.io created
+serviceaccount/plugin-barman-cloud created
+role.rbac.authorization.k8s.io/leader-election-role created
+clusterrole.rbac.authorization.k8s.io/metrics-auth-role created
+clusterrole.rbac.authorization.k8s.io/metrics-reader created
+clusterrole.rbac.authorization.k8s.io/objectstore-editor-role created
+clusterrole.rbac.authorization.k8s.io/objectstore-viewer-role created
+clusterrole.rbac.authorization.k8s.io/plugin-barman-cloud created
+rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
+clusterrolebinding.rbac.authorization.k8s.io/metrics-auth-rolebinding created
+clusterrolebinding.rbac.authorization.k8s.io/plugin-barman-cloud-binding created
+secret/plugin-barman-cloud-8tfddg42gf created
+service/barman-cloud created
+deployment.apps/barman-cloud configured
+certificate.cert-manager.io/barman-cloud-client created
+certificate.cert-manager.io/barman-cloud-server created
+issuer.cert-manager.io/selfsigned-issuer created
+```
+
+Finally, check that the deployment is up and running:
+
+```sh
+kubectl rollout status deployment \
+ -n cnpg-system barman-cloud
+```
+
+Example output:
+
+```output
+deployment "barman-cloud" successfully rolled out
+```
+
+This confirms that the plugin is deployed and ready to use.
+
+## Testing the latest development snapshot
+
+You can also test the latest development snapshot of the plugin with the
+following command:
+
+```sh
+kubectl apply -f \
+ https://raw.githubusercontent.com/cloudnative-pg/plugin-barman-cloud/refs/heads/main/manifest.yaml
+```
diff --git a/web/versioned_docs/version-0.4.1/intro.md b/web/versioned_docs/version-0.4.1/intro.md
new file mode 100644
index 0000000..f22f383
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/intro.md
@@ -0,0 +1,67 @@
+---
+sidebar_position: 1
+sidebar_label: "Introduction"
+---
+
+# Barman Cloud Plugin
+
+
+
+The **Barman Cloud Plugin** for [CloudNativePG](https://cloudnative-pg.io/)
+enables online continuous physical backups of PostgreSQL clusters to object storage
+using the `barman-cloud` suite from the [Barman](https://docs.pgbarman.org/release/latest/)
+project.
+
+:::important
+If you plan to migrate your existing CloudNativePG cluster to the new
+plugin-based approach using the Barman Cloud Plugin, see
+["Migrating from Built-in CloudNativePG Backup"](migration.md)
+for detailed instructions.
+:::
+
+## Requirements
+
+To use the Barman Cloud Plugin, you need:
+
+- [CloudNativePG](https://cloudnative-pg.io) version **1.26 or newer**
+- [cert-manager](https://cert-manager.io/) to enable TLS communication between
+ the plugin and the operator
+
+## Key Features
+
+This plugin provides the following capabilities:
+
+- Physical online backup of the data directory
+- Physical restore of the data directory
+- Write-Ahead Log (WAL) archiving
+- WAL restore
+- Full cluster recovery
+- Point-in-Time Recovery (PITR)
+- Seamless integration with replica clusters for bootstrap and WAL restore from archive
+
+:::important
+The Barman Cloud Plugin is designed to **replace the in-tree object storage support**
+previously provided via the `.spec.backup.barmanObjectStore` section in the
+`Cluster` resource.
+Backups created using the in-tree approach are fully supported and compatible
+with this plugin.
+:::
+
+## Supported Object Storage Providers
+
+The plugin works with all storage backends supported by `barman-cloud`, including:
+
+- **Amazon S3**
+- **Google Cloud Storage**
+- **Microsoft Azure Blob Storage**
+
+In addition, the following S3-compatible and simulator solutions have been
+tested and verified:
+
+- [MinIO](https://min.io/) – An S3-compatible storage solution
+- [Azurite](https://github.com/Azure/Azurite) – A simulator for Azure Blob Storage
+- [fake-gcs-server](https://github.com/fsouza/fake-gcs-server) – A simulator for Google Cloud Storage
+
+:::tip
+For more details, refer to [Object Store Providers](object_stores.md).
+:::
diff --git a/web/versioned_docs/version-0.4.1/migration.md b/web/versioned_docs/version-0.4.1/migration.md
new file mode 100644
index 0000000..43c4b80
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/migration.md
@@ -0,0 +1,259 @@
+---
+sidebar_position: 40
+---
+
+# Migrating from Built-in CloudNativePG Backup
+
+
+
+The in-tree support for Barman Cloud in CloudNativePG is **deprecated starting
+from version 1.26** and will be removed in a future release.
+
+If you're currently relying on the built-in Barman Cloud integration, you can
+migrate seamlessly to the new **plugin-based architecture** using the Barman
+Cloud Plugin, without data loss. Follow these steps:
+
+- [Install the Barman Cloud Plugin](installation.mdx)
+- Create an `ObjectStore` resource by translating the contents of the
+ `.spec.backup.barmanObjectStore` section from your existing `Cluster`
+ definition
+- Modify the `Cluster` resource in a single atomic change to switch from
+ in-tree backup to the plugin
+- Update any `ScheduledBackup` resources to use the plugin
+- Update the `externalClusters` configuration, where applicable
+
+:::tip
+For a working example, refer to [this commit](https://github.com/cloudnative-pg/cnpg-playground/commit/596f30e252896edf8f734991c3538df87630f6f7)
+from the [CloudNativePG Playground project](https://github.com/cloudnative-pg/cnpg-playground),
+which demonstrates a full migration.
+:::
+
+---
+
+## Step 1: Define the `ObjectStore`
+
+Begin by creating an `ObjectStore` resource in the same namespace as your
+PostgreSQL `Cluster`.
+
+There is a **direct mapping** between the `.spec.backup.barmanObjectStore`
+section in CloudNativePG and the `.spec.configuration` field in the
+`ObjectStore` CR. The conversion is mostly mechanical, with one key difference:
+
+:::warning
+In the plugin architecture, retention policies are defined as part of the `ObjectStore`.
+In contrast, the in-tree implementation defined them at the `Cluster` level.
+:::
+
+If your `Cluster` used `.spec.backup.retentionPolicy`, move that configuration
+to `.spec.configuration.retentionPolicy` in the `ObjectStore`.
+
+---
+
+### Example
+
+Here’s an excerpt from a traditional in-tree CloudNativePG backup configuration
+taken from the CloudNativePG Playground project:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: pg-eu
+spec:
+ # [...]
+ backup:
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio-eu:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio-eu
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio-eu
+ key: ACCESS_SECRET_KEY
+ wal:
+ compression: gzip
+```
+
+This configuration translates to the following `ObjectStore` resource for the
+plugin:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: minio-eu
+spec:
+ configuration:
+ destinationPath: s3://backups/
+ endpointURL: http://minio-eu:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio-eu
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio-eu
+ key: ACCESS_SECRET_KEY
+ wal:
+ compression: gzip
+```
+
+As you can see, the contents of `barmanObjectStore` have been copied directly
+under the `configuration` field of the `ObjectStore` resource, using the same
+secret references.
+
+## Step 2: Update the `Cluster` for plugin WAL archiving
+
+Once the `ObjectStore` resource is in place, update the `Cluster` resource as
+follows in a single atomic change:
+
+- Remove the `.spec.backup.barmanObjectStore` section
+- Remove `.spec.backup.retentionPolicy` if it was defined (as it is now in the
+ `ObjectStore`)
+- Remove the entire `.spec.backup` section if it is now empty
+- Add `barman-cloud.cloudnative-pg.io` to the `plugins` list, as described in
+ [Configuring WAL archiving](usage.md#configuring-wal-archiving)
+
+This will trigger a rolling update of the `Cluster`, switching continuous
+backup from the in-tree implementation to the plugin-based approach.
+
+### Example
+
+The updated `pg-eu` cluster will have this configuration instead of the
+previous `backup` section:
+
+```yaml
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ barmanObjectName: minio-eu
+```
+
+---
+
+## Step 3: Update the `ScheduledBackup`
+
+After switching the `Cluster` to use the plugin, update your `ScheduledBackup`
+resources to match.
+
+Set the backup `method` to `plugin` and reference the plugin name via
+`pluginConfiguration`, as shown in ["Performing a base backup"](usage.md#performing-a-base-backup).
+
+### Example
+
+Original in-tree `ScheduledBackup`:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: ScheduledBackup
+metadata:
+ name: pg-eu-backup
+spec:
+ cluster:
+ name: pg-eu
+ schedule: '0 0 0 * * *'
+ backupOwnerReference: self
+```
+
+Updated version using the plugin:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: ScheduledBackup
+metadata:
+ name: pg-eu-backup
+spec:
+ cluster:
+ name: pg-eu
+ schedule: '0 0 0 * * *'
+ backupOwnerReference: self
+ method: plugin
+ pluginConfiguration:
+ name: barman-cloud.cloudnative-pg.io
+```
+
+---
+
+## Step 4: Update the `externalClusters` configuration
+
+If your `Cluster` relies on one or more external clusters that use the in-tree
+Barman Cloud integration, you need to update those configurations to use the
+plugin-based architecture.
+
+When a replica cluster fetches WAL files or base backups from an external
+source that used the built-in backup method, follow these steps:
+
+1. Create a corresponding `ObjectStore` resource for the external cluster, as
+ shown in [Step 1](#step-1-define-the-objectstore)
+2. Update the `externalClusters` section of your replica cluster to use the
+ plugin instead of the in-tree `barmanObjectStore` field
+
+### Example
+
+Consider the original configuration using in-tree Barman Cloud:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: pg-us
+spec:
+ # [...]
+ externalClusters:
+ - name: pg-eu
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio-eu:9000
+ serverName: pg-eu
+ s3Credentials:
+ accessKeyId:
+ name: minio-eu
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio-eu
+ key: ACCESS_SECRET_KEY
+ wal:
+ compression: gzip
+```
+
+Create the `ObjectStore` resource for the external cluster:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: minio-eu
+spec:
+ configuration:
+ destinationPath: s3://backups/
+ endpointURL: http://minio-eu:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio-eu
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio-eu
+ key: ACCESS_SECRET_KEY
+ wal:
+ compression: gzip
+```
+
+Update the external cluster configuration to use the plugin:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: pg-us
+spec:
+ # [...]
+ externalClusters:
+ - name: pg-eu
+ plugin:
+ name: barman-cloud.cloudnative-pg.io
+ parameters:
+ barmanObjectName: minio-eu
+ serverName: pg-eu
+```
diff --git a/web/versioned_docs/version-0.4.1/misc.md b/web/versioned_docs/version-0.4.1/misc.md
new file mode 100644
index 0000000..4d3cefc
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/misc.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 90
+---
+
+# Miscellaneous
+
+
+
+## Backup Object Tagging
+
+You can attach key-value metadata tags to backup artifacts—such as base
+backups, WAL files, and history files—via the `.spec.configuration` section of
+the `ObjectStore` resource.
+
+- `tags`: applied to base backups and WAL files
+- `historyTags`: applied to history files only
+
+### Example
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: my-store
+spec:
+ configuration:
+ [...]
+ tags:
+ backupRetentionPolicy: "expire"
+ historyTags:
+ backupRetentionPolicy: "keep"
+ [...]
+```
+
+## Extra Options for Backup and WAL Archiving
+
+You can pass additional command-line arguments to `barman-cloud-backup` and
+`barman-cloud-wal-archive` using the `additionalCommandArgs` field in the
+`ObjectStore` configuration.
+
+- `.spec.configuration.data.additionalCommandArgs`: for `barman-cloud-backup`
+- `.spec.configuration.wal.additionalCommandArgs`: for `barman-cloud-wal-archive`
+
+Each field accepts a list of string arguments. If an argument is already
+configured elsewhere in the plugin, the duplicate will be ignored.
+
+### Example: Extra Backup Options
+
+```yaml
+kind: ObjectStore
+metadata:
+ name: my-store
+spec:
+ configuration:
+ data:
+ additionalCommandArgs:
+ - "--min-chunk-size=5MB"
+ - "--read-timeout=60"
+```
+
+### Example: Extra WAL Archive Options
+
+```yaml
+kind: ObjectStore
+metadata:
+ name: my-store
+spec:
+ configuration:
+ wal:
+ additionalCommandArgs:
+ - "--max-concurrency=1"
+ - "--read-timeout=60"
+```
+
+For a complete list of supported options, refer to the
+[official Barman Cloud documentation](https://docs.pgbarman.org/release/latest/).
diff --git a/web/versioned_docs/version-0.4.1/object_stores.md b/web/versioned_docs/version-0.4.1/object_stores.md
new file mode 100644
index 0000000..9ca5a2a
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/object_stores.md
@@ -0,0 +1,465 @@
+---
+sidebar_position: 50
+---
+
+# Object Store Providers
+
+
+
+The Barman Cloud Plugin enables the storage of PostgreSQL cluster backup files
+in any object storage service supported by the
+[Barman Cloud infrastructure](https://docs.pgbarman.org/release/latest/).
+
+Currently, Barman Cloud supports the following providers:
+
+- [Amazon S3](#aws-s3)
+- [Microsoft Azure Blob Storage](#azure-blob-storage)
+- [Google Cloud Storage](#google-cloud-storage)
+
+You may also use any S3- or Azure-compatible implementation of the above
+services.
+
+To configure object storage with Barman Cloud, you must define an
+[`ObjectStore` object](plugin-barman-cloud.v1.md#objectstore), which
+establishes the connection between your PostgreSQL cluster and the object
+storage backend.
+
+Configuration details — particularly around authentication — will vary depending on
+the specific object storage provider you are using.
+
+The following sections detail the setup for each.
+
+---
+
+## AWS S3
+
+[AWS Simple Storage Service (S3)](https://aws.amazon.com/s3/) is one of the
+most widely adopted object storage solutions.
+
+The Barman Cloud plugin for CloudNativePG integrates with S3 through two
+primary authentication mechanisms:
+
+- [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) —
+ recommended for clusters running on EKS
+- Access keys — using `ACCESS_KEY_ID` and `ACCESS_SECRET_KEY` credentials
+
+### Access Keys
+
+To authenticate using access keys, you’ll need:
+
+- `ACCESS_KEY_ID`: the public key used to authenticate to S3
+- `ACCESS_SECRET_KEY`: the corresponding secret key
+- `ACCESS_SESSION_TOKEN`: (optional) a temporary session token, if required
+
+These credentials must be stored securely in a Kubernetes secret:
+
+```sh
+kubectl create secret generic aws-creds \
+  --from-literal=ACCESS_KEY_ID=<access key here> \
+  --from-literal=ACCESS_SECRET_KEY=<secret key here>
+# --from-literal=ACCESS_SESSION_TOKEN=<session token here> # if required
+```
+
+The credentials will be encrypted at rest if your Kubernetes environment
+supports it.
+
+You can then reference the secret in your `ObjectStore` definition:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: aws-store
+spec:
+ configuration:
+ destinationPath: "s3://BUCKET_NAME/path/to/folder"
+ s3Credentials:
+ accessKeyId:
+ name: aws-creds
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: aws-creds
+ key: ACCESS_SECRET_KEY
+ [...]
+```
+
+### IAM Role for Service Account (IRSA)
+
+To use IRSA with EKS, configure the service account of the PostgreSQL cluster
+with the appropriate annotation:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ [...]
+spec:
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ eks.amazonaws.com/role-arn: arn:[...]
+ [...]
+```
+
+### S3 Lifecycle Policy
+
+Barman Cloud uploads backup files to S3 but does not modify or delete them afterward.
+To enhance data durability and protect against accidental or malicious loss,
+it's recommended to implement the following best practices:
+
+- Enable object versioning
+- Enable object locking to prevent objects from being deleted or overwritten
+ for a defined period or indefinitely (this provides an additional layer of
+ protection against accidental deletion and ransomware attacks)
+- Set lifecycle rules to expire current versions a few days after your Barman
+ retention window
+- Expire non-current versions after a longer period
+
+These strategies help you safeguard backups without requiring broad delete
+permissions, ensuring both security and compliance with minimal operational
+overhead.
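+
+As an illustrative sketch of the last two points (the retention figures are
+assumptions, tied here to a 30-day Barman retention window), a bucket
+lifecycle configuration applied with `aws s3api
+put-bucket-lifecycle-configuration` could look like:
+
+```json
+{
+  "Rules": [
+    {
+      "ID": "expire-barman-backups",
+      "Status": "Enabled",
+      "Filter": { "Prefix": "" },
+      "Expiration": { "Days": 37 },
+      "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
+    }
+  ]
+}
+```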
+
+
+### S3-Compatible Storage Providers
+
+You can use S3-compatible services like **MinIO**, **Linode (Akamai) Object Storage**,
+or **DigitalOcean Spaces** by specifying a custom `endpointURL`.
+
+Example with Linode (Akamai) Object Storage (`us-east1`):
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: linode-store
+spec:
+ configuration:
+ destinationPath: "s3://BUCKET_NAME/"
+ endpointURL: "https://us-east1.linodeobjects.com"
+ s3Credentials:
+ [...]
+ [...]
+```
+
+Example with DigitalOcean Spaces (SFO3, path-style):
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: digitalocean-store
+spec:
+ configuration:
+ destinationPath: "s3://BUCKET_NAME/path/to/folder"
+ endpointURL: "https://sfo3.digitaloceanspaces.com"
+ s3Credentials:
+ [...]
+ [...]
+```
+
+### Using Object Storage with a Private CA
+
+For object storage services (e.g., MinIO) that use HTTPS with certificates
+signed by a private CA, set the `endpointCA` field in the `ObjectStore`
+definition. Unless you already have it, create a Kubernetes `Secret` with the
+CA bundle:
+
+```sh
+kubectl create secret generic my-ca-secret --from-file=ca.crt
+```
+
+Then reference it:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: minio-store
+spec:
+ configuration:
+    endpointURL: <endpoint URL here>
+ endpointCA:
+ name: my-ca-secret
+ key: ca.crt
+ [...]
+```
+
+
+:::note
+If you want `ConfigMaps` and `Secrets` to be **automatically** reloaded by
+instances, you can add a label with the key `cnpg.io/reload` to the
+`Secrets`/`ConfigMaps`. Otherwise, you will have to reload the instances using the
+`kubectl cnpg reload` subcommand.
+:::
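+
+For example, the CA secret above can be labelled for automatic reloading by
+declaring it as follows (the `data` value is a placeholder):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-ca-secret
+  labels:
+    cnpg.io/reload: ""
+type: Opaque
+data:
+  ca.crt: <base64-encoded CA bundle>
+```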
+
+---
+
+## Azure Blob Storage
+
+[Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
+is Microsoft’s cloud-based object storage solution.
+
+Barman Cloud supports the following authentication methods:
+
+- [Connection String](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string)
+- Storage Account Name + [Access Key](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
+- Storage Account Name + [SAS Token](https://learn.microsoft.com/en-us/azure/storage/blobs/sas-service-create)
+- [Azure AD Workload Identity](https://azure.github.io/azure-workload-identity/docs/introduction.html)
+
+### Azure AD Workload Identity
+
+This method avoids storing credentials in Kubernetes via the
+`.spec.configuration.inheritFromAzureAD` option:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: azure-store
+spec:
+ configuration:
+    destinationPath: "<destination path here>"
+ azureCredentials:
+ inheritFromAzureAD: true
+ [...]
+```
+
+### Access Key, SAS Token, or Connection String
+
+Store credentials in a Kubernetes secret:
+
+```sh
+kubectl create secret generic azure-creds \
+  --from-literal=AZURE_STORAGE_ACCOUNT=<storage account name> \
+  --from-literal=AZURE_STORAGE_KEY=<storage account key> \
+  --from-literal=AZURE_STORAGE_SAS_TOKEN=<SAS token> \
+  --from-literal=AZURE_STORAGE_CONNECTION_STRING=<connection string>
+```
+
+Then reference the required keys in your `ObjectStore`:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: azure-store
+spec:
+ configuration:
+    destinationPath: "<destination path here>"
+ azureCredentials:
+ connectionString:
+ name: azure-creds
+        key: AZURE_STORAGE_CONNECTION_STRING
+ storageAccount:
+ name: azure-creds
+ key: AZURE_STORAGE_ACCOUNT
+ storageKey:
+ name: azure-creds
+ key: AZURE_STORAGE_KEY
+ storageSasToken:
+ name: azure-creds
+ key: AZURE_STORAGE_SAS_TOKEN
+ [...]
+```
+
+For Azure Blob, the destination path format is:
+
+```
+<http|https>://<account-name>.<service-name>.core.windows.net/<container>/<blob>
+```
+
+### Azure-Compatible Providers
+
+If you are using a different implementation of the Azure Blob Storage API
+(e.g., Azurite or the Azure Storage emulator), the destination path format is:
+
+```
+<http|https>://<local-machine-address>:<port>/<account-name>/<container>/<blob>
+```
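+
+As an illustrative sketch, an `ObjectStore` pointing at a local Azurite
+instance could look as follows (the resource names, host, and port are
+assumptions; `devstoreaccount1` is Azurite's default development account):
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+  name: azurite-store
+spec:
+  configuration:
+    # Host and port are illustrative; adjust to your Azurite deployment
+    destinationPath: http://azurite:10000/devstoreaccount1/backups
+    azureCredentials:
+      connectionString:
+        name: azurite-creds
+        key: AZURE_STORAGE_CONNECTION_STRING
+```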
+
+---
+
+## Google Cloud Storage
+
+[Google Cloud Storage](https://cloud.google.com/storage/) is supported with two
+authentication modes:
+
+- **GKE Workload Identity** (recommended inside Google Kubernetes Engine)
+- **Service Account JSON key** via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable
+
+### GKE Workload Identity
+
+Use the [Workload Identity authentication](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
+when running in GKE:
+
+1. Set `googleCredentials.gkeEnvironment` to `true` in the `ObjectStore`
+ resource
+2. Annotate the `serviceAccountTemplate` in the `Cluster` resource with the GCP
+ service account
+
+For example, in the `ObjectStore` resource:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: google-store
+spec:
+ configuration:
+    destinationPath: "gs://<destination path here>"
+ googleCredentials:
+ gkeEnvironment: true
+```
+
+And in the `Cluster` resource:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+spec:
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ iam.gke.io/gcp-service-account: [...].iam.gserviceaccount.com
+```
+
+### Service Account JSON Key
+
+Follow Google’s [authentication setup](https://cloud.google.com/docs/authentication/getting-started),
+then:
+
+```sh
+kubectl create secret generic backup-creds --from-file=gcsCredentials=gcs_credentials_file.json
+```
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: google-store
+spec:
+ configuration:
+    destinationPath: "gs://<destination path here>"
+ googleCredentials:
+ applicationCredentials:
+ name: backup-creds
+ key: gcsCredentials
+ [...]
+```
+
+:::important
+This authentication method generates a JSON file within the container
+with all the credentials required to access your Google Cloud Storage
+bucket. As a result, if someone gains access to the `Pod`, they will also have
+write permissions to the bucket.
+:::
+
+---
+
+
+## MinIO Gateway
+
+MinIO Gateway can proxy requests to cloud object storage providers like S3 or GCS.
+For more information, refer to [MinIO official documentation](https://docs.min.io/).
+
+### Setup
+
+Create MinIO access credentials:
+
+```sh
+kubectl create secret generic minio-creds \
+  --from-literal=MINIO_ACCESS_KEY=<minio access key> \
+  --from-literal=MINIO_SECRET_KEY=<minio secret key>
+```
+
+:::note
+In this setup, the cloud object storage credentials are used only by the MinIO
+Gateway.
+:::
+
+Expose MinIO Gateway via `ClusterIP`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: minio-gateway-service
+spec:
+ type: ClusterIP
+ ports:
+ - port: 9000
+ targetPort: 9000
+ protocol: TCP
+ selector:
+ app: minio
+```
+
+The following is an excerpt of an example `Deployment` that relays requests to S3:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+[...]
+spec:
+  template:
+    spec:
+      containers:
+      - name: minio
+        image: minio/minio:RELEASE.2020-06-03T22-13-49Z
+        args: ["gateway", "s3"]
+        ports:
+        - containerPort: 9000
+        env:
+        - name: MINIO_ACCESS_KEY
+          valueFrom:
+            secretKeyRef:
+              name: minio-creds
+              key: MINIO_ACCESS_KEY
+        - name: MINIO_SECRET_KEY
+          valueFrom:
+            secretKeyRef:
+              name: minio-creds
+              key: MINIO_SECRET_KEY
+        - name: AWS_ACCESS_KEY_ID
+          valueFrom:
+            secretKeyRef:
+              name: aws-creds
+              key: ACCESS_KEY_ID
+        - name: AWS_SECRET_ACCESS_KEY
+          valueFrom:
+            secretKeyRef:
+              name: aws-creds
+              key: ACCESS_SECRET_KEY
+        # Uncomment the section below if a session token is required
+        # - name: AWS_SESSION_TOKEN
+        #   valueFrom:
+        #     secretKeyRef:
+        #       name: aws-creds
+        #       key: ACCESS_SESSION_TOKEN
+```
+
+Then configure the MinIO Gateway service as the `endpointURL` in the
+`ObjectStore` definition, replacing `BUCKET_NAME` with the name of your bucket:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: minio-store
+spec:
+ configuration:
+ destinationPath: s3://BUCKET_NAME/
+ endpointURL: http://minio-gateway-service:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio-creds
+ key: MINIO_ACCESS_KEY
+ secretAccessKey:
+ name: minio-creds
+ key: MINIO_SECRET_KEY
+ [...]
+```
+
+:::important
+Verify that archived WAL files are present in `s3://BUCKET_NAME/` before
+proceeding with a backup.
+:::
+
+---
diff --git a/web/versioned_docs/version-0.4.1/parameters.md b/web/versioned_docs/version-0.4.1/parameters.md
new file mode 100644
index 0000000..ca0cd2b
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/parameters.md
@@ -0,0 +1,19 @@
+---
+sidebar_position: 100
+---
+
+# Parameters
+
+
+
+The following parameters are available for the Barman Cloud Plugin:
+
+- `barmanObjectName`: references the `ObjectStore` resource to be used by the
+ plugin.
+- `serverName`: specifies the server name in the object store.
+
+:::important
+The `serverName` parameter in the `ObjectStore` resource is retained solely for
+API compatibility with the in-tree `barmanObjectStore` and must always be left empty.
+When needed, use the `serverName` plugin parameter in the Cluster configuration instead.
+:::
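+
+As an illustration, both parameters are set in the `.spec.plugins` section of
+the `Cluster` resource (the resource names below are placeholders):
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  plugins:
+  - name: barman-cloud.cloudnative-pg.io
+    isWALArchiver: true
+    parameters:
+      # References the ObjectStore resource
+      barmanObjectName: my-store
+      # Optional: the server name to use in the object store
+      serverName: cluster-example-v2
+  storage:
+    size: 1Gi
+```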
diff --git a/web/versioned_docs/version-0.4.1/plugin-barman-cloud.v1.md b/web/versioned_docs/version-0.4.1/plugin-barman-cloud.v1.md
new file mode 100644
index 0000000..552dc27
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/plugin-barman-cloud.v1.md
@@ -0,0 +1,105 @@
+# API Reference
+
+## Packages
+- [barmancloud.cnpg.io/v1](#barmancloudcnpgiov1)
+
+
+## barmancloud.cnpg.io/v1
+
+Package v1 contains API Schema definitions for the barmancloud v1 API group
+
+### Resource Types
+- [ObjectStore](#objectstore)
+
+
+
+#### InstanceSidecarConfiguration
+
+
+
+InstanceSidecarConfiguration defines the configuration for the sidecar that runs in the instance pods.
+
+
+
+_Appears in:_
+- [ObjectStoreSpec](#objectstorespec)
+
+| Field | Description | Required | Default | Validation |
+| --- | --- | --- | --- | --- |
+| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | The environment to be explicitly passed to the sidecar | | | |
+| `retentionPolicyIntervalSeconds` _integer_ | The retentionCheckInterval defines the frequency at which the system checks and enforces retention policies. | | 1800 | |
+| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Resources define cpu/memory requests and limits for the sidecar that runs in the instance pods. | | | |
+
+
+#### ObjectStore
+
+
+
+ObjectStore is the Schema for the objectstores API.
+
+
+
+
+
+| Field | Description | Required | Default | Validation |
+| --- | --- | --- | --- | --- |
+| `apiVersion` _string_ | `barmancloud.cnpg.io/v1` | True | | |
+| `kind` _string_ | `ObjectStore` | True | | |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | True | | |
+| `spec` _[ObjectStoreSpec](#objectstorespec)_ | Specification of the desired behavior of the ObjectStore. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | True | | |
+| `status` _[ObjectStoreStatus](#objectstorestatus)_ | Most recently observed status of the ObjectStore. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | | |
+
+
+#### ObjectStoreSpec
+
+
+
+ObjectStoreSpec defines the desired state of ObjectStore.
+
+
+
+_Appears in:_
+- [ObjectStore](#objectstore)
+
+| Field | Description | Required | Default | Validation |
+| --- | --- | --- | --- | --- |
+| `configuration` _[BarmanObjectStoreConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration)_ | The configuration for the barman-cloud tool suite | True | | |
+| `retentionPolicy` _string_ | RetentionPolicy is the retention policy to be used for backups and WALs (i.e. '60d'). The retention policy is expressed in the form of `XXu` where `XX` is a positive integer and `u` is in `[dwm]` - days, weeks, months. | | | Pattern: `^[1-9][0-9]*[dwm]$` |
+| `instanceSidecarConfiguration` _[InstanceSidecarConfiguration](#instancesidecarconfiguration)_ | The configuration for the sidecar that runs in the instance pods | | | |
+
+
+#### ObjectStoreStatus
+
+
+
+ObjectStoreStatus defines the observed state of ObjectStore.
+
+
+
+_Appears in:_
+- [ObjectStore](#objectstore)
+
+| Field | Description | Required | Default | Validation |
+| --- | --- | --- | --- | --- |
+| `serverRecoveryWindow` _object (keys:string, values:[RecoveryWindow](#recoverywindow))_ | ServerRecoveryWindow maps each server to its recovery window | True | | |
+
+
+#### RecoveryWindow
+
+
+
+RecoveryWindow represents the time span between the first
+recoverability point and the last successful backup of a PostgreSQL
+server, defining the period during which data can be restored.
+
+
+
+_Appears in:_
+- [ObjectStoreStatus](#objectstorestatus)
+
+| Field | Description | Required | Default | Validation |
+| --- | --- | --- | --- | --- |
+| `firstRecoverabilityPoint` _[Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#time-v1-meta)_ | The first recoverability point in a PostgreSQL server refers to the earliest point in time to which the database can be restored. | True | | |
+| `lastSuccussfulBackupTime` _[Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#time-v1-meta)_ | The last successful backup time | True | | |
+
+
diff --git a/web/versioned_docs/version-0.4.1/retention.md b/web/versioned_docs/version-0.4.1/retention.md
new file mode 100644
index 0000000..fefbd08
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/retention.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 60
+---
+
+# Retention Policies
+
+
+
+The Barman Cloud Plugin supports **automated cleanup of obsolete backups** via
+retention policies, configured in the `.spec.retentionPolicy` field of the
+`ObjectStore` resource.
+
+:::note
+This feature uses the `barman-cloud-backup-delete` command with the
+`--retention-policy "RECOVERY WINDOW OF {{ value }} {{ unit }}"` syntax.
+:::
+
+#### Example: 30-Day Retention Policy
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: my-store
+spec:
+ [...]
+ retentionPolicy: "30d"
+```
+
+:::note
+A **recovery window retention policy** ensures the cluster can be restored to
+any point in time between the calculated *Point of Recoverability* (PoR) and
+the latest WAL archive. The PoR is defined as `current time - recovery window`.
+The **first valid backup** is the most recent backup completed before the PoR.
+Backups older than that are marked as *obsolete* and deleted after the next
+backup completes.
+:::
+
diff --git a/web/versioned_docs/version-0.4.1/usage.md b/web/versioned_docs/version-0.4.1/usage.md
new file mode 100644
index 0000000..dcff072
--- /dev/null
+++ b/web/versioned_docs/version-0.4.1/usage.md
@@ -0,0 +1,258 @@
+---
+sidebar_position: 30
+---
+
+# Using the Barman Cloud Plugin
+
+
+
+After [installing the plugin](installation.mdx) in the same namespace as the
+CloudNativePG operator, enabling your PostgreSQL cluster to use the Barman
+Cloud Plugin involves just a few steps:
+
+- Defining the object store containing your WAL archive and base backups, using
+ your preferred [provider](object_stores.md)
+- Instructing the Postgres cluster to use the Barman Cloud Plugin
+
+From that moment, you’ll be able to issue on-demand backups or define a backup
+schedule, as well as rely on the object store for recovery operations.
+
+The rest of this page details each step, using MinIO as the object store
+provider.
+
+## Defining the `ObjectStore`
+
+An `ObjectStore` resource must be created for each object store used in your
+PostgreSQL architecture. Here's an example configuration using MinIO:
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: minio-store
+spec:
+ configuration:
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+ wal:
+ compression: gzip
+```
+
+The `.spec.configuration` schema follows the same format as the
+[in-tree barman-cloud support](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration).
+Refer to [the CloudNativePG documentation](https://cloudnative-pg.io/documentation/preview/backup_barmanobjectstore/)
+for additional details.
+
+:::important
+The `serverName` parameter in the `ObjectStore` resource is retained solely for
+API compatibility with the in-tree `barmanObjectStore` and must always be left empty.
+When needed, use the `serverName` plugin parameter in the Cluster configuration instead.
+:::
+
+## Configuring WAL Archiving
+
+Once the `ObjectStore` is defined, you can configure your PostgreSQL cluster
+to archive WALs by referencing the store in the `.spec.plugins` section:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ imagePullPolicy: Always
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ barmanObjectName: minio-store
+ storage:
+ size: 1Gi
+```
+
+This configuration enables both WAL archiving and data directory backups.
+
+## Performing a Base Backup
+
+Once WAL archiving is enabled, the cluster is ready for backups. To issue an
+on-demand backup, use the following configuration with the plugin method:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Backup
+metadata:
+ name: backup-example
+spec:
+ cluster:
+ name: cluster-example
+ method: plugin
+ pluginConfiguration:
+ name: barman-cloud.cloudnative-pg.io
+```
+
+:::note
+You can apply the same concept to the `ScheduledBackup` resource.
+:::
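+
+As a sketch, a `ScheduledBackup` using the plugin method could look as follows
+(the daily schedule is only an example; CloudNativePG uses a six-field,
+seconds-first cron format):
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-example-daily
+spec:
+  # Every day at midnight (seconds minutes hours day month weekday)
+  schedule: "0 0 0 * * *"
+  cluster:
+    name: cluster-example
+  method: plugin
+  pluginConfiguration:
+    name: barman-cloud.cloudnative-pg.io
+```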
+
+## Restoring a Cluster
+
+To restore a cluster from an object store, create a new `Cluster` resource that
+references the store containing the backup. Below is an example configuration:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ instances: 3
+ imagePullPolicy: IfNotPresent
+ bootstrap:
+ recovery:
+ source: source
+ externalClusters:
+ - name: source
+ plugin:
+ name: barman-cloud.cloudnative-pg.io
+ parameters:
+ barmanObjectName: minio-store
+ serverName: cluster-example
+ storage:
+ size: 1Gi
+```
+
+:::important
+The above configuration does **not** enable WAL archiving for the restored cluster.
+:::
+
+To enable WAL archiving for the restored cluster, include the `.spec.plugins`
+section alongside the `externalClusters.plugin` section, as shown below:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ instances: 3
+ imagePullPolicy: IfNotPresent
+ bootstrap:
+ recovery:
+ source: source
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ # Backup Object Store (push, read-write)
+ barmanObjectName: minio-store-bis
+ externalClusters:
+ - name: source
+ plugin:
+ name: barman-cloud.cloudnative-pg.io
+ parameters:
+ # Recovery Object Store (pull, read-only)
+ barmanObjectName: minio-store
+ serverName: cluster-example
+ storage:
+ size: 1Gi
+```
+
+The same object store may be used for both transaction log archiving and
+restoring a cluster, or you can configure separate stores for these purposes.
+
+## Configuring Replica Clusters
+
+You can set up a distributed topology by combining the previously defined
+configurations with the `.spec.replica` section. Below is an example of how to
+define a replica cluster:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+ name: cluster-dc-a
+spec:
+ instances: 3
+ primaryUpdateStrategy: unsupervised
+
+ storage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+
+ plugins:
+ - name: barman-cloud.cloudnative-pg.io
+ isWALArchiver: true
+ parameters:
+ barmanObjectName: minio-store-a
+
+ replica:
+ self: cluster-dc-a
+ primary: cluster-dc-a
+ source: cluster-dc-b
+
+ externalClusters:
+ - name: cluster-dc-a
+ plugin:
+ name: barman-cloud.cloudnative-pg.io
+ parameters:
+ barmanObjectName: minio-store-a
+
+ - name: cluster-dc-b
+ plugin:
+ name: barman-cloud.cloudnative-pg.io
+ parameters:
+ barmanObjectName: minio-store-b
+```
+
+## Configuring the plugin instance sidecar
+
+The Barman Cloud Plugin runs as a sidecar container next to each PostgreSQL
+instance pod. It manages backup, WAL archiving, and restore processes.
+
+Configuration comes from multiple `ObjectStore` resources:
+
+1. The one referenced in the
+ `.spec.plugins` section of the `Cluster`. This is the
+ object store used for WAL archiving and base backups.
+2. The one referenced in the external cluster
+ used in the `.spec.replica.source` section of the `Cluster`. This is
+ used by the log-shipping designated primary to get the WAL files.
+3. The one referenced in the
+ `.spec.bootstrap.recovery.source` section of the `Cluster`. Used by
+ the initial recovery job to create the cluster from an existing backup.
+
+You can fine-tune the sidecar's behavior through the
+`.spec.instanceSidecarConfiguration` field of your `ObjectStore`. These
+settings apply to all PostgreSQL instances that use this object store. Updates
+take effect at the next `Cluster` reconciliation and may trigger a rollout of
+the `Cluster`.
+
+```yaml
+apiVersion: barmancloud.cnpg.io/v1
+kind: ObjectStore
+metadata:
+ name: minio-store
+spec:
+ configuration:
+ # [...]
+ instanceSidecarConfiguration:
+ retentionPolicyIntervalSeconds: 1800
+ resources:
+ requests:
+ memory: "XXX"
+ cpu: "YYY"
+ limits:
+ memory: "XXX"
+ cpu: "YYY"
+```
+
+:::note
+If more than one `ObjectStore` applies, the `instanceSidecarConfiguration` of
+the one set in `.spec.plugins` has priority.
+:::
diff --git a/web/versioned_sidebars/version-0.4.1-sidebars.json b/web/versioned_sidebars/version-0.4.1-sidebars.json
new file mode 100644
index 0000000..1fd014a
--- /dev/null
+++ b/web/versioned_sidebars/version-0.4.1-sidebars.json
@@ -0,0 +1,8 @@
+{
+ "docs": [
+ {
+ "type": "autogenerated",
+ "dirName": "."
+ }
+ ]
+}
diff --git a/web/versions.json b/web/versions.json
index 4be2bd9..552da38 100644
--- a/web/versions.json
+++ b/web/versions.json
@@ -1,4 +1,5 @@
[
+ "0.4.1",
"0.4.0",
"0.3.0"
]