Compare commits

...

10 Commits

Author SHA1 Message Date
Armando Ruocco
6ab7764dfc
Merge 6a55a361a3 into 604fb9c430 2026-01-14 19:42:52 +01:00
Nils Vogels
604fb9c430
docs: Update documentation to use correct reference
Signed-off-by: Nils Vogels <n.vogels@aves-it.nl>
Signed-off-by: Marco Nenciarini <marco.nenciarini@enterprisedb.com>
2026-01-14 19:39:15 +01:00
renovate[bot]
fa546eae05
feat(deps): update barman-cloud to v3.17.0 (#702)
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Marco Nenciarini <marco.nenciarini@enterprisedb.com>
2026-01-14 13:39:57 +01:00
renovate[bot]
ad8a1767a7
chore(deps): update golangci/golangci-lint docker tag to v2.8.0 (#721)
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: Marco Nenciarini <marco.nenciarini@enterprisedb.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Marco Nenciarini <marco.nenciarini@enterprisedb.com>
2026-01-14 12:20:09 +01:00
renovate[bot]
e5eb03e181
chore(deps): update all sagikazarmark daggerverse dependencies to 5dcc7e4 (#728)
Some checks failed
Barman Base Image / build (push) Failing after 2s
Deploy Docusaurus to GitHub Pages / build (push) Failing after 2s
Deploy Docusaurus to GitHub Pages / deploy (push) Has been skipped
release-please / release-please (push) Failing after 2s
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-13 18:04:08 +01:00
renovate[bot]
e943923f8f
chore(deps): refresh pip-compile outputs (#704)
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-13 18:03:43 +01:00
renovate[bot]
4a637d7c58
fix(deps): update all non-major go dependencies (#719)
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-13 16:14:41 +01:00
renovate[bot]
24fbc01a33
chore(deps): lock file maintenance (#714)
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-13 16:14:04 +01:00
Marco Nenciarini
6a55a361a3
test: replace sleep-based test with deterministic channel verification
The cleanup routine test used time.Sleep() without actually verifying
the goroutine stopped. Added a done channel to provide deterministic
verification of goroutine termination.

Signed-off-by: Marco Nenciarini <marco.nenciarini@enterprisedb.com>
2025-12-23 17:06:00 +01:00
Armando Ruocco
62b579101f
fix: prevent memory leak by periodically cleaning up expired cache entries
Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
2025-12-23 17:06:00 +01:00
15 changed files with 903 additions and 521 deletions


@@ -19,9 +19,9 @@ tasks:
desc: Run golangci-lint
env:
# renovate: datasource=git-refs depName=golangci-lint lookupName=https://github.com/sagikazarmark/daggerverse currentValue=main
DAGGER_GOLANGCI_LINT_SHA: 6133ad18e131b891d4723b8e25d69f5de077b472
DAGGER_GOLANGCI_LINT_SHA: 5dcc7e4c4cd5ed230046955f42e27f2166545155
# renovate: datasource=docker depName=golangci/golangci-lint versioning=semver
GOLANGCI_LINT_VERSION: v2.7.2
GOLANGCI_LINT_VERSION: v2.8.0
cmds:
- >
GITHUB_REF= dagger -sc "github.com/sagikazarmark/daggerverse/golangci-lint@${DAGGER_GOLANGCI_LINT_SHA}
@@ -486,7 +486,7 @@ tasks:
IMAGE_VERSION: '{{regexReplaceAll "(\\d+)/merge" .GITHUB_REF_NAME "pr-${1}"}}'
env:
# renovate: datasource=git-refs depName=kustomize lookupName=https://github.com/sagikazarmark/daggerverse currentValue=main
DAGGER_KUSTOMIZE_SHA: 6133ad18e131b891d4723b8e25d69f5de077b472
DAGGER_KUSTOMIZE_SHA: 5dcc7e4c4cd5ed230046955f42e27f2166545155
cmds:
- >
dagger -s call -m https://github.com/sagikazarmark/daggerverse/kustomize@${DAGGER_KUSTOMIZE_SHA}
@@ -516,7 +516,7 @@ tasks:
- GITHUB_TOKEN
env:
# renovate: datasource=git-refs depName=gh lookupName=https://github.com/sagikazarmark/daggerverse
DAGGER_GH_SHA: 6133ad18e131b891d4723b8e25d69f5de077b472
DAGGER_GH_SHA: 5dcc7e4c4cd5ed230046955f42e27f2166545155
preconditions:
- sh: "[[ {{.GITHUB_REF}} =~ 'refs/tags/v.*' ]]"
msg: not a tag, failing


@@ -36,7 +36,7 @@ RUN --mount=type=cache,target=/go/pkg/mod --mount=type=cache,target=/root/.cache
# Use plugin-barman-cloud-base to get the dependencies.
# pip will build everything inside /usr, so we copy every file into a new
# destination that will then be copied into the distroless container
FROM ghcr.io/cloudnative-pg/plugin-barman-cloud-base:3.16.2-202512221525 AS pythonbuilder
FROM ghcr.io/cloudnative-pg/plugin-barman-cloud-base:3.17.0-202601131704 AS pythonbuilder
# Prepare a new /usr/ directory with the files we'll need in the final image
RUN mkdir /new-usr/ && \
cp -r --parents /usr/local/lib/ /usr/lib/*-linux-gnu/ /usr/local/bin/ \


@@ -4,9 +4,9 @@
#
# pip-compile --allow-unsafe --generate-hashes --output-file=sidecar-requirements.txt --strip-extras sidecar-requirements.in
#
azure-core==1.37.0 \
--hash=sha256:7064f2c11e4b97f340e8e8c6d923b822978be3016e46b7bc4aa4b337cfb48aee \
--hash=sha256:b3abe2c59e7d6bb18b38c275a5029ff80f98990e7c90a5e646249a56630fcc19
azure-core==1.38.0 \
--hash=sha256:8194d2682245a3e4e3151a667c686464c3786fed7918b394d035bdcd61bb5993 \
--hash=sha256:ab0c9b2cd71fecb1842d52c965c95285d3cfb38902f6766e4a471f1cd8905335
# via
# azure-identity
# azure-storage-blob
@@ -14,31 +14,27 @@ azure-identity==1.25.1 \
--hash=sha256:87ca8328883de6036443e1c37b40e8dc8fb74898240f61071e09d2e369361456 \
--hash=sha256:e9edd720af03dff020223cd269fa3a61e8f345ea75443858273bcb44844ab651
# via barman
azure-storage-blob==12.27.1 \
--hash=sha256:65d1e25a4628b7b6acd20ff7902d8da5b4fde8e46e19c8f6d213a3abc3ece272 \
--hash=sha256:a1596cc4daf5dac9be115fcb5db67245eae894cf40e4248243754261f7b674a6
azure-storage-blob==12.28.0 \
--hash=sha256:00fb1db28bf6a7b7ecaa48e3b1d5c83bfadacc5a678b77826081304bd87d6461 \
--hash=sha256:e7d98ea108258d29aa0efbfd591b2e2075fa1722a2fae8699f0b3c9de11eff41
# via barman
barman==3.17.0 \
--hash=sha256:07b033da14e72f103de44261c31bd0c3169bbb2e4de3481c6bb3510e9870d38e \
--hash=sha256:d6618990a6dbb31af3286d746a278a038534b7e3cc617c2b379ef7ebdeb7ed5a
# via -r sidecar-requirements.in
boto3==1.42.14 \
--hash=sha256:a5d005667b480c844ed3f814a59f199ce249d0f5669532a17d06200c0a93119c \
--hash=sha256:bfcc665227bb4432a235cb4adb47719438d6472e5ccbf7f09512046c3f749670
boto3==1.42.26 \
--hash=sha256:0fbcf1922e62d180f3644bc1139425821b38d93c1e6ec27409325d2ae86131aa \
--hash=sha256:f116cfbe7408e0a9153da363f134d2f1b5008f17ee86af104f0ce59a62be1833
# via barman
botocore==1.42.14 \
--hash=sha256:cf5bebb580803c6cfd9886902ca24834b42ecaa808da14fb8cd35ad523c9f621 \
--hash=sha256:efe89adfafa00101390ec2c371d453b3359d5f9690261bc3bd70131e0d453e8e
botocore==1.42.26 \
--hash=sha256:1c8855e3e811f015d930ccfe8751d4be295aae0562133d14b6f0b247cd6fd8d3 \
--hash=sha256:71171c2d09ac07739f4efce398b15a4a8bc8769c17fb3bc99625e43ed11ad8b7
# via
# boto3
# s3transfer
cachetools==6.2.4 \
--hash=sha256:69a7a52634fed8b8bf6e24a050fb60bff1c9bd8f6d24572b99c32d4e71e62a51 \
--hash=sha256:82c5c05585e70b6ba2d3ae09ea60b79548872185d2f24ae1f2709d37299fd607
# via google-auth
certifi==2025.11.12 \
--hash=sha256:97de8790030bbd5c2d96b7ec782fc2f7820ef8dba6db909ccf95449f2d062d4b \
--hash=sha256:d8ab5478f2ecd78af242878415affce761ca6bc54a22a27e026d7c25357c3316
certifi==2026.1.4 \
--hash=sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c \
--hash=sha256:ac726dd470482006e014ad384921ed6438c457018f4b3d204aea4281258b2120
# via requests
cffi==2.0.0 \
--hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
@@ -438,15 +434,15 @@ cryptography==46.0.3 \
# azure-storage-blob
# msal
# pyjwt
google-api-core==2.28.1 \
--hash=sha256:2b405df02d68e68ce0fbc138559e6036559e685159d148ae5861013dc201baf8 \
--hash=sha256:4021b0f8ceb77a6fb4de6fde4502cecab45062e66ff4f2895169e0b35bc9466c
google-api-core==2.29.0 \
--hash=sha256:84181be0f8e6b04006df75ddfe728f24489f0af57c96a529ff7cf45bc28797f7 \
--hash=sha256:d30bc60980daa36e314b5d5a3e5958b0200cb44ca8fa1be2b614e932b75a3ea9
# via
# google-cloud-core
# google-cloud-storage
google-auth==2.45.0 \
--hash=sha256:82344e86dc00410ef5382d99be677c6043d72e502b625aa4f4afa0bdacca0f36 \
--hash=sha256:90d3f41b6b72ea72dd9811e765699ee491ab24139f34ebf1ca2b9cc0c38708f3
google-auth==2.47.0 \
--hash=sha256:833229070a9dfee1a353ae9877dcd2dec069a8281a4e72e72f77d4a70ff945da \
--hash=sha256:c516d68336bfde7cf0da26aab674a36fedcf04b37ac4edd59c597178760c3498
# via
# google-api-core
# google-cloud-core
@@ -591,17 +587,17 @@ proto-plus==1.27.0 \
--hash=sha256:1baa7f81cf0f8acb8bc1f6d085008ba4171eaf669629d1b6d1673b21ed1c0a82 \
--hash=sha256:873af56dd0d7e91836aee871e5799e1c6f1bda86ac9a983e0bb9f0c266a568c4
# via google-api-core
protobuf==6.33.2 \
--hash=sha256:1f8017c48c07ec5859106533b682260ba3d7c5567b1ca1f24297ce03384d1b4f \
--hash=sha256:2981c58f582f44b6b13173e12bb8656711189c2a70250845f264b877f00b1913 \
--hash=sha256:56dc370c91fbb8ac85bc13582c9e373569668a290aa2e66a590c2a0d35ddb9e4 \
--hash=sha256:7109dcc38a680d033ffb8bf896727423528db9163be1b6a02d6a49606dcadbfe \
--hash=sha256:7636aad9bb01768870266de5dc009de2d1b936771b38a793f73cbbf279c91c5c \
--hash=sha256:87eb388bd2d0f78febd8f4c8779c79247b26a5befad525008e49a6955787ff3d \
--hash=sha256:8cd7640aee0b7828b6d03ae518b5b4806fdfc1afe8de82f79c3454f8aef29872 \
--hash=sha256:b5d3b5625192214066d99b2b605f5783483575656784de223f00a8d00754fc0e \
--hash=sha256:d9b19771ca75935b3a4422957bc518b0cecb978b31d1dd12037b088f6bcc0e43 \
--hash=sha256:fc2a0e8b05b180e5fc0dd1559fe8ebdae21a27e81ac77728fb6c42b12c7419b4
protobuf==6.33.4 \
--hash=sha256:0f12ddbf96912690c3582f9dffb55530ef32015ad8e678cd494312bd78314c4f \
--hash=sha256:1fe3730068fcf2e595816a6c34fe66eeedd37d51d0400b72fabc848811fdc1bc \
--hash=sha256:2fe67f6c014c84f655ee06f6f66213f9254b3a8b6bda6cda0ccd4232c73c06f0 \
--hash=sha256:3df850c2f8db9934de4cf8f9152f8dc2558f49f298f37f90c517e8e5c84c30e9 \
--hash=sha256:757c978f82e74d75cba88eddec479df9b99a42b31193313b75e492c06a51764e \
--hash=sha256:8f11ffae31ec67fc2554c2ef891dcb561dae9a2a3ed941f9e134c2db06657dbc \
--hash=sha256:918966612c8232fc6c24c78e1cd89784307f5814ad7506c308ee3cf86662850d \
--hash=sha256:955478a89559fa4568f5a81dce77260eabc5c686f9e8366219ebd30debf06aa6 \
--hash=sha256:c7c64f259c618f0bef7bee042075e390debbf9682334be2b67408ec7c1c09ee6 \
--hash=sha256:dc2e61bca3b10470c1912d166fe0af67bfc20eb55971dcef8dfa48ce14f0ed91
# via
# google-api-core
# googleapis-common-protos
@@ -672,9 +668,9 @@ typing-extensions==4.15.0 \
# azure-core
# azure-identity
# azure-storage-blob
urllib3==2.6.2 \
--hash=sha256:016f9c98bb7e98085cb2b4b17b87d2c702975664e4f060c6532e64d1c1a5e797 \
--hash=sha256:ec21cddfe7724fc7cb4ba4bea7aa8e2ef36f607a4bab81aa6ce42a13dc3f03dd
urllib3==2.6.3 \
--hash=sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed \
--hash=sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4
# via
# botocore
# requests

go.mod (4 changes)

@@ -12,8 +12,8 @@ require (
github.com/cloudnative-pg/cnpg-i v0.3.1
github.com/cloudnative-pg/cnpg-i-machinery v0.4.2
github.com/cloudnative-pg/machinery v0.3.3
github.com/onsi/ginkgo/v2 v2.27.3
github.com/onsi/gomega v1.38.3
github.com/onsi/ginkgo/v2 v2.27.5
github.com/onsi/gomega v1.39.0
github.com/spf13/cobra v1.10.2
github.com/spf13/viper v1.21.0
google.golang.org/grpc v1.78.0

go.sum (8 changes)

@@ -163,10 +163,10 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/ginkgo/v2 v2.27.3 h1:ICsZJ8JoYafeXFFlFAG75a7CxMsJHwgKwtO+82SE9L8=
github.com/onsi/ginkgo/v2 v2.27.3/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM=
github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4=
github.com/onsi/ginkgo/v2 v2.27.5 h1:ZeVgZMx2PDMdJm/+w5fE/OyG6ILo1Y3e+QX4zSR0zTE=
github.com/onsi/ginkgo/v2 v2.27.5/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.39.0 h1:y2ROC3hKFmQZJNFeGAMeHZKkjBL65mIZcvrLQBF9k6Q=
github.com/onsi/gomega v1.39.0/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=


@@ -36,6 +36,9 @@ import (
// DefaultTTLSeconds is the default TTL in seconds of cache entries
const DefaultTTLSeconds = 10
// DefaultCleanupIntervalSeconds is the default interval in seconds for cache cleanup
const DefaultCleanupIntervalSeconds = 30
type cachedEntry struct {
entry client.Object
fetchUnixTime int64
@@ -51,16 +54,28 @@ type ExtendedClient struct {
client.Client
cachedObjects []cachedEntry
mux *sync.Mutex
cleanupInterval time.Duration
cleanupDone chan struct{} // Signals when cleanup routine exits
}
// NewExtendedClient returns an extended client capable of caching secrets on the 'Get' operation
// NewExtendedClient returns an extended client capable of caching secrets on the 'Get' operation.
// It starts a background goroutine that periodically cleans up expired cache entries.
// The cleanup routine will stop when the provided context is cancelled.
func NewExtendedClient(
ctx context.Context,
baseClient client.Client,
) client.Client {
return &ExtendedClient{
ec := &ExtendedClient{
Client: baseClient,
mux: &sync.Mutex{},
cleanupInterval: DefaultCleanupIntervalSeconds * time.Second,
cleanupDone: make(chan struct{}),
}
// Start the background cleanup routine
go ec.startCleanupRoutine(ctx)
return ec
}
func (e *ExtendedClient) isObjectCached(obj client.Object) bool {
@@ -208,3 +223,55 @@ func (e *ExtendedClient) Patch(
return e.Client.Patch(ctx, obj, patch, opts...)
}
// startCleanupRoutine periodically removes expired entries from the cache.
// It runs until the context is cancelled.
func (e *ExtendedClient) startCleanupRoutine(ctx context.Context) {
defer close(e.cleanupDone)
contextLogger := log.FromContext(ctx).WithName("extended_client_cleanup")
ticker := time.NewTicker(e.cleanupInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
contextLogger.Debug("stopping cache cleanup routine")
return
case <-ticker.C:
// Check context before cleanup to avoid unnecessary work during shutdown
if ctx.Err() != nil {
return
}
e.cleanupExpiredEntries(ctx)
}
}
}
// cleanupExpiredEntries removes all expired entries from the cache.
func (e *ExtendedClient) cleanupExpiredEntries(ctx context.Context) {
contextLogger := log.FromContext(ctx).WithName("extended_client_cleanup")
e.mux.Lock()
defer e.mux.Unlock()
initialCount := len(e.cachedObjects)
if initialCount == 0 {
return
}
// Create a new slice with only non-expired entries
validEntries := make([]cachedEntry, 0, initialCount)
for _, entry := range e.cachedObjects {
if !entry.isExpired() {
validEntries = append(validEntries, entry)
}
}
removedCount := initialCount - len(validEntries)
if removedCount > 0 {
e.cachedObjects = validEntries
contextLogger.Debug("cleaned up expired cache entries",
"removedCount", removedCount,
"remainingCount", len(validEntries))
}
}


@@ -20,6 +20,7 @@ SPDX-License-Identifier: Apache-2.0
package client
import (
"context"
"time"
corev1 "k8s.io/api/core/v1"
@@ -59,6 +60,7 @@ var _ = Describe("ExtendedClient Get", func() {
extendedClient *ExtendedClient
secretInClient *corev1.Secret
objectStore *barmancloudv1.ObjectStore
cancelCtx context.CancelFunc
)
BeforeEach(func() {
@@ -79,7 +81,14 @@ baseClient := fake.NewClientBuilder().
baseClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(secretInClient, objectStore).Build()
extendedClient = NewExtendedClient(baseClient).(*ExtendedClient)
ctx, cancel := context.WithCancel(context.Background())
cancelCtx = cancel
extendedClient = NewExtendedClient(ctx, baseClient).(*ExtendedClient)
})
AfterEach(func() {
// Cancel the context to stop the cleanup routine
cancelCtx()
})
It("returns secret from cache if not expired", func(ctx SpecContext) {
@@ -164,3 +173,141 @@ var _ = Describe("ExtendedClient Get", func() {
Expect(objectStore.GetResourceVersion()).To(Equal("from cache"))
})
})
var _ = Describe("ExtendedClient Cache Cleanup", func() {
var (
extendedClient *ExtendedClient
cancelCtx context.CancelFunc
)
BeforeEach(func() {
baseClient := fake.NewClientBuilder().
WithScheme(scheme).
Build()
ctx, cancel := context.WithCancel(context.Background())
cancelCtx = cancel
extendedClient = NewExtendedClient(ctx, baseClient).(*ExtendedClient)
})
AfterEach(func() {
cancelCtx()
})
It("cleans up expired entries", func(ctx SpecContext) {
// Add some expired entries
expiredSecret1 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "expired-secret-1",
},
}
expiredSecret2 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "expired-secret-2",
},
}
validSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "valid-secret",
},
}
// Add expired entries (2 minutes ago)
addToCache(extendedClient, expiredSecret1, time.Now().Add(-2*time.Minute).Unix())
addToCache(extendedClient, expiredSecret2, time.Now().Add(-2*time.Minute).Unix())
// Add valid entry (just now)
addToCache(extendedClient, validSecret, time.Now().Unix())
Expect(extendedClient.cachedObjects).To(HaveLen(3))
// Trigger cleanup
extendedClient.cleanupExpiredEntries(ctx)
// Only the valid entry should remain
Expect(extendedClient.cachedObjects).To(HaveLen(1))
Expect(extendedClient.cachedObjects[0].entry.GetName()).To(Equal("valid-secret"))
})
It("does nothing when all entries are valid", func(ctx SpecContext) {
validSecret1 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "valid-secret-1",
},
}
validSecret2 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "valid-secret-2",
},
}
addToCache(extendedClient, validSecret1, time.Now().Unix())
addToCache(extendedClient, validSecret2, time.Now().Unix())
Expect(extendedClient.cachedObjects).To(HaveLen(2))
// Trigger cleanup
extendedClient.cleanupExpiredEntries(ctx)
// Both entries should remain
Expect(extendedClient.cachedObjects).To(HaveLen(2))
})
It("does nothing when cache is empty", func(ctx SpecContext) {
Expect(extendedClient.cachedObjects).To(BeEmpty())
// Trigger cleanup
extendedClient.cleanupExpiredEntries(ctx)
Expect(extendedClient.cachedObjects).To(BeEmpty())
})
It("removes all entries when all are expired", func(ctx SpecContext) {
expiredSecret1 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "expired-secret-1",
},
}
expiredSecret2 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "expired-secret-2",
},
}
addToCache(extendedClient, expiredSecret1, time.Now().Add(-2*time.Minute).Unix())
addToCache(extendedClient, expiredSecret2, time.Now().Add(-2*time.Minute).Unix())
Expect(extendedClient.cachedObjects).To(HaveLen(2))
// Trigger cleanup
extendedClient.cleanupExpiredEntries(ctx)
Expect(extendedClient.cachedObjects).To(BeEmpty())
})
It("stops cleanup routine when context is cancelled", func() {
// Create a new client with a short cleanup interval for testing
baseClient := fake.NewClientBuilder().
WithScheme(scheme).
Build()
ctx, cancel := context.WithCancel(context.Background())
ec := NewExtendedClient(ctx, baseClient).(*ExtendedClient)
ec.cleanupInterval = 10 * time.Millisecond
// Cancel the context immediately
cancel()
// Verify the cleanup routine actually stops by waiting for the done channel
select {
case <-ec.cleanupDone:
// Success: cleanup routine exited as expected
case <-time.After(1 * time.Second):
Fail("cleanup routine did not stop within timeout")
}
})
})


@@ -84,7 +84,7 @@ func Start(ctx context.Context) error {
return err
}
customCacheClient := extendedclient.NewExtendedClient(mgr.GetClient())
customCacheClient := extendedclient.NewExtendedClient(ctx, mgr.GetClient())
if err := mgr.Add(&CNPGI{
Client: customCacheClient,


@@ -353,30 +353,31 @@ func reconcilePodSpec(
sidecarTemplate corev1.Container,
config sidecarConfiguration,
) error {
envs := []corev1.EnvVar{
{
envs := make([]corev1.EnvVar, 0, 5+len(config.env))
envs = append(envs,
corev1.EnvVar{
Name: "NAMESPACE",
Value: cluster.Namespace,
},
{
corev1.EnvVar{
Name: "CLUSTER_NAME",
Value: cluster.Name,
},
{
corev1.EnvVar{
// TODO: should we really use this one?
// should we mount an emptyDir volume just for that?
Name: "SPOOL_DIRECTORY",
Value: "/controller/wal-restore-spool",
},
{
corev1.EnvVar{
Name: "CUSTOM_CNPG_GROUP",
Value: cluster.GetObjectKind().GroupVersionKind().Group,
},
{
corev1.EnvVar{
Name: "CUSTOM_CNPG_VERSION",
Value: cluster.GetObjectKind().GroupVersionKind().Version,
},
}
)
envs = append(envs, config.env...)
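The refactor above replaces a fixed composite literal with `make` plus an explicit capacity (`5+len(config.env)`), so the trailing `append` of the caller-supplied variables never forces a reallocation. A minimal standalone sketch of the idea, using illustrative types rather than the real sidecar code:

```go
package main

import "fmt"

type envVar struct{ Name, Value string }

// buildEnvs preallocates capacity for the fixed entries plus the
// caller-supplied ones, so the appends below never reallocate.
func buildEnvs(extra []envVar) []envVar {
	envs := make([]envVar, 0, 2+len(extra))
	envs = append(envs,
		envVar{Name: "NAMESPACE", Value: "default"},
		envVar{Name: "CLUSTER_NAME", Value: "pg"},
	)
	envs = append(envs, extra...)
	return envs
}

func main() {
	envs := buildEnvs([]envVar{{Name: "EXTRA", Value: "1"}})
	fmt.Println(len(envs), cap(envs)) // prints: 3 3
}
```

Because length never exceeds the requested capacity, the slice's backing array is allocated exactly once; with a composite literal, the later `append` of `extra` could copy the whole array.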


@@ -206,7 +206,7 @@ When a backup fails, follow these steps in order:
plugins:
- name: barman-cloud.cloudnative-pg.io
parameters:
barmanObjectStore: <your-objectstore-name>
barmanObjectName: <your-objectstore-name>
```
c. **Check plugin deployment is running**:


File diff suppressed because it is too large