From 82366b9ac8371d2bed7f4bce2b4ddc8798359677 Mon Sep 17 00:00:00 2001
From: Pierluigi Lenoci
Date: Wed, 6 Nov 2024 10:50:44 +0100
Subject: [PATCH] Added node-deletion-delay-timeout and
 node-deletion-batcher-interval to FAQ.md and as chart example

Signed-off-by: Pierluigi Lenoci
---
 charts/cluster-autoscaler/Chart.yaml  | 2 +-
 charts/cluster-autoscaler/values.yaml | 2 ++
 cluster-autoscaler/FAQ.md             | 2 ++
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/charts/cluster-autoscaler/Chart.yaml b/charts/cluster-autoscaler/Chart.yaml
index bc5aab925d0e..6a2836800349 100644
--- a/charts/cluster-autoscaler/Chart.yaml
+++ b/charts/cluster-autoscaler/Chart.yaml
@@ -11,4 +11,4 @@ name: cluster-autoscaler
 sources:
   - https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
 type: application
-version: 9.43.2
+version: 9.43.3
diff --git a/charts/cluster-autoscaler/values.yaml b/charts/cluster-autoscaler/values.yaml
index 6e10673ae855..14742c0ff49e 100644
--- a/charts/cluster-autoscaler/values.yaml
+++ b/charts/cluster-autoscaler/values.yaml
@@ -192,6 +192,8 @@ extraArgs:
   # scale-down-delay-after-delete: 0s
   # scale-down-delay-after-failure: 3m
   # scale-down-unneeded-time: 10m
+  # node-deletion-delay-timeout: 2m
+  # node-deletion-batcher-interval: 0s
   # skip-nodes-with-system-pods: true
   # balancing-ignore-label_1: first-label-to-ignore
   # balancing-ignore-label_2: second-label-to-ignore
diff --git a/cluster-autoscaler/FAQ.md b/cluster-autoscaler/FAQ.md
index cc0c6bfbb7b2..c806303591aa 100644
--- a/cluster-autoscaler/FAQ.md
+++ b/cluster-autoscaler/FAQ.md
@@ -944,6 +944,8 @@ The following startup parameters are supported for cluster autoscaler:
 | `scale-down-non-empty-candidates-count` | Maximum number of non empty nodes considered in one iteration as candidates for scale down with drain<br>Lower value means better CA responsiveness but possible slower scale down latency<br>Higher value can affect CA performance with big clusters (hundreds of nodes)<br>Set to non positive value to turn this heuristic off - CA will not limit the number of nodes it considers." | 30
 | `scale-down-candidates-pool-ratio` | A ratio of nodes that are considered as additional non empty candidates for<br>scale down when some candidates from previous iteration are no longer valid<br>Lower value means better CA responsiveness but possible slower scale down latency<br>Higher value can affect CA performance with big clusters (hundreds of nodes)<br>Set to 1.0 to turn this heuristics off - CA will take all nodes as additional candidates. | 0.1
 | `scale-down-candidates-pool-min-count` | Minimum number of nodes that are considered as additional non empty candidates<br>for scale down when some candidates from previous iteration are no longer valid.<br>When calculating the pool size for additional candidates we take<br>`max(#nodes * scale-down-candidates-pool-ratio, scale-down-candidates-pool-min-count)` | 50
+| `node-deletion-delay-timeout` | Maximum time CA waits for delay-deletion.cluster-autoscaler.kubernetes.io/ annotations to be removed from a node before deleting it. | 2 minutes
+| `node-deletion-batcher-interval` | How long CA's ScaleDown loop gathers nodes before deleting them in one batch. | 0 seconds
 | `scan-interval` | How often cluster is reevaluated for scale up or down | 10 seconds
 | `max-nodes-total` | Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number. | 0
 | `cores-total` | Minimum and maximum number of cores in cluster, in the format \<min\>:\<max\>. Cluster autoscaler will not scale the cluster beyond these numbers. | 320000
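
For reference, a sketch of how the newly documented options could be enabled in the chart's `values.yaml` once uncommented (the values shown are the defaults from the FAQ table, not a tuning recommendation):

```yaml
extraArgs:
  # Maximum time CA waits for delay-deletion.cluster-autoscaler.kubernetes.io/
  # annotations to be removed from a node before deleting it (default: 2m).
  node-deletion-delay-timeout: 2m
  # How long the scale-down batcher gathers nodes before deleting them in one
  # batch; 0s (the default) deletes nodes as soon as they are selected.
  node-deletion-batcher-interval: 0s
```

A node opts into the deletion delay by carrying an annotation with the `delay-deletion.cluster-autoscaler.kubernetes.io/` prefix; CA then waits up to `node-deletion-delay-timeout` for that annotation to be removed before deleting the node.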