mirror of https://github.com/kubernetes-sigs/descheduler.git synced 2026-01-26 05:14:13 +01:00

Compare commits


286 Commits

Author SHA1 Message Date
Kubernetes Prow Robot
70fa1c0b76 Merge pull request #989 from damemi/1.25-update-rolebinding
[release-1.25] Update helm chart rolebinding to use events.k8s.io
2022-10-17 14:53:18 -07:00
Mike Dame
cf7777e8c5 Update helm chart rolebinding to use events.k8s.io 2022-10-17 13:15:15 +00:00
Kubernetes Prow Robot
7ccb7ec675 Merge pull request #964 from damemi/0.25.1-helm-chart
Update Helm chart to v0.25.1
2022-09-27 08:01:51 -07:00
Mike Dame
ca9cd92557 Update Helm chart to v0.25.1 2022-09-27 14:36:55 +00:00
Kubernetes Prow Robot
425197466c Merge pull request #963 from damemi/0.25.1-updates
[release-1.25] Doc updates for v0.25.1
2022-09-27 07:15:51 -07:00
Mike Dame
ea4304f429 Prep doc updates for v0.25.1 2022-09-27 13:48:14 +00:00
Kubernetes Prow Robot
cda52e52fd Merge pull request #962 from knelasevero/release-1.25-issue-960
backport 7349b39 (issue 960) into release-1.25
2022-09-27 05:15:51 -07:00
Vlastimil Holer
3998a0246b includeSoftConstraints not being respected for TopologySpreadConstraint
Issue #960

Signed-off-by: Vlastimil Holer <vh@fortrabbit.com>
2022-09-27 13:50:10 +02:00
Kubernetes Prow Robot
59834cf8a7 Merge pull request #936 from pravarag/update-helm-chart-v1.25
Update helm chart version to v1.25.0
2022-09-14 09:03:00 -07:00
Kubernetes Prow Robot
82ed18fd2b Merge pull request #947 from eminaktas/metric-label-fix
feat: change DeschedulerVersion and GitVersion labels
2022-09-12 12:23:05 -07:00
eminaktas
2c17af79f4 feat: change DeschedulerVersion and GitVersion labels
This commit changes build_info metric labels
- AppVersion label will show major+minor version
  for example 0.24.1
  minor version numbers and commit hash

Signed-off-by: eminaktas <eminaktas34@gmail.com>
2022-09-12 21:36:40 +03:00
Kubernetes Prow Robot
72bf50fde6 Merge pull request #929 from knelasevero/ev-filter-plugin
Add new DefaultEvictor plugin with args
2022-09-12 09:21:24 -07:00
Lucas Severo Alves
f47c2c4407 add new preEvictionFilter plugin with args 2022-09-12 16:56:21 +02:00
Kubernetes Prow Robot
16619fcf44 Merge pull request #931 from a7i/amir/v1beta1
remove TODO comments for cronjob v1beta1 support
2022-09-12 05:07:25 -07:00
Kubernetes Prow Robot
0317be1b76 Merge pull request #935 from pravarag/update-docs-1.25-release
Update docs & manifests for v0.25.0
2022-09-08 06:49:45 -07:00
Kubernetes Prow Robot
d8bac08592 Merge pull request #945 from gallowaystorm/patch-1
feat: add RemovePodsHavingTooManyRestarts to values.yaml
2022-09-07 18:22:06 -07:00
Storm Galloway
d14df1fedf feat: add RemovePodsHavingTooManyRestarts to yaml
This does the following:
1. Enables RemovePodsHavingTooManyRestarts by default when using Helm (it is not enabled currently)
2. Adds RemovePodsHavingTooManyRestarts to the values.yaml for clearer configs
2022-09-07 14:42:35 -05:00
Kubernetes Prow Robot
8a769603a6 Merge pull request #928 from a7i/podlifetime-states-version-clarification
clarify which version PodLifeTime introduced states parameter and deprecated podStatusPhases
2022-09-07 09:40:37 -07:00
Kubernetes Prow Robot
137a6b999f Merge pull request #943 from gallowaystorm/patch-1
Add RemovePodsViolatingTopologySpreadConstraint to values.yaml
2022-09-06 19:58:37 -07:00
Storm Galloway
334b4bb12c Add RemovePodsViolatingTopologySpreadConstraint to values.yaml
- add RemovePodsViolatingTopologySpreadConstraint yaml to values.yaml to make chart config clearer
2022-09-06 12:47:06 -05:00
Kubernetes Prow Robot
5a2a180f17 Merge pull request #938 from a7i/remove-kubectl-dep
remove dependency on kubectl
2022-09-06 09:33:52 -07:00
Amir Alavi
1265b4c325 remove dependency on kubectl
Signed-off-by: Amir Alavi <amiralavi7@gmail.com>
2022-09-06 10:47:22 -04:00
Kubernetes Prow Robot
ea8e648cfb Merge pull request #933 from a7i/1.25-rc.0
Bump to k8s 1.25
2022-09-06 07:30:56 -07:00
Amir Alavi
e8fae9a3b7 remove pod security policy; additional policy/v1beta1 cleanup; use informers for descheduler unit tests
update go to 1.19 and helm kubernetes cluster to 1.25
bump -rc.0 to 1.25 GA
bump k8s utils library
bump golang-ci
use go 1.19 for helm github action
upgrade kubectl from 0.20 to 0.25

Signed-off-by: Amir Alavi <amiralavi7@gmail.com>
2022-09-04 10:30:40 -04:00
JaneLiuL
c9b0fbe467 Bump to k8s 1.25-rc.0 2022-09-03 09:57:56 -04:00
Pravar Agrawal
66694bb767 update helm chart version to v1.25.0 2022-09-03 00:00:16 +05:30
Pravar Agrawal
e68ceb2273 Update docs & manifests for v0.25.0 2022-09-02 23:51:21 +05:30
Amir Alavi
dcb81f65a9 remove TODO comments for cronjob v1beta1 support 2022-08-30 15:53:21 -04:00
Amir Alavi
face080485 clarify which version PodLifeTime introduced states parameter and deprecated podStatusPhases
Signed-off-by: Amir Alavi <amiralavi7@gmail.com>
2022-08-27 14:59:14 -04:00
Kubernetes Prow Robot
1eade5bf91 Merge pull request #922 from jklaw90/remove-plugin
removing dupe plugin interface check
2022-08-25 12:48:08 -07:00
Julian Lawrence
bfcd310a16 removing dupe plugin interface check 2022-08-24 16:12:24 -07:00
Kubernetes Prow Robot
70df89601a Merge pull request #910 from JaneLiuL/master
bring lownodeutilization and highnodeutilization to plugin
2022-08-17 05:41:13 -07:00
JaneLiuL
680e650706 bring lownodeutilization and highnodeutilization to plugin 2022-08-17 17:30:46 +08:00
Kubernetes Prow Robot
b743b2d5f7 Merge pull request #903 from knelasevero/migrate-podantiaffinity
Migrate RemovePodsViolatingInterPodAntiAffinity into a plugin
2022-08-16 07:59:08 -07:00
Kubernetes Prow Robot
cfc5d0c24a Merge pull request #916 from ingvagabund/skip-fitsRequest-for-current-node
NodeFit: do not check whether node fitsRequest when a pod is already assigned to the node
2022-08-16 07:31:08 -07:00
Kubernetes Prow Robot
ddd145c69a Merge pull request #898 from JaneLiuL/security-gh
add security scan into gh-actions
2022-08-16 05:47:08 -07:00
Jan Chaloupka
d99bdfffc8 NodeFit: do not check whether node fitsRequest when a pod is already assigned to the node 2022-08-16 13:38:11 +02:00
Lucas Severo Alves
a2dd86ac3b Migrate RemovePodsViolatingInterPodAntiAffinity into a plugin 2022-08-16 12:29:25 +02:00
JaneLiuL
50676622de add security scan into gh-actions 2022-08-16 09:36:46 +08:00
Kubernetes Prow Robot
fa3ddc6fee Merge pull request #908 from jklaw90/migrate-RemovePodsViolatingTopologySpreadConstraint
RemovePodsViolatingTopologySpreadConstraint Plugin
2022-08-15 14:25:55 -07:00
Julian Lawrence
674bf4655d migrate plugin - pods violating topologyspread
updated to remove older params
2022-08-15 08:23:04 -07:00
Kubernetes Prow Robot
6d4abe88ca Merge pull request #913 from a7i/migrate-PodLifeTime-to-plugin
Migrate PodLifeTime to plugin
2022-08-15 07:14:14 -07:00
Amir Alavi
d4ff3aef61 Migrate PodLifeTime to plugin 2022-08-15 08:54:42 -04:00
Kubernetes Prow Robot
83c4f5d526 Merge pull request #912 from ingvagabund/container-engine
CONTAINER_ENGINE to override the default docker engine
2022-08-11 18:46:43 -07:00
Jan Chaloupka
d1a9190c50 CONTAINER_ENGINE to override the default docker engine 2022-08-11 16:09:46 +02:00
Kubernetes Prow Robot
a1d4770634 Merge pull request #911 from knelasevero/local-ct-install
introduce ct for local helm install test
2022-08-11 06:10:58 -07:00
Lucas Severo Alves
ba85e794b2 introduce ct for local helm install test 2022-08-10 18:01:42 +02:00
Kubernetes Prow Robot
0a50d5a7da Merge pull request #892 from JaneLiuL/master
bring removeduplicates to plugin
2022-08-10 02:40:30 -07:00
Kubernetes Prow Robot
2de4e23425 Merge pull request #906 from a7i/node-affinity-use-existing-validation
NodeAffinity plugin to use the existing validation methods
2022-08-10 02:02:31 -07:00
JaneLiuL
3474725176 bring removeduplicates to plugin 2022-08-10 15:02:28 +08:00
Amir Alavi
27fa7a70a1 NodeAffinity plugin to use the existing validation methods 2022-08-09 13:38:33 -04:00
Kubernetes Prow Robot
ccfaeb2275 Merge pull request #902 from BinacsLee/binacs/migrate-removepodshavingtoomanyrestarts-to-plugin
Migrate RemovePodsHavingTooManyRestarts to plugin
2022-08-09 09:10:36 -07:00
BinacsLee
d798e7d204 Migrate RemovePodsHavingTooManyRestarts to plugin 2022-08-09 22:05:36 +08:00
Kubernetes Prow Robot
788e9f86bd Merge pull request #860 from knelasevero/migrate-node-afinity-to-plugin
Migrate RemovePodsViolatingNodeAffinity to plugin
2022-08-09 06:14:51 -07:00
Lucas Severo Alves
0c3bf7f957 Migrate RemovePodsViolatingNodeAffinity into a plugin 2022-08-09 14:05:51 +02:00
Kubernetes Prow Robot
349453264e Merge pull request #904 from knelasevero/add-helm-test-step-1
add helm ct install.
2022-08-09 03:54:51 -07:00
Lucas Severo Alves
e9c23fe42f add helm ct install. First step, see https://github.com/kubernetes-sigs/descheduler/pull/895#issuecomment-1203608848 2022-08-08 21:11:20 +02:00
Kubernetes Prow Robot
27ed7d15b9 Merge pull request #899 from a7i/separate-args-validations
separate args validation for better reuse
2022-08-08 08:46:19 -07:00
Amir Alavi
55d4ed479c separate args validation for better reuse 2022-08-05 10:46:04 -04:00
Kubernetes Prow Robot
d109ea64d0 Merge pull request #861 from a7i/migrate-RemoveFailedPods-to-plugin
Migrate RemoveFailedPods to plugin
2022-08-04 07:49:46 -07:00
Amir Alavi
330def2e56 Migrate RemoveFailedPods to plugin 2022-08-02 23:30:49 -04:00
Kubernetes Prow Robot
9880ed5372 Merge pull request #896 from ingvagabund/update-owners
Remove emeritus_approvers from reviewers
2022-08-02 07:31:45 -07:00
Jan Chaloupka
d4ecff5ba4 Remove emeritus_approvers from reviewers 2022-08-02 12:46:23 +02:00
Kubernetes Prow Robot
46e712163a Merge pull request #888 from knelasevero/fix-memory-leak-shutdown-broadcaster
fix: events memory leak. Use the new events implementation and take the recorder out of EvictPod
2022-08-01 05:14:28 -07:00
Kubernetes Prow Robot
0d1d485850 Merge pull request #894 from ingvagabund/remove-migrated-node-taint-strategy
Remove RemovePodsViolatingNodeTaints strategy already migrated into a plugin
2022-07-28 07:47:11 -07:00
Jan Chaloupka
1294106a22 Remove RemovePodsViolatingNodeTaints strategy already migrated into a plugin 2022-07-28 16:34:42 +02:00
Lucas Severo Alves
0aa233415e use new events implementation and take recorder out of EvictPod 2022-07-28 15:18:21 +02:00
Kubernetes Prow Robot
0d3ff8a84f Merge pull request #857 from ingvagabund/migrate-RemovePodsViolatingNodeTaints-to-plugin
Migrate RemovePodsViolatingNodeTaints to a plugin
2022-07-26 05:59:10 -07:00
Jan Chaloupka
704f6d4496 Migrate RemovePodsViolatingNodeTaints into a plugin 2022-07-21 20:52:24 +02:00
Kubernetes Prow Robot
c699dd1ccc Merge pull request #885 from damemi/evict-options
Add EvictOptions struct to EvictPod()
2022-07-20 10:08:35 -07:00
Mike Dame
d5e66ab62e Add EvictOptions struct to EvictPod() 2022-07-20 16:52:44 +00:00
Kubernetes Prow Robot
3a486f1a79 Merge pull request #882 from iijimakazuyuki/master
Add default lease resource name in Helm chart's ClusterRole
2022-07-11 07:15:49 -07:00
Kubernetes Prow Robot
6e69a10396 Merge pull request #846 from ingvagabund/evictor-interface
Prepare pod evictor for the descheduling framework plugin
2022-07-09 11:27:46 -07:00
Kubernetes Prow Robot
d78994fe6d Merge pull request #883 from a7i/approver-a7i
code approvers: add a7i
2022-07-08 08:23:46 -07:00
Amir Alavi
9ef87b9937 code approvers: add a7i 2022-07-08 09:09:42 -04:00
Kazuyuki Iijima
8b849106ed Add default lease resource name in ClusterRole
Signed-off-by: Kazuyuki Iijima <iijimakazuyuki@gmail.com>
2022-07-08 21:45:17 +09:00
Kubernetes Prow Robot
2ea0a2e1c0 Merge pull request #876 from iijimakazuyuki/master
Use lease resource name from Helm value
2022-07-08 04:43:47 -07:00
Kazuyuki Iijima
329c357834 Use lease resource name from Helm value
Signed-off-by: Kazuyuki Iijima <iijimakazuyuki@gmail.com>
2022-07-08 00:25:39 +09:00
Kubernetes Prow Robot
8072a8c82e Merge pull request #871 from knelasevero/fix-chart-path
fix: chart path can't be relative
2022-07-07 07:11:34 -07:00
Kubernetes Prow Robot
7a7393f5ff Merge pull request #872 from JaneLiuL/master
fix log-file and log-dir issue
2022-07-07 06:57:35 -07:00
Lucas Severo Alves
df65157a3b disable vcs maintainer check 2022-07-06 20:09:07 +02:00
JaneLiuL
754f8c9def fix log-file and log-dir issue 2022-07-06 15:50:43 +08:00
Lucas Severo Alves
d75e9e8c4e fix comment space lint issue 2022-07-04 18:26:28 +02:00
Lucas Severo Alves
2cd79c6816 chart path can't be relative 2022-07-04 18:19:09 +02:00
Kubernetes Prow Robot
aff9a0ba06 Merge pull request #836 from a7i/balancedomains-belowavg
TopologySpreadConstraint: only evaluate nodes below ideal avg when balancing domains
2022-07-03 18:57:22 -07:00
Kubernetes Prow Robot
e1a10c36de Merge pull request #854 from knelasevero/improve-helm-setup
Improving helm setup
2022-07-01 11:03:23 -07:00
Kubernetes Prow Robot
d8897635b0 Merge pull request #834 from a7i/podlifetime-container-state
PodLifeTime: support container states PodInitializing and ContainerCreating
2022-06-28 07:59:58 -07:00
Amir Alavi
abf5752260 PodLifeTime: add States field and deprecate PodStatusPhases 2022-06-25 15:13:18 -04:00
Amir Alavi
934fffb669 RemovePodsViolatingTopologySpreadConstraint: test case to cover tainted nodes and eviction loop 2022-06-20 20:53:30 -04:00
Amir Alavi
7a5e67d462 topologyspreadconstraint_test: ensure specific pods were evicted 2022-06-20 19:21:58 -04:00
Amir Alavi
469bde0a01 TopologySpreadConstraint: only evaluate nodes below ideal avg when balancing domains 2022-06-20 18:42:34 -04:00
Jan Chaloupka
c838614b6c EvictPod: stop returning an error
When an error is returned, a strategy either stops completely or starts
processing another node. Given that the error can be transient, or that
only one of the limits may have been exceeded, it is fair to just skip a
pod that failed eviction and proceed to the next one instead.

In order to optimize the processing and stop earlier, it is more
practical to implement a check which reports when a limit has been
exceeded.
2022-06-17 10:12:57 +02:00
Jan Chaloupka
cc49f9fcc2 Drop node parameter from EvictPod
The method uses the node object to only get the node name.
The node name can be retrieved from the pod object.

Some strategies might try to evict a pod in Pending state, which
does not have the .spec.nodeName field set; in that case the test
for the node limit is skipped.
2022-06-17 10:10:25 +02:00
Amir Alavi
4e710cdf3b PodLifeTime: support container states PodInitializing and ContainerCreating 2022-06-16 21:17:49 -04:00
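For context on the change above, a PodLifeTime configuration using the new container-state filter might look like the following minimal sketch (v1alpha1 policy format; the lifetime value is only an illustration):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400   # illustrative value: one day
        states:
        - "PodInitializing"
        - "ContainerCreating"
```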
Jan Chaloupka
d5ee855221 Pass the strategy name into evictor through context 2022-06-16 16:32:13 +02:00
Kubernetes Prow Robot
b2418ef481 Merge pull request #847 from ingvagabund/split-pod-evictor-and-evictor-filter
Split pod evictor and evictor filter
2022-06-16 07:22:49 -07:00
Kubernetes Prow Robot
1f1aad335a Merge pull request #856 from a7i/pod-sort-age-random
remove random creation timestamp from pod sort by age test
2022-06-16 01:18:49 -07:00
Kubernetes Prow Robot
627d219dab Merge pull request #852 from knelasevero/existing-contraints
Check existing constraints before assigning
2022-06-15 06:07:17 -07:00
Lucas Severo Alves
30c972e49e change namespaceTopologySpreadConstraints from map to slice 2022-06-15 14:54:56 +02:00
Amir Alavi
a7cfb25e9b remove random creation timestamp from pod sort by age test 2022-06-14 23:28:58 -04:00
Lucas Severo Alves
45e1cdbd01 WIP: improving helm setup 2022-06-14 17:08:00 +02:00
Lucas Severo Alves
dad3db3187 Check existing constraints before assigning 2022-06-14 10:43:59 +02:00
Jan Chaloupka
d2130747d8 Split pod evictor and evictor filter 2022-06-13 18:48:05 +02:00
Kubernetes Prow Robot
84c8d1ca03 Merge pull request #843 from damemi/docs-readme-1.24.1
Update docs, manifests for 0.24.1 on master
2022-06-08 06:58:24 -07:00
Mike Dame
5dfd54e500 Docs and readme updates 2022-06-08 13:35:22 +00:00
Kubernetes Prow Robot
7550fba2fa Merge pull request #840 from a7i/nodefit-docs
nodeFit: fix docs spacing for placement of the field
2022-06-07 12:28:27 -07:00
Amir Alavi
038b6e1ca7 nodeFit: fix docs spacing for placement of the field 2022-06-07 11:59:30 -04:00
Kubernetes Prow Robot
98a946dea7 Merge pull request #833 from a7i/podlifetime-sort-age
PodLifeTime: sort pods by creation timestamp
2022-06-07 01:23:49 -07:00
Amir Alavi
871a10344e e2e: use kubernetes utils pointer library 2022-06-06 22:05:32 -04:00
Amir Alavi
311d75223f PodLifeTime: sort pods by creation timestamp 2022-06-06 21:49:24 -04:00
Kubernetes Prow Robot
33807ed8e4 Merge pull request #830 from a7i/revert-805-cleanup/lownodeutilization
Revert "cleanup lownodeutilization code"
2022-06-01 08:51:03 -07:00
Amir Alavi
3cc0a68f13 lownodeutilization: clarify comments and variable naming for underutilized vs. overutilized 2022-06-01 11:39:38 -04:00
Amir Alavi
8e1d35cb3c Revert "cleanup lownodeutilization code" 2022-06-01 11:28:09 -04:00
Kubernetes Prow Robot
59c4904ddc Merge pull request #805 from xiaoanyunfei/cleanup/lownodeutilization
cleanup lownodeutilization code
2022-06-01 07:45:02 -07:00
Kubernetes Prow Robot
c5604c760d Merge pull request #825 from damemi/cloudbuild-timeout
Increase cloudbuild timeout to 25 minutes
2022-06-01 06:47:02 -07:00
Mike Dame
f769296243 Increase cloudbuild timeout to 25 minutes 2022-05-31 18:51:53 +00:00
Kubernetes Prow Robot
8972bd9bf0 Merge pull request #823 from damemi/fix-version-cmd
Fix version command to parse helm chart tags
2022-05-31 11:14:53 -07:00
Kubernetes Prow Robot
873381197b Merge pull request #821 from damemi/test-version-updates
Update helm tests util versions and release guide
2022-05-31 10:44:52 -07:00
Mike Dame
af45591c25 Fix version command to parse helm chart tags 2022-05-31 17:39:08 +00:00
Mike Dame
17e986418f Update helm tests util versions and release guide 2022-05-31 17:05:14 +00:00
Kubernetes Prow Robot
5a9e65833f Merge pull request #818 from damemi/release-guide-updates
Update release guide docs
2022-05-26 08:21:24 -07:00
Mike Dame
725ca47bda Update release guide docs 2022-05-25 15:36:58 +00:00
Kubernetes Prow Robot
f39058af1c Merge pull request #813 from stephan2012/bugfix/leader-election-chart-812
Arguments must be strings, not bool or number
2022-05-25 07:12:44 -07:00
Kubernetes Prow Robot
332d61dba8 Merge pull request #814 from stephan2012/bugfix/missing-keys-803
Add podAnnotations and podLabels to values and docs
2022-05-24 09:38:06 -07:00
Stephan Austermühle
3cbae5e72b Fix type error for the leader election
Also adds the missing update verb in the ClusterRole and adds the required
time units to leaseDuration, renewDeadline, and retryPeriod in the Chart
example.
2022-05-24 18:11:18 +02:00
Stephan Austermühle
d8a609a6e7 Add more precise description 2022-05-24 18:07:58 +02:00
Stephan Austermühle
f0fa4c0cc0 Add podAnnotations and podLabels to values and docs 2022-05-24 10:02:16 +02:00
Kubernetes Prow Robot
e61823c299 Merge pull request #809 from damemi/CVE-2022-27191
bump: golang.org/x/crypto
2022-05-23 21:39:10 -07:00
Mike Dame
14b83e6cc5 bump: golang.org/x/crypto 2022-05-23 21:17:27 +00:00
sunxiaofei
5e3b825427 cleanup lownodeutilization code 2022-05-23 17:20:35 +08:00
Kubernetes Prow Robot
15794ba00d Merge pull request #801 from KohlsTechnology/bump-go-1.18
Bump To Go 1.18.2
2022-05-18 23:54:07 -07:00
Sean Malloy
e494a5817e Bump To Go 1.18.2
The main k/k repo was updated to Go 1.18.2 for the
k8s v1.24.0 release. See below PR for reference.

https://github.com/kubernetes/kubernetes/pull/110044
2022-05-18 09:36:32 -05:00
Kubernetes Prow Robot
eb0be65687 Merge pull request #796 from JaneLiuL/master
Update helm chart version to v0.24
2022-05-16 15:06:19 -07:00
JaneLiuL
64786460cd Update helm chart version to v0.24 2022-05-13 08:27:15 +08:00
Kubernetes Prow Robot
9c110c4004 Merge pull request #791 from JaneLiuL/master
Bump to k8s 1.24.0
2022-05-12 12:06:33 -07:00
Kubernetes Prow Robot
0eddf7f108 Merge pull request #792 from pravarag/update-docs-1.24
Update Docs and Manifests for v0.24.0
2022-05-12 11:31:15 -07:00
Kubernetes Prow Robot
3c8d6c4d53 Merge pull request #795 from damemi/update-e2e
Update e2e test versions
2022-05-12 09:29:14 -07:00
Mike Dame
6e84d0a6ba React to removal of offensive language
https://github.com/kubernetes/kubeadm/issues/2200 went into effect in 1.24, so
e2es broke without the update.
2022-05-12 15:35:07 +00:00
Mike Dame
fb1df468ad golint fix 2022-05-12 14:21:34 +00:00
Mike Dame
ac4d576df8 Update e2e test versions 2022-05-12 14:16:53 +00:00
Pravar Agrawal
314ad65b04 Update docs and manifests for v0.24.0 2022-05-04 22:08:49 +05:30
JaneLiuL
969a618933 Bump to k8s 1.24.0 2022-05-04 10:17:47 +08:00
Kubernetes Prow Robot
028f205e8c Merge pull request #790 from ingvagabund/636
Added request considerations to NodeFit Feature [#636 follow up]
2022-05-03 19:09:16 -07:00
Jan Chaloupka
3eca2782d4 Addressing review comments
Both LowNode and HighNode utilization strategies evict only as many pods
as there are free resources on other nodes. Thus, the resource fit test
is always true by definition.
2022-04-28 18:54:54 +02:00
RyanDevlin
16eb9063b6 NodeFit parameter now considers pod requests 2022-04-28 10:16:52 +02:00
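As a rough illustration of the NodeFit behavior referenced above, it is enabled per strategy via the `nodeFit` parameter; a minimal sketch in the v1alpha1 policy format (the strategy chosen and the values are only examples):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeFit: true   # with the change above, candidate nodes are also checked against the pod's resource requests
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
```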
Kubernetes Prow Robot
eac3b4b54a Merge pull request #788 from ryan4yin/master
fix: incorrect yaml indentation in readme
2022-04-26 06:46:53 -07:00
Ryan Yin
d08cea731a fix: incorrect indentation 2022-04-26 06:05:12 +08:00
Kubernetes Prow Robot
0fc5ba9316 Merge pull request #787 from JaneLiuL/master
bump to k8s 1.24-rc.0
2022-04-25 12:05:43 -07:00
JaneLiuL
ecbd10afe2 bump to k8s 1.24-rc.0 2022-04-21 09:11:04 +08:00
Kubernetes Prow Robot
e5ed0540f2 Merge pull request #779 from pravarag/user-docs-typo
Fix missing param in user-guide for PodLifeTime strategy
2022-04-11 01:44:06 -07:00
Pravar Agrawal
4e972a7602 fix missing param in user-guide 2022-04-07 10:02:26 +05:30
Kubernetes Prow Robot
ae20b5b034 Merge pull request #732 from eminaktas/feature/metric-scape
feat: Add metric scrape configs in Helm Chart
2022-03-30 07:06:27 -07:00
Kubernetes Prow Robot
406e3ed5b3 Merge pull request #771 from dineshbhor/fix-highnodeutilization-node-sorting
Sort nodes in ascending order for HighNodeUtilization
2022-03-29 02:58:47 -07:00
dineshbhor
7589aaf00b Sort nodes in ascending order for HighNodeUtilization 2022-03-29 17:54:18 +09:00
eminaktas
ca90b53913 feat: Add metric scrape configs in Helm Chart
Signed-off-by: eminaktas <emin.aktas@trendyol.com>
2022-03-28 23:41:56 +03:00
Kubernetes Prow Robot
238eebeaca Merge pull request #722 from Dentrax/feature/leaderelection
feat(leaderelection): impl leader election for HA Deployment
2022-03-28 09:39:23 -07:00
Kubernetes Prow Robot
cf59d08193 Merge pull request #751 from HelmutLety/redo_#473
feat: Add DeviationThreshold parameter for LowNodeUtilization (previous attempt: #473)
2022-03-28 03:53:24 -07:00
HelmutLety
2ea65e69dc feat(LowNodeUtilization): useDeviationThresholds, redo of #473
[751]: normalize Percentage in nodeutilization and clean the tests
2022-03-28 12:35:01 +02:00
Kubernetes Prow Robot
7f6a2a69b0 Merge pull request #777 from JacobHenner/support-taint-exclusions
Add RemovePodsViolatingNodeTaints taint exclusion
2022-03-28 02:47:23 -07:00
Jacob Henner
ac3362149b Add RemovePodsViolatingNodeTaints taint exclusion
Add taint exclusion to RemovePodsViolatingNodeTaints. This permits node
taints to be ignored by allowing users to specify ignored taint keys or
ignored taint key=value pairs.
2022-03-27 13:48:40 -04:00
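A minimal sketch of the excludedTaints parameter described above, in the v1alpha1 policy format (the taint keys and values are only examples):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
    params:
      excludedTaints:
      - dedicated=special-user   # excludes taints with key "dedicated" and value "special-user"
      - reserved                 # excludes all taints with key "reserved", regardless of value
```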
Furkan
0a52af9ab8 feat(leaderelection): impl leader election
Signed-off-by: Furkan <furkan.turkal@trendyol.com>
Signed-off-by: eminaktas <eminaktas34@gmail.com>
Co-authored-by: Emin <emin.aktas@trendyol.com>
Co-authored-by: Yasin <yasintaha.erol@trendyol.com>
2022-03-25 14:33:14 +03:00
Kubernetes Prow Robot
07bbdc61c4 Merge pull request #762 from ingvagabund/nodeutilization-refactor
Promote NodeUsage to NodeInfo, evaluate thresholds separately
2022-03-15 17:33:48 -07:00
Kubernetes Prow Robot
17595fdcfc Merge pull request #764 from ingvagabund/taints-prefer-no-scheduler
RemovePodsViolatingNodeTaints: optionally include PreferNoSchedule taint
2022-03-14 17:36:10 -07:00
Jan Chaloupka
285523f0d9 RemovePodsViolatingNodeTaints: optionally include PreferNoSchedule taint 2022-03-14 16:46:03 +01:00
Kubernetes Prow Robot
c55a897599 Merge pull request #759 from JaneLiuL/master
OWNERS: add janeliul as a reviewer
2022-03-11 10:29:07 -08:00
Jan Chaloupka
52ff50f2d1 Promote NodeUsage to NodeInfo, evaluate thresholds separately 2022-03-11 13:52:37 +01:00
Jan Chaloupka
8ebf3fb323 nodeutilization: move node resource threshold value computation under a separate function 2022-03-11 12:46:11 +01:00
Kubernetes Prow Robot
0e0ae8df90 Merge pull request #761 from ingvagabund/TestTooManyRestarts-II
[e2e] TestTooManyRestarts: check if container status is set before accessing
2022-03-11 02:29:06 -08:00
Jan Chaloupka
bd3daa82d3 [e2e] TestTooManyRestarts: check if container status is set before accessing 2022-03-11 10:35:49 +01:00
Kubernetes Prow Robot
60a15f0392 Merge pull request #760 from ingvagabund/TestTooManyRestarts
[e2e] TestTooManyRestarts: check err and len before accessing pod items
2022-03-11 01:09:07 -08:00
Jan Chaloupka
d98cb84568 [e2e] TestTooManyRestarts: check err and len before accessing pod items 2022-03-11 09:45:05 +01:00
Kubernetes Prow Robot
6ab01eca63 Merge pull request #758 from hiroyaonoe/add-doc-about-max-no-of-pods-to-evict-per-namespace-policy
Update docs for maxNoOfPodsToEvictPerNamespace
2022-03-10 11:25:21 -08:00
Kubernetes Prow Robot
584ac2d604 Merge pull request #757 from prune998/prune/taint-logs
add conflicting taint to the logs
2022-03-10 05:37:35 -08:00
prune
448dc4784c add conflicting taint to the logs
log when counts mismatch

simplified logic to log blocking taints
2022-03-10 08:05:42 -05:00
JaneLiuL
3ca77e7a3d OWNERS: add janeliul as a reviewer 2022-03-08 07:48:11 +08:00
Hiroya Onoe
01e7015b97 Update docs for maxNoOfPodsToEvictPerNamespace 2022-03-07 16:21:04 +09:00
Kubernetes Prow Robot
fd5a8c7d78 Merge pull request #739 from JaneLiuL/master
Share links to all descheduler enhancement proposals in the project repo
2022-03-02 09:55:14 -08:00
Kubernetes Prow Robot
43148ecd0c Merge pull request #740 from JaneLiuL/doc-npd
fix doc about NPD description
2022-03-01 09:59:55 -08:00
Kubernetes Prow Robot
16501978dc Merge pull request #748 from damemi/update-v0.23.1
Update manifests and doc for v0.23.1
2022-03-01 07:47:46 -08:00
Mike Dame
1b4e48b006 Update manifests and doc for v0.23.1 2022-02-28 19:06:50 +00:00
Kubernetes Prow Robot
da6a3e063f Merge pull request #744 from antonio-te/master
Update golang image
2022-02-28 10:41:46 -08:00
Antonio Gurgel
5784c0cc04 Update golang image
1.17.3 is affected by CVE-2021-44716.
2022-02-28 07:22:26 -08:00
JaneLiuL
254a3a9ec1 Share links to all descheduler enhancement proposals in the project repository 2022-02-26 12:27:35 +08:00
JaneLiuL
328c695141 fix doc about NPD description 2022-02-26 12:23:33 +08:00
Kubernetes Prow Robot
3ab0268c5a Merge pull request #733 from JaneLiuL/master
remove MostRequestedPriority from doc since already deprecated
2022-02-24 04:32:32 -08:00
Jane Liu L
cd8dbdd1e2 remove MostRequestedPriority from doc since already deprecated 2022-02-24 09:00:36 +08:00
Kubernetes Prow Robot
54c50c5390 Merge pull request #731 from jklaw90/fix-ctx-cron
Bugfix: Cronjob ctx cancel
2022-02-22 11:35:18 -08:00
Julian Lawrence
a2cbc25397 updated to handle cronjob flow 2022-02-22 08:52:06 -08:00
Kubernetes Prow Robot
bd81f6436e Merge pull request #708 from damemi/utilization-values-readme
Clarify resource calculations in NodeUtilization strategy Readmes
2022-02-22 04:47:46 -08:00
Kubernetes Prow Robot
30be19b04e Merge pull request #715 from eminaktas/values-fix
fix: Remove deprecated parameters from cmdOptions and add the parameters under policy
2022-02-18 05:08:23 -08:00
Kubernetes Prow Robot
3c251fb09d Merge pull request #726 from jklaw90/log-eviction-node
Eviction Logs
2022-02-15 04:08:03 -08:00
Julian Lawrence
224e2b078f updated logs to help with debugging 2022-02-14 18:27:53 -08:00
Kubernetes Prow Robot
dd80d60f4f Merge pull request #716 from eminaktas/imagepullsecret
fix: add imagePullSecrets for deployment resource
2022-02-14 05:27:29 -08:00
Kubernetes Prow Robot
e88837a349 Merge pull request #704 from ingvagabund/update-chart-readme
Update charts README to reflect the new parameters
2022-02-11 14:23:46 -08:00
Kubernetes Prow Robot
5901f8af1b Merge pull request #697 from a7i/code-reviewer
OWNERS: add a7i as a reviewer
2022-02-11 08:14:23 -08:00
Kubernetes Prow Robot
0d1704a192 Merge pull request #717 from JaneLiuL/release-1.23.1
[release-1.23.1] Update helm chart version to v0.23.1
2022-02-08 04:34:54 -08:00
JaneLiuL
c5878b18c6 Update helm chart version to v0.23.1 2022-02-08 20:21:57 +08:00
emin.aktas
ff1954b32e fix: add imagePullSecrets for deployment resource
Signed-off-by: emin.aktas <eminaktas34@gmail.com>
Co-authored-by: yasintahaerol <yasintahaerol@gmail.com>
Co-authored-by: Dentrax <furkan.turkal@trendyol.com>
2022-02-07 18:05:18 +03:00
emin.aktas
4c8040bbaf fix: Remove deprecated parameters from cmdOptions and add the parameters under policy 2022-02-07 15:14:55 +03:00
Kubernetes Prow Robot
deaa314492 Merge pull request #712 from JaneLiuL/helm
fix helmchart fail to watch namespace issue
2022-02-06 10:36:51 -08:00
Jane Liu L
9c653a2274 fix helmchart fail to watch namespace issue 2022-02-04 18:34:21 +08:00
Kubernetes Prow Robot
8d37557743 Merge pull request #709 from damemi/update-helm-23
Update helm chart version to v0.23
2022-02-03 12:10:58 -08:00
Mike Dame
5081ad84b5 Update helm chart version to v0.23 2022-02-03 14:57:18 -05:00
Mike Dame
c51c066cd1 Clarify resource calculations in NodeUtilization strategy Readmes
This adds text explaining the resource calculation in LowNodeUtilization and HighNodeUtilization
2022-01-30 12:59:47 -05:00
Kubernetes Prow Robot
afb1d75ce1 Merge pull request #660 from martin-magakian/features/add_affinity_option
Adding 'affinity' support to run 'descheduler' in CronJob or Deployment
2022-01-27 05:56:27 -08:00
Jan Chaloupka
90e6174fdd Update charts README to reflect the new parameters 2022-01-27 14:46:15 +01:00
Kubernetes Prow Robot
8e3ef9a6b3 Merge pull request #694 from sharkannon/master
Updates to include annotations to the service account
2022-01-27 05:42:26 -08:00
Kubernetes Prow Robot
778a18c550 Merge pull request #700 from jklaw90/root-ctx
Use the root context cancellation
2022-01-27 05:08:25 -08:00
Julian Lawrence
1a98a566b3 adding cancellation on SIGINT/SIGTERM 2022-01-25 00:10:09 -08:00
Kubernetes Prow Robot
a643c619c9 Merge pull request #699 from ingvagabund/evict-pods-report-metrics-indendent-of-the-dry-mode
Evictor: report successful eviction independently of the dry-mode
2022-01-20 14:16:29 -08:00
Jan Chaloupka
203388ff1a Evictor: report successful eviction independently of the dry-mode
Dry mode currently does not report metrics when the eviction succeeds
2022-01-20 21:23:19 +01:00
Kubernetes Prow Robot
2844f80a35 Merge pull request #677 from ingvagabund/accumulated-eviction
Use a fake client when evicting pods by individual strategies to accumulate the evictions
2022-01-20 08:15:52 -08:00
Jan Chaloupka
901a16ecbc Do not collect the metrics when the metrics server is not enabled 2022-01-20 17:04:15 +01:00
Jan Chaloupka
e0f086ff85 Use a fake client when evicting pods by individual strategies to accumulate the evictions
Currently, when the descheduler is running with --dry-run on, no strategy actually
evicts a pod, so every strategy always starts with a complete list of
pods. E.g. when the PodLifeTime strategy evicts a few pods, the RemoveDuplicatePods
strategy still takes into account even the pods eliminated by the PodLifeTime
strategy. This does not correspond to real-world scenarios, as the
same pod can be evicted multiple times. Instead, use a fake client and
evict/delete the pods from its cache so the strategies evict each pod
at most once, as would normally be done in a real cluster.
2022-01-20 17:04:05 +01:00
Amir Alavi
0251935268 OWNERS: add a7i as a reviewer 2022-01-18 09:14:44 -05:00
Stephen Herd
8752a28025 Merge branch 'kubernetes-sigs-master' 2022-01-13 12:52:36 -08:00
Stephen Herd
24884c7568 Rebase from master 2022-01-13 12:52:06 -08:00
Kubernetes Prow Robot
175f648045 Merge pull request #695 from a7i/liveness-template
make livenessprobe consistent across manifests
2022-01-12 13:37:40 -08:00
Amir Alavi
f50a3fa119 make livenessprobe consistent across manifests; make helm chart configurable via values.yaml 2022-01-12 11:49:17 -05:00
Kubernetes Prow Robot
551eced42a Merge pull request #688 from babygoat/evict-failed-without-ownerrefs
feat: support eviction of failed bare pods
2022-01-11 12:31:15 -08:00
Stephen Herd
3635a8171c Updates to include annotations to the service account, needed for things such as Workload Identity in Google Cloud 2022-01-11 11:55:05 -08:00
Kubernetes Prow Robot
796f347305 Merge pull request #692 from jklaw90/sliding-until
NonSlidingUntil for deployment
2022-01-11 06:21:16 -08:00
Kubernetes Prow Robot
13abbe7f09 Merge pull request #693 from developer-guy/patch-1
Update NOTES.txt
2022-01-10 05:11:13 -08:00
Kubernetes Prow Robot
e4df54d2d1 Merge pull request #685 from JaneLiuL/master
add liveness probe
2022-01-10 04:29:12 -08:00
Jane Liu L
c38f617e40 add liveness probe 2022-01-10 09:56:53 +08:00
Kubernetes Prow Robot
e6551564c4 Merge pull request #691 from RyanDevlin/waitForNodes
Eliminated race condition in E2E tests
2022-01-07 06:16:30 -08:00
Batuhan Apaydın
3a991dd50c Update NOTES.txt
Signed-off-by: Batuhan Apaydın <batuhan.apaydin@trendyol.com>
Co-authored-by: Furkan Türkal <furkan.turkal@trendyol.com>
Co-authored-by: Emin Aktaş <emin.aktas@trendyol.com>
Co-authored-by: Necatican Yıldırım <necatican.yildirim@trendyol.com>
Co-authored-by: Fatih Sarhan <fatih.sarhan@trendyol.com>
2022-01-07 13:42:00 +03:00
Julian Lawrence
77cb406052 updated until -> sliding until 2022-01-06 12:55:10 -08:00
RyanDevlin
921a5680ab Eliminated race condition in E2E tests 2022-01-06 09:36:13 -05:00
babygoat
1529180d70 feat: support eviction of failed bare pods
This patch adds the evictFailedBarePods policy to allow failed
pods without ownerReferences to be evicted. For backward compatibility,
the policy is disabled by default. Addresses #644.
2022-01-06 01:07:41 +08:00
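A minimal sketch of opting into the evictFailedBarePods policy described above (v1alpha1 policy format; the strategy shown is only an example):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
evictFailedBarePods: true   # default is false; allows failed pods without ownerReferences to be evicted
strategies:
  "RemoveFailedPods":
    enabled: true
```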
Kubernetes Prow Robot
2d9143d129 Merge pull request #687 from jklaw90/error-comment
Comment update for metrics
2022-01-04 06:52:52 -08:00
Kubernetes Prow Robot
e9c0833b6f Merge pull request #689 from ingvagabund/run-hack-update-generated-conversions-sh
run ./hack/update-* scripts
2022-01-04 06:34:52 -08:00
Jan Chaloupka
8462cf56d7 run ./hack/update-* scripts 2022-01-04 09:37:01 +01:00
Julian Lawrence
a60d6a527d updated comment to reflect actual value 2021-12-29 10:56:10 -08:00
Kubernetes Prow Robot
2b23694704 Merge pull request #682 from jklaw90/chart-labels
commonLabels value for chart
2021-12-26 06:15:15 -08:00
Julian Lawrence
d0a95bee2f fixed default value for common labels 2021-12-20 08:24:30 -08:00
Julian Lawrence
57a910f5d1 adding commonLabels value 2021-12-18 23:31:52 -08:00
Kubernetes Prow Robot
ccaedde183 Merge pull request #661 from kirecek/enhc/include-pod-reason
Add pod.Status.Reason to the list of reasons
2021-12-17 13:11:55 -08:00
Erik Jankovič
2020642b6f chore: add pod.Status.Reason to the list of reasons
Signed-off-by: Erik Jankovič <erik.jankovic@gmail.com>
2021-12-17 18:37:53 +01:00
Kubernetes Prow Robot
96ff5d2dd9 Merge pull request #680 from ingvagabund/klog-output-stdout
Set the klog output to stdout by default
2021-12-16 05:31:18 -08:00
Jan Chaloupka
d8718d7db3 Set the klog output to stdout by default
Also, one needs to set --logtostderr=false to properly log to stdout
2021-12-16 11:22:40 +01:00
Kubernetes Prow Robot
1e5165ba9f Merge pull request #670 from autumn0207/improve_pod_eviction_metrics
Add node name label to the counter metric for evicted pods
2021-12-16 01:49:18 -08:00
autumn0207
8e74f8bd77 improve pod eviction metrics 2021-12-16 17:06:22 +08:00
Kubernetes Prow Robot
2424928019 Merge pull request #667 from damemi/1.23-rc.0
bump: k8s to 1.23
2021-12-15 06:56:20 -08:00
Jan Chaloupka
e6314d2c7e Init the klog directly
Since 3948cb8d1b (diff-465167b08358906be13f9641d4798c6e8ad0790395e045af8ace4d08223fa922R78)
the klog verbosity level always gets overridden.
2021-12-15 09:23:20 -05:00
Kubernetes Prow Robot
271ee3c7e3 Merge pull request #678 from a7i/golangci-fix
fix: install golangci-lint from the golangci repo
2021-12-15 02:20:19 -08:00
Amir Alavi
e58686c142 fix: install golangci-lint from the golangci repo 2021-12-14 13:18:19 -05:00
Kubernetes Prow Robot
0b2c10d6ce Merge pull request #673 from Garrybest/pr_pod_cache
list pods assigned to a node by pod informer cache
2021-12-14 01:32:04 -08:00
Garrybest
cac3b9185b reform all test files
Signed-off-by: Garrybest <garrybest@foxmail.com>
2021-12-11 19:43:16 +08:00
Mike Dame
94888e653c Move klog initialization to cli.Run() 2021-12-10 12:00:11 -05:00
Mike Dame
936578b238 Update k8s version in helm test 2021-12-10 10:14:47 -05:00
Mike Dame
4fa7bf978c run hack/update-generated-deep-copies.sh 2021-12-10 10:02:39 -05:00
Mike Dame
2f7c496944 React to 1.23 bump
Logging validation functions changed in upstream commit
54ecfcdac8.
This uses the new function name.
2021-12-10 10:02:26 -05:00
Mike Dame
5fe3ca86ff bump: k8s to 1.23 2021-12-10 10:02:14 -05:00
Garrybest
0ff8ecb41e reform all strategies by using getPodsAssignedToNode
Signed-off-by: Garrybest <garrybest@foxmail.com>
2021-12-10 19:28:51 +08:00
Garrybest
08ed129a07 reform ListPodsOnANode by using pod informer and indexer
Signed-off-by: Garrybest <garrybest@foxmail.com>
2021-12-10 19:25:20 +08:00
Kubernetes Prow Robot
49ad197dfc Merge pull request #658 from JaneLiuL/master
Add maxNoOfPodsToEvictPerNamespace policy
2021-12-03 01:50:27 -08:00
Kubernetes Prow Robot
82201d0e48 Add maxNoOfPodsToEvictPerNamespace policy 2021-12-03 10:58:37 +08:00
Kubernetes Prow Robot
2b95332e8c Merge pull request #665 from spiffxp/use-k8s-infra-for-gcb-image
images: use k8s-staging-test-infra/gcb-docker-gcloud
2021-11-30 13:59:01 -08:00
Aaron Crickenberger
e8ed62e540 images: use k8s-staging-test-infra/gcb-docker-gcloud 2021-11-30 13:12:18 -08:00
Kubernetes Prow Robot
e5725de7bb Merge pull request #664 from stpabhi/dev
fix typo minPodLifeTimeSeconds
2021-11-30 08:10:56 -08:00
Abhilash Pallerlamudi
c47e811937 fix typo minPodLifeTimeSeconds
Signed-off-by: Abhilash Pallerlamudi <stp.abhi@gmail.com>
2021-11-29 17:51:40 -08:00
Kubernetes Prow Robot
e0bac4c371 Merge pull request #662 from ingvagabund/drop-deprecated-flags
Drop deprecated flags
2021-11-29 08:43:23 -08:00
Jan Chaloupka
73a7adf572 Drop deprecated flags 2021-11-29 17:12:59 +01:00
Kubernetes Prow Robot
5cf381a817 Merge pull request #663 from ingvagabund/bump-go-to-1.17
Bump go version in go.mod to go1.17
2021-11-29 08:01:23 -08:00
Jan Chaloupka
4603182320 Bump go version in go.mod to go1.17 2021-11-29 16:49:35 +01:00
Martin Magakian
ad207775ff Adding 'affinity' support to run 'descheduler' in CronJob or Deployment 2021-11-18 11:08:36 +01:00
Kubernetes Prow Robot
d0f11a41c0 Merge pull request #639 from JaneLiuL/master
Ignore Pods With Deletion Timestamp
2021-11-15 08:44:49 -08:00
Jane Liu L
c7524705b3 Ignore Pods With Deletion Timestamp 2021-11-10 09:32:11 +08:00
Kubernetes Prow Robot
50f9513cbb Merge pull request #642 from wking/clarify-RemovePodsHavingTooManyRestarts
README: Clarify podRestartThreshold applying to the sum over containers
2021-10-13 03:15:49 -07:00
W. Trevor King
6fd80ba29c README: Clarify podRestartThreshold applying to the sum over containers
calcContainerRestarts sums over containers.  The new language makes
that clear, avoiding potential confusion vs. an alternative that looked
for pods where a single container had passed the configured threshold.
For example, with three containers with 50 restarts and a threshold of
100, the actual "sum over containers" logic makes that pod a candidate
for descheduling, but the "largest single container restart count"
hypothetical would not have made it a candidate.

Also shifts labelSelector into the parameter table, because when it
was added in 29ade13ce7 (README and e2e-testcase add for
labelSelector, 2021-03-02, #510), it landed a few lines too high.
2021-10-07 14:51:26 -07:00
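For reference, the threshold discussed above is configured roughly as follows (v1alpha1 policy format; the numbers are only illustrative):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsHavingTooManyRestarts":
    enabled: true
    params:
      podsHavingTooManyRestarts:
        podRestartThreshold: 100        # compared against the sum of restarts across all of a pod's containers
        includingInitContainers: true
```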
Kubernetes Prow Robot
5b557941fa Merge pull request #627 from JaneLiuL/master
Add E2E test case cover duplicatepods strategy
2021-10-01 00:01:22 -07:00
Kubernetes Prow Robot
c6229934a0 Merge pull request #637 from KohlsTechnology/helm-suspend-docs
Document suspend helm chart configuration option
2021-09-30 23:47:22 -07:00
Sean Malloy
ed28eaeccc Document suspend helm chart configuration option 2021-09-30 23:30:10 -05:00
Kubernetes Prow Robot
3be910c238 Merge pull request #621 from uthark/oatamanenko/deleted
Ignore pods being deleted
2021-09-30 21:21:22 -07:00
Kubernetes Prow Robot
d96dd6da2d Merge pull request #632 from a7i/amir/failedpods-crash
RemoveFailedPods: guard against nil descheduler strategy (e.g. in case of default that loads all strategies)
2021-09-29 01:02:49 -07:00
Amir Alavi
f7c26ef41f e2e tests for RemoveFailedPods strategy
Fix priority class default
2021-09-26 20:39:32 -04:00
Jane Liu L
57ad9cc91b Add E2E test case cover tooManyRestarts strategy 2021-09-26 09:10:17 +08:00
Kubernetes Prow Robot
926339594d Merge pull request #622 from yutachaos/feature/added_suspend_parameter
Added support for cronjob suspend
2021-09-22 09:18:01 -07:00
Amir Alavi
1ba53ad68c e2e TestTopologySpreadConstraint: ensure pods are running before checking for topology spread across domains 2021-09-20 18:18:47 -04:00
Amir Alavi
6eb37ce079 RemoveFailedPods: guard against nil descheduler strategy (e.g. in case of default that loads all strategies) 2021-09-20 11:20:54 -04:00
Kubernetes Prow Robot
54d660eee0 Merge pull request #629 from chenkaiyue/fix-node-affinity-test
fix duplicate code in node_affinity_test.go
2021-09-16 01:59:46 -07:00
yutachaos
cf219fbfae Added helm chart suspend parameter
Signed-off-by: yutachaos <18604471+yutachaos@users.noreply.github.com>
2021-09-16 14:32:09 +09:00
kaiyuechen
d1d9ea0c48 fix duplicate code in node_affinity_test.go 2021-09-16 10:39:52 +08:00
Oleg Atamanenko
4448d9c670 Ignore pods being deleted 2021-09-15 00:05:51 -07:00
Kubernetes Prow Robot
3909f3acae Merge pull request #623 from damemi/release-1.22
Update Helm chart version to 0.22.0
2021-09-08 13:13:56 -07:00
2297 changed files with 162983 additions and 55067 deletions

.github/ci/ct.yaml (new file)

@@ -0,0 +1,6 @@
chart-dirs:
- charts
helm-extra-args: "--timeout=5m"
check-version-increment: false
helm-extra-set-args: "--set=kind=Deployment"
target-branch: master

.github/workflows/helm.yaml (new file)

@@ -0,0 +1,67 @@
name: Helm
on:
  push:
    branches:
      - master
      - release-*
    paths:
      - 'charts/**'
      - '.github/workflows/helm.yaml'
  pull_request:
    paths:
      - 'charts/**'
      - '.github/workflows/helm.yaml'
jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Helm
        uses: azure/setup-helm@v2.1
        with:
          version: v3.9.2
      - uses: actions/setup-python@v3.1.2
        with:
          python-version: 3.7
      - uses: actions/setup-go@v3
        with:
          go-version: '1.19.0'
      - name: Set up chart-testing
        uses: helm/chart-testing-action@v2.2.1
        with:
          version: v3.7.0
      - name: Run chart-testing (list-changed)
        id: list-changed
        run: |
          changed=$(ct list-changed --config=.github/ci/ct.yaml)
          if [[ -n "$changed" ]]; then
            echo "::set-output name=changed::true"
          fi
      - name: Run chart-testing (lint)
        run: ct lint --config=.github/ci/ct.yaml --validate-maintainers=false
      # Need a multi node cluster so descheduler runs until evictions
      - name: Create multi node Kind cluster
        run: make kind-multi-node
      # helm-extra-set-args only available after ct 3.6.0
      - name: Run chart-testing (install)
        run: ct install --config=.github/ci/ct.yaml
      - name: E2E after chart install
        env:
          KUBERNETES_VERSION: "v1.25.0"
          KIND_E2E: true
          SKIP_INSTALL: true
        run: make test-e2e

.github/workflows/security.yaml (new file)

@@ -0,0 +1,47 @@
name: "Security"
on:
push:
branches:
- main
- master
- release-*
schedule:
- cron: '30 1 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build image
run: |
IMAGE_REPO=${HELM_IMAGE_REPO:-descheduler}
IMAGE_TAG=${HELM_IMAGE_TAG:-security-test}
VERSION=security-test make image
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'descheduler:security-test'
format: 'sarif'
exit-code: '0'
severity: 'CRITICAL,HIGH'
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: 'trivy-results.sarif'
exit-code: '0'

.gitignore

@@ -4,4 +4,5 @@ vendordiff.patch
 .idea/
 *.code-workspace
 .vscode/
 kind
+bin/


@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-FROM golang:1.16.7
+FROM golang:1.19.0
 WORKDIR /go/src/sigs.k8s.io/descheduler
 COPY . .


@@ -14,8 +14,10 @@
 .PHONY: test
+export CONTAINER_ENGINE ?= docker
 # VERSION is based on a date stamp plus the last commit
-VERSION?=v$(shell date +%Y%m%d)-$(shell git describe --tags --match "v*")
+VERSION?=v$(shell date +%Y%m%d)-$(shell git describe --tags)
 BRANCH?=$(shell git branch --show-current)
 SHA1?=$(shell git rev-parse HEAD)
 BUILD=$(shell date +%FT%T%z)
@@ -24,7 +26,7 @@ ARCHS = amd64 arm arm64
 LDFLAGS=-ldflags "-X ${LDFLAG_LOCATION}.version=${VERSION} -X ${LDFLAG_LOCATION}.buildDate=${BUILD} -X ${LDFLAG_LOCATION}.gitbranch=${BRANCH} -X ${LDFLAG_LOCATION}.gitsha1=${SHA1}"
-GOLANGCI_VERSION := v1.30.0
+GOLANGCI_VERSION := v1.49.0
 HAS_GOLANGCI := $(shell ls _output/bin/golangci-lint 2> /dev/null)
 # REGISTRY is the container registry to push
@@ -60,36 +62,36 @@ build.arm64:
 CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build ${LDFLAGS} -o _output/bin/descheduler sigs.k8s.io/descheduler/cmd/descheduler
 dev-image: build
-docker build -f Dockerfile.dev -t $(IMAGE) .
+$(CONTAINER_ENGINE) build -f Dockerfile.dev -t $(IMAGE) .
 image:
-docker build --build-arg VERSION="$(VERSION)" --build-arg ARCH="amd64" -t $(IMAGE) .
+$(CONTAINER_ENGINE) build --build-arg VERSION="$(VERSION)" --build-arg ARCH="amd64" -t $(IMAGE) .
 image.amd64:
-docker build --build-arg VERSION="$(VERSION)" --build-arg ARCH="amd64" -t $(IMAGE)-amd64 .
+$(CONTAINER_ENGINE) build --build-arg VERSION="$(VERSION)" --build-arg ARCH="amd64" -t $(IMAGE)-amd64 .
 image.arm:
-docker build --build-arg VERSION="$(VERSION)" --build-arg ARCH="arm" -t $(IMAGE)-arm .
+$(CONTAINER_ENGINE) build --build-arg VERSION="$(VERSION)" --build-arg ARCH="arm" -t $(IMAGE)-arm .
 image.arm64:
-docker build --build-arg VERSION="$(VERSION)" --build-arg ARCH="arm64" -t $(IMAGE)-arm64 .
+$(CONTAINER_ENGINE) build --build-arg VERSION="$(VERSION)" --build-arg ARCH="arm64" -t $(IMAGE)-arm64 .
 push: image
 gcloud auth configure-docker
-docker tag $(IMAGE) $(IMAGE_GCLOUD)
-docker push $(IMAGE_GCLOUD)
+$(CONTAINER_ENGINE) tag $(IMAGE) $(IMAGE_GCLOUD)
+$(CONTAINER_ENGINE) push $(IMAGE_GCLOUD)
 push-all: image.amd64 image.arm image.arm64
 gcloud auth configure-docker
 for arch in $(ARCHS); do \
-docker tag $(IMAGE)-$${arch} $(IMAGE_GCLOUD)-$${arch} ;\
-docker push $(IMAGE_GCLOUD)-$${arch} ;\
+$(CONTAINER_ENGINE) tag $(IMAGE)-$${arch} $(IMAGE_GCLOUD)-$${arch} ;\
+$(CONTAINER_ENGINE) push $(IMAGE_GCLOUD)-$${arch} ;\
 done
-DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create $(IMAGE_GCLOUD) $(addprefix --amend $(IMAGE_GCLOUD)-, $(ARCHS))
+DOCKER_CLI_EXPERIMENTAL=enabled $(CONTAINER_ENGINE) manifest create $(IMAGE_GCLOUD) $(addprefix --amend $(IMAGE_GCLOUD)-, $(ARCHS))
 for arch in $(ARCHS); do \
-DOCKER_CLI_EXPERIMENTAL=enabled docker manifest annotate --arch $${arch} $(IMAGE_GCLOUD) $(IMAGE_GCLOUD)-$${arch} ;\
+DOCKER_CLI_EXPERIMENTAL=enabled $(CONTAINER_ENGINE) manifest annotate --arch $${arch} $(IMAGE_GCLOUD) $(IMAGE_GCLOUD)-$${arch} ;\
 done
-DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push $(IMAGE_GCLOUD) ;\
+DOCKER_CLI_EXPERIMENTAL=enabled $(CONTAINER_ENGINE) manifest push $(IMAGE_GCLOUD) ;\
 clean:
 rm -rf _output
@@ -131,17 +133,28 @@ verify-gen:
 lint:
 ifndef HAS_GOLANGCI
-curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b ./_output/bin ${GOLANGCI_VERSION}
+curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b ./_output/bin ${GOLANGCI_VERSION}
 endif
 ./_output/bin/golangci-lint run
-lint-chart: ensure-helm-install
-helm lint ./charts/descheduler
-test-helm: ensure-helm-install
-./test/run-helm-tests.sh
+# helm
 ensure-helm-install:
 ifndef HAS_HELM
 curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && chmod 700 ./get_helm.sh && ./get_helm.sh
 endif
+lint-chart: ensure-helm-install
+helm lint ./charts/descheduler
+build-helm:
+helm package ./charts/descheduler --dependency-update --destination ./bin/chart
+test-helm: ensure-helm-install
+./test/run-helm-tests.sh
+kind-multi-node:
+kind create cluster --name kind --config ./hack/kind_config.yaml --wait 2m
+ct-helm:
+./hack/verify-chart.sh

OWNERS

@@ -2,14 +2,14 @@ approvers:
 - damemi
 - ingvagabund
 - seanmalloy
+- a7i
 reviewers:
-- aveshagarwal
-- k82cn
-- ravisantoshgudimetla
 - damemi
 - seanmalloy
 - ingvagabund
 - lixiang233
+- a7i
+- janeliul
 emeritus_approvers:
 - aveshagarwal
 - k82cn

README.md

@@ -50,6 +50,8 @@ Table of Contents
- [Node Fit filtering](#node-fit-filtering) - [Node Fit filtering](#node-fit-filtering)
- [Pod Evictions](#pod-evictions) - [Pod Evictions](#pod-evictions)
- [Pod Disruption Budget (PDB)](#pod-disruption-budget-pdb) - [Pod Disruption Budget (PDB)](#pod-disruption-budget-pdb)
- [High Availability](#high-availability)
- [Configure HA Mode](#configure-ha-mode)
- [Metrics](#metrics) - [Metrics](#metrics)
- [Compatibility Matrix](#compatibility-matrix) - [Compatibility Matrix](#compatibility-matrix)
- [Getting Involved and Contributing](#getting-involved-and-contributing) - [Getting Involved and Contributing](#getting-involved-and-contributing)
@@ -103,17 +105,17 @@ See the [resources | Kustomize](https://kubectl.docs.kubernetes.io/references/ku
Run As A Job Run As A Job
``` ```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/job?ref=v0.22.0' | kubectl apply -f - kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/job?ref=v0.25.1' | kubectl apply -f -
``` ```
Run As A CronJob Run As A CronJob
``` ```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/cronjob?ref=v0.22.0' | kubectl apply -f - kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/cronjob?ref=v0.25.1' | kubectl apply -f -
``` ```
Run As A Deployment Run As A Deployment
``` ```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/deployment?ref=v0.22.0' | kubectl apply -f - kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/deployment?ref=v0.25.1' | kubectl apply -f -
``` ```
## User Guide ## User Guide
@@ -132,6 +134,8 @@ The policy includes a common configuration that applies to all the strategies:
| `evictSystemCriticalPods` | `false` | [Warning: Will evict Kubernetes system pods] allows eviction of pods with any priority, including system pods like kube-dns | | `evictSystemCriticalPods` | `false` | [Warning: Will evict Kubernetes system pods] allows eviction of pods with any priority, including system pods like kube-dns |
| `ignorePvcPods` | `false` | set whether PVC pods should be evicted or ignored | | `ignorePvcPods` | `false` | set whether PVC pods should be evicted or ignored |
| `maxNoOfPodsToEvictPerNode` | `nil` | maximum number of pods evicted from each node (summed through all strategies) | | `maxNoOfPodsToEvictPerNode` | `nil` | maximum number of pods evicted from each node (summed through all strategies) |
| `maxNoOfPodsToEvictPerNamespace` | `nil` | maximum number of pods evicted from each namespace (summed through all strategies) |
| `evictFailedBarePods` | `false` | allow eviction of pods without owner references and in failed phase |
As part of the policy, the parameters associated with each strategy can be configured. As part of the policy, the parameters associated with each strategy can be configured.
See each strategy for details on available parameters. See each strategy for details on available parameters.
@@ -142,6 +146,7 @@ See each strategy for details on available parameters.
apiVersion: "descheduler/v1alpha1" apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy" kind: "DeschedulerPolicy"
nodeSelector: prod=dev nodeSelector: prod=dev
evictFailedBarePods: false
evictLocalStoragePods: true evictLocalStoragePods: true
evictSystemCriticalPods: true evictSystemCriticalPods: true
maxNoOfPodsToEvictPerNode: 40 maxNoOfPodsToEvictPerNode: 40
@@ -216,6 +221,17 @@ These thresholds, `thresholds` and `targetThresholds`, could be tuned as per you
strategy evicts pods from `overutilized nodes` (those with usage above `targetThresholds`) to `underutilized nodes` strategy evicts pods from `overutilized nodes` (those with usage above `targetThresholds`) to `underutilized nodes`
(those with usage below `thresholds`), it will abort if any number of `underutilized nodes` or `overutilized nodes` is zero. (those with usage below `thresholds`), it will abort if any number of `underutilized nodes` or `overutilized nodes` is zero.
Additionally, the strategy accepts a `useDeviationThresholds` parameter.
If that parameter is set to `true`, the thresholds are considered as percentage deviations from mean resource usage.
`thresholds` will be deducted from the mean among all nodes and `targetThresholds` will be added to the mean.
A resource consumption above (resp. below) this window is considered as overutilization (resp. underutilization).
**NOTE:** Node resource consumption is determined by the requests and limits of pods, not actual usage.
This approach is chosen in order to maintain consistency with the kube-scheduler, which follows the same
design for scheduling pods onto nodes. This means that resource usage as reported by Kubelet (or commands
like `kubectl top`) may differ from the calculated consumption, due to these components reporting
actual usage metrics. Implementing metrics-based descheduling is currently TODO for the project.
**Parameters:** **Parameters:**
|Name|Type| |Name|Type|
@@ -223,6 +239,7 @@ strategy evicts pods from `overutilized nodes` (those with usage above `targetTh
|`thresholds`|map(string:int)| |`thresholds`|map(string:int)|
|`targetThresholds`|map(string:int)| |`targetThresholds`|map(string:int)|
|`numberOfNodes`|int| |`numberOfNodes`|int|
|`useDeviationThresholds`|bool|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))| |`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))| |`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))| |`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|
@@ -263,10 +280,10 @@ under utilized frequently or for a short period of time. By default, `numberOfNo
### HighNodeUtilization
This strategy finds nodes that are under utilized and evicts pods from the nodes in the hope that these pods will be
scheduled compactly into fewer nodes. Used in conjunction with node auto-scaling, this strategy is intended to help
trigger down scaling of under utilized nodes.
This strategy **must** be used with the scheduler scoring strategy `MostAllocated`. The parameters of this strategy are
configured under `nodeResourceUtilizationThresholds`.
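The `MostAllocated` scoring strategy is configured on the kube-scheduler side, not in the descheduler policy. A sketch of the relevant `KubeSchedulerConfiguration` fragment (adjust the `apiVersion` to match your cluster's scheduler version; the resource weights are placeholders):
```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: MostAllocated
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
```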
The under utilization of nodes is determined by a configurable threshold `thresholds`. The threshold
@@ -283,6 +300,12 @@ strategy evicts pods from `underutilized nodes` (those with usage below `thresho
so that they can be recreated in appropriately utilized nodes.
The strategy will abort if the number of `underutilized nodes` or `appropriately utilized nodes` is zero.
**NOTE:** Node resource consumption is determined by the requests and limits of pods, not actual usage.
This approach is chosen in order to maintain consistency with the kube-scheduler, which follows the same
design for scheduling pods onto nodes. This means that resource usage as reported by Kubelet (or commands
like `kubectl top`) may differ from the calculated consumption, due to these components reporting
actual usage metrics. Implementing metrics-based descheduling is currently TODO for the project.
**Parameters:**
|Name|Type|
@@ -397,10 +420,17 @@ pod "podA" with a toleration to tolerate a taint ``key=value:NoSchedule`` schedu
node. If the node's taint is subsequently updated or removed, the taint will no longer be satisfied by its pods' tolerations
and those pods will be evicted.
Node taints can be excluded from consideration by specifying a list of excludedTaints. If a node taint key **or**
key=value matches an excludedTaints entry, the taint will be ignored.
For example, excludedTaints entry "dedicated" would match all taints with key "dedicated", regardless of value.
excludedTaints entry "dedicated=special-user" would match taints with key "dedicated" and value "special-user".
**Parameters:**
|Name|Type|
|---|---|
|`excludedTaints`|list(string)|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
@@ -415,6 +445,10 @@ kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
    params:
      excludedTaints:
      - dedicated=special-user # exclude taints with key "dedicated" and value "special-user"
      - reserved # exclude all taints with key "reserved"
````
### RemovePodsViolatingTopologySpreadConstraint
@@ -456,9 +490,9 @@ strategies:
This strategy makes sure that pods having too many restarts are removed from nodes. For example, if a pod with an EBS/PD volume
can't get the volume/disk attached to the instance, the pod should be re-scheduled to other nodes. Its parameters
include `podRestartThreshold`, which is the number of restarts (summed over all eligible containers) at which a pod
should be evicted, and `includingInitContainers`, which determines whether init container restarts should be factored
into that calculation.
**Parameters:**
@@ -469,6 +503,7 @@ which determines whether init container restarts should be factored into that ca
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|
**Example:**
@@ -489,19 +524,24 @@ strategies:
This strategy evicts pods that are older than `maxPodLifeTimeSeconds`.
You can also specify the `states` parameter to **only** evict pods matching the following conditions:
- [Pod Phase](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) status of: `Running`, `Pending`
- [Container State Waiting](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-state-waiting) condition of: `PodInitializing`, `ContainerCreating`
If a value for `states` or `podStatusPhases` is not specified, pods in any state (even `Running`) are considered for eviction.
**Parameters:**
|Name|Type|Notes|
|---|---|---|
|`maxPodLifeTimeSeconds`|int||
|`podStatusPhases`|list(string)|Deprecated in v0.25+. Use `states` instead|
|`states`|list(string)|Only supported in v0.25+|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))||
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))||
|`namespaces`|(see [namespace filtering](#namespace-filtering))||
|`labelSelector`|(see [label filtering](#label-filtering))||
**Example:**
@@ -514,8 +554,9 @@ strategies:
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
        states:
        - "Pending"
        - "PodInitializing"
```
### RemoveFailedPods
@@ -523,7 +564,7 @@ strategies:
This strategy evicts pods that are in failed status phase.
You can provide an optional parameter to filter by failed `reasons`.
`reasons` can be expanded to include reasons of InitContainers as well by setting the optional parameter `includingInitContainers` to `true`.
You can specify an optional parameter `minPodLifetimeSeconds` to evict only pods older than the specified number of seconds.
Lastly, you can specify the optional parameter `excludeOwnerKinds` and if a pod
has any of these `Kind`s listed as an `OwnerRef`, that pod will not be considered for eviction.
@@ -531,7 +572,7 @@ has any of these `Kind`s listed as an `OwnerRef`, that pod will not be considere
|Name|Type|
|---|---|
|`minPodLifetimeSeconds`|uint|
|`excludeOwnerKinds`|list(string)|
|`reasons`|list(string)|
|`includingInitContainers`|bool|
@@ -556,7 +597,7 @@ strategies:
        includingInitContainers: true
        excludeOwnerKinds:
        - "Job"
        minPodLifetimeSeconds: 3600
```
## Filter Pods
@@ -654,7 +695,7 @@ does not exist, descheduler won't create it and will throw an error.
### Label filtering
The following strategies can configure a [standard kubernetes labelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#labelselector-v1-meta)
to filter pods by their labels:
* `PodLifeTime`
@@ -702,8 +743,9 @@ The following strategies accept a `nodeFit` boolean parameter which can optimize
If set to `true` the descheduler will consider whether or not the pods that meet eviction criteria will fit on other nodes before evicting them. If a pod cannot be rescheduled to another node, it will not be evicted. Currently the following criteria are considered when setting `nodeFit` to `true`:
- A `nodeSelector` on the pod
- Any `tolerations` on the pod and any `taints` on the other nodes
- `nodeAffinity` on the pod
- Resource `requests` made by the pod and the resources available on other nodes
- Whether any of the other nodes are marked as `unschedulable`
E.g. E.g.
@@ -713,18 +755,18 @@ apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy" kind: "DeschedulerPolicy"
strategies: strategies:
"LowNodeUtilization": "LowNodeUtilization":
enabled: true enabled: true
params: params:
nodeResourceUtilizationThresholds: nodeFit: true
thresholds: nodeResourceUtilizationThresholds:
"cpu" : 20 thresholds:
"memory": 20 "cpu": 20
"pods": 20 "memory": 20
targetThresholds: "pods": 20
"cpu" : 50 targetThresholds:
"memory": 50 "cpu": 50
"pods": 50 "memory": 50
nodeFit: true "pods": 50
``` ```
Note that node fit filtering references the current pod spec, and not that of its owner.
@@ -739,8 +781,8 @@ Using Deployments instead of ReplicationControllers provides an automated rollou
When the descheduler decides to evict pods from a node, it employs the following general mechanism:
* [Critical pods](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) (with priorityClassName set to system-cluster-critical or system-node-critical) are never evicted (unless `evictSystemCriticalPods: true` is set).
* Pods (static or mirrored pods or standalone pods) not part of a ReplicationController, ReplicaSet (Deployment), StatefulSet, or Job are
never evicted because these pods won't be recreated. (Standalone pods in failed status phase can be evicted by setting `evictFailedBarePods: true`)
* Pods associated with DaemonSets are never evicted.
* Pods with local storage are never evicted (unless `evictLocalStoragePods: true` is set).
* Pods with PVCs are evicted (unless `ignorePvcPods: true` is set).
@@ -749,6 +791,7 @@ best effort pods are evicted before burstable and guaranteed pods.
* All types of pods with the annotation `descheduler.alpha.kubernetes.io/evict` are eligible for eviction. This
annotation is used to override checks which prevent eviction, so users can select which pods are evicted.
Users should know how and if the pod will be recreated (see the sketch following this list).
* Pods with a non-nil DeletionTimestamp are not evicted by default.
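For illustration, a bare pod that opts in to eviction via this annotation might look like the following sketch (the pod name and image are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bare-pod-example        # hypothetical name
  annotations:
    descheduler.alpha.kubernetes.io/evict: "true"
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
```
Since a bare pod like this has no owner to recreate it, setting the annotation means accepting that the pod may simply disappear after eviction.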
Setting `--v=4` or greater on the Descheduler will log all reasons why any pod is not evictable.
@@ -757,6 +800,23 @@ Setting `--v=4` or greater on the Descheduler will log all reasons why any pod i
Pods subject to a Pod Disruption Budget (PDB) are not evicted if descheduling would violate the PDB. The pods
are evicted by using the eviction subresource, which respects PDBs.
## High Availability
In High Availability mode, the descheduler starts a [leader election](https://github.com/kubernetes/client-go/tree/master/tools/leaderelection) process in Kubernetes. You can activate HA mode
if you choose to deploy your application as a Deployment.
A Deployment starts with 1 replica by default. If you want to run more than 1 replica, you must
enable High Availability mode, because multiple descheduler pods should not run simultaneously.
### Configure HA Mode
The leader election process can be enabled by setting `--leader-elect` in the CLI. You can also set the
`--set=leaderElection.enabled=true` flag if you are using Helm.
To get the best results from HA mode, some additional configuration may be required:
* Configure a [podAntiAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node) rule if you want to schedule onto a node only if that node is in the same zone as at least one already-running descheduler
* Set the replica count greater than 1
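For illustration, a sketch of Helm values enabling these settings (assumes `kind: Deployment`; the podAntiAffinity block mirrors the commented example in the chart's values.yaml and spreads replicas across nodes, so use a zone topology key instead if you want zone-level spreading):
```yaml
kind: Deployment
replicas: 2
leaderElection:
  enabled: true
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app.kubernetes.io/name
          operator: In
          values:
          - descheduler
      topologyKey: "kubernetes.io/hostname"
```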
## Metrics
| name | type | description |
@@ -776,17 +836,19 @@ v0.18 should work with k8s v1.18, v1.17, and v1.16.
Starting with descheduler release v0.18 the minor version of descheduler matches the minor version of the k8s client
packages that it is compiled with.
Descheduler | Supported Kubernetes Version
-------------|------------------------------
v0.25 | v1.25
v0.24 | v1.24
v0.23 | v1.23
v0.22 | v1.22
v0.21 | v1.21
v0.20 | v1.20
v0.19 | v1.19
v0.18 | v1.18
v0.10 | v1.17
v0.4-v0.9 | v1.9+
v0.1-v0.3 | v1.7-v1.8
## Getting Involved and Contributing

View File

@@ -1,7 +1,7 @@
apiVersion: v1
name: descheduler
version: 0.25.2
appVersion: 0.25.1
description: Descheduler for Kubernetes is used to rebalance clusters by evicting pods that can potentially be scheduled on better nodes. In the current implementation, descheduler does not schedule replacement of evicted pods but relies on the default scheduler for that.
keywords:
  - kubernetes

View File

@@ -43,28 +43,45 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the _descheduler_ chart and their default values.
| Parameter | Description | Default |
|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------|--------------------------------------|
| `kind` | Use as CronJob or Deployment | `CronJob` |
| `image.repository` | Docker repository to use | `k8s.gcr.io/descheduler/descheduler` |
| `image.tag` | Docker tag to use | `v[chart appVersion]` |
| `image.pullPolicy` | Docker image pull policy | `IfNotPresent` |
| `imagePullSecrets` | Docker repository secrets | `[]` |
| `nameOverride` | String to partially override `descheduler.fullname` template (will prepend the release name) | `""` |
| `fullnameOverride` | String to fully override `descheduler.fullname` template | `""` |
| `cronJobApiVersion` | CronJob API Group Version | `"batch/v1"` |
| `schedule` | The cron schedule to run the _descheduler_ job on | `"*/2 * * * *"` |
| `startingDeadlineSeconds` | If set, configure `startingDeadlineSeconds` for the _descheduler_ job | `nil` |
| `successfulJobsHistoryLimit` | If set, configure `successfulJobsHistoryLimit` for the _descheduler_ job | `nil` |
| `failedJobsHistoryLimit` | If set, configure `failedJobsHistoryLimit` for the _descheduler_ job | `nil` |
| `deschedulingInterval` | If using kind:Deployment, sets time between consecutive descheduler executions. | `5m` |
| `replicas` | The replica count for Deployment | `1` |
| `leaderElection` | The options for high availability when running replicated components | _see values.yaml_ |
| `cmdOptions` | The options to pass to the _descheduler_ command | _see values.yaml_ |
| `deschedulerPolicy.strategies` | The _descheduler_ strategies to apply | _see values.yaml_ |
| `priorityClassName` | The name of the priority class to add to pods | `system-cluster-critical` |
| `rbac.create` | If `true`, create & use RBAC resources | `true` |
| `resources` | Descheduler container CPU and memory requests/limits | _see values.yaml_ |
| `serviceAccount.create` | If `true`, create a service account for the cron job | `true` |
| `serviceAccount.name` | The name of the service account to use, if not set and create is true a name is generated using the fullname template | `nil` |
| `serviceAccount.annotations` | Specifies custom annotations for the serviceAccount | `{}` |
| `podAnnotations` | Annotations to add to the descheduler Pods | `{}` |
| `podLabels` | Labels to add to the descheduler Pods | `{}` |
| `nodeSelector` | Node selectors to run the descheduler cronjob/deployment on specific nodes | `nil` |
| `service.enabled` | If `true`, create a service for deployment | `false` |
| `serviceMonitor.enabled` | If `true`, create a ServiceMonitor for deployment | `false` |
| `serviceMonitor.namespace` | The namespace where Prometheus expects to find service monitors | `nil` |
| `serviceMonitor.interval` | The scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `serviceMonitor.honorLabels` | Keeps the scraped data's labels when labels are on collisions with target labels. | `true` |
| `serviceMonitor.insecureSkipVerify` | Skip TLS certificate validation when scraping | `true` |
| `serviceMonitor.serverName` | Name of the server to use when validating TLS certificate | `nil` |
| `serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples after scraping, but before ingestion | `[]` |
| `serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `affinity` | Node affinity to run the descheduler cronjob/deployment on specific nodes | `nil` |
| `tolerations` | tolerations to run the descheduler cronjob/deployment on specific nodes | `nil` |
| `suspend` | Set spec.suspend in descheduler cronjob | `false` |
| `commonLabels` | Labels to apply to all resources | `{}` |
| `livenessProbe` | Liveness probe configuration for the descheduler container | _see values.yaml_ |
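As a hypothetical example, a small override file (say `my-values.yaml`, applied with `helm install`/`helm upgrade -f my-values.yaml`) combining a few of these parameters might look like:
```yaml
# my-values.yaml -- illustrative overrides only
kind: Deployment
deschedulingInterval: 10m
commonLabels:
  team: platform        # hypothetical label
service:
  enabled: true
serviceMonitor:
  enabled: true
cmdOptions:
  v: 3
```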

View File

@@ -1 +1,7 @@
Descheduler installed as a {{ .Values.kind }}.
{{- if eq .Values.kind "Deployment" }}
{{- if eq .Values.replicas 1.0}}
WARNING: You set replica count as 1 and workload kind as Deployment however leaderElection is not enabled. Consider enabling Leader Election for HA mode.
{{- end}}
{{- end}}

View File

@@ -42,6 +42,9 @@ app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }} {{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }} app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.commonLabels}}
{{ toYaml .Values.commonLabels }}
{{- end }}
{{- end -}} {{- end -}}
{{/* {{/*
@@ -62,3 +65,30 @@ Create the name of the service account to use
{{ default "default" .Values.serviceAccount.name }} {{ default "default" .Values.serviceAccount.name }}
{{- end -}} {{- end -}}
{{- end -}} {{- end -}}
{{/*
Leader Election
*/}}
{{- define "descheduler.leaderElection"}}
{{- if .Values.leaderElection -}}
- --leader-elect={{ .Values.leaderElection.enabled }}
{{- if .Values.leaderElection.leaseDuration }}
- --leader-elect-lease-duration={{ .Values.leaderElection.leaseDuration }}
{{- end }}
{{- if .Values.leaderElection.renewDeadline }}
- --leader-elect-renew-deadline={{ .Values.leaderElection.renewDeadline }}
{{- end }}
{{- if .Values.leaderElection.retryPeriod }}
- --leader-elect-retry-period={{ .Values.leaderElection.retryPeriod }}
{{- end }}
{{- if .Values.leaderElection.resourceLock }}
- --leader-elect-resource-lock={{ .Values.leaderElection.resourceLock }}
{{- end }}
{{- if .Values.leaderElection.resourceName }}
- --leader-elect-resource-name={{ .Values.leaderElection.resourceName }}
{{- end }}
{{- if .Values.leaderElection.resourceNamescape }}
- --leader-elect-resource-namespace={{ .Values.leaderElection.resourceNamescape }}
{{- end -}}
{{- end }}
{{- end }}

View File

@@ -6,7 +6,7 @@ metadata:
labels: labels:
{{- include "descheduler.labels" . | nindent 4 }} {{- include "descheduler.labels" . | nindent 4 }}
rules: rules:
- apiGroups: [""] - apiGroups: ["events.k8s.io"]
resources: ["events"] resources: ["events"]
verbs: ["create", "update"] verbs: ["create", "update"]
- apiGroups: [""] - apiGroups: [""]
@@ -14,7 +14,7 @@ rules:
verbs: ["get", "watch", "list"] verbs: ["get", "watch", "list"]
- apiGroups: [""] - apiGroups: [""]
resources: ["namespaces"] resources: ["namespaces"]
verbs: ["get", "list"] verbs: ["get", "watch", "list"]
- apiGroups: [""] - apiGroups: [""]
resources: ["pods"] resources: ["pods"]
verbs: ["get", "watch", "list", "delete"] verbs: ["get", "watch", "list", "delete"]
@@ -24,11 +24,13 @@ rules:
- apiGroups: ["scheduling.k8s.io"] - apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"] resources: ["priorityclasses"]
verbs: ["get", "watch", "list"] verbs: ["get", "watch", "list"]
{{- if .Values.podSecurityPolicy.create }} {{- if .Values.leaderElection.enabled }}
- apiGroups: ['policy'] - apiGroups: ["coordination.k8s.io"]
resources: ['podsecuritypolicies'] resources: ["leases"]
verbs: ['use'] verbs: ["create", "update"]
resourceNames: - apiGroups: ["coordination.k8s.io"]
- {{ template "descheduler.fullname" . }} resources: ["leases"]
resourceNames: ["{{ .Values.leaderElection.resourceName | default "descheduler" }}"]
verbs: ["get", "patch", "delete"]
{{- end }} {{- end }}
{{- end -}} {{- end -}}

View File

@@ -2,6 +2,7 @@ apiVersion: v1
kind: ConfigMap kind: ConfigMap
metadata: metadata:
name: {{ template "descheduler.fullname" . }} name: {{ template "descheduler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: labels:
{{- include "descheduler.labels" . | nindent 4 }} {{- include "descheduler.labels" . | nindent 4 }}
data: data:

View File

@@ -3,10 +3,14 @@ apiVersion: {{ .Values.cronJobApiVersion | default "batch/v1" }}
kind: CronJob kind: CronJob
metadata: metadata:
name: {{ template "descheduler.fullname" . }} name: {{ template "descheduler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: labels:
{{- include "descheduler.labels" . | nindent 4 }} {{- include "descheduler.labels" . | nindent 4 }}
spec: spec:
schedule: {{ .Values.schedule | quote }} schedule: {{ .Values.schedule | quote }}
{{- if .Values.suspend }}
suspend: {{ .Values.suspend }}
{{- end }}
concurrencyPolicy: "Forbid" concurrencyPolicy: "Forbid"
{{- if .Values.startingDeadlineSeconds }} {{- if .Values.startingDeadlineSeconds }}
startingDeadlineSeconds: {{ .Values.startingDeadlineSeconds }} startingDeadlineSeconds: {{ .Values.startingDeadlineSeconds }}
@@ -37,6 +41,10 @@ spec:
nodeSelector: nodeSelector:
{{- toYaml . | nindent 12 }} {{- toYaml . | nindent 12 }}
{{- end }} {{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.tolerations }} {{- with .Values.tolerations }}
tolerations: tolerations:
{{- toYaml . | nindent 12 }} {{- toYaml . | nindent 12 }}
@@ -65,6 +73,8 @@ spec:
- {{ $value | quote }} - {{ $value | quote }}
{{- end }} {{- end }}
{{- end }} {{- end }}
livenessProbe:
{{- toYaml .Values.livenessProbe | nindent 16 }}
resources: resources:
{{- toYaml .Values.resources | nindent 16 }} {{- toYaml .Values.resources | nindent 16 }}
securityContext: securityContext:

View File

@@ -3,10 +3,18 @@ apiVersion: apps/v1
kind: Deployment kind: Deployment
metadata: metadata:
name: {{ template "descheduler.fullname" . }} name: {{ template "descheduler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: labels:
{{- include "descheduler.labels" . | nindent 4 }} {{- include "descheduler.labels" . | nindent 4 }}
spec: spec:
{{- if gt .Values.replicas 1.0}}
{{- if not .Values.leaderElection.enabled }}
{{- fail "You must set leaderElection to use more than 1 replica"}}
{{- end}}
replicas: {{ required "leaderElection required for running more than one replica" .Values.replicas }}
{{- else }}
replicas: 1 replicas: 1
{{- end }}
selector: selector:
matchLabels: matchLabels:
{{- include "descheduler.selectorLabels" . | nindent 6 }} {{- include "descheduler.selectorLabels" . | nindent 6 }}
@@ -27,6 +35,10 @@ spec:
priorityClassName: {{ .Values.priorityClassName }} priorityClassName: {{ .Values.priorityClassName }}
{{- end }} {{- end }}
serviceAccountName: {{ template "descheduler.serviceAccountName" . }} serviceAccountName: {{ template "descheduler.serviceAccountName" . }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 10 }}
{{- end }}
containers: containers:
- name: {{ .Chart.Name }} - name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "v%s" .Chart.AppVersion) }}" image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "v%s" .Chart.AppVersion) }}"
@@ -44,9 +56,12 @@ spec:
- {{ $value | quote }} - {{ $value | quote }}
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- include "descheduler.leaderElection" . | nindent 12 }}
ports: ports:
- containerPort: 10258 - containerPort: 10258
protocol: TCP protocol: TCP
livenessProbe:
{{- toYaml .Values.livenessProbe | nindent 12 }}
resources: resources:
{{- toYaml .Values.resources | nindent 12 }} {{- toYaml .Values.resources | nindent 12 }}
securityContext: securityContext:
@@ -68,6 +83,10 @@ spec:
nodeSelector: nodeSelector:
{{- toYaml . | nindent 8 }} {{- toYaml . | nindent 8 }}
{{- end }} {{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }} {{- with .Values.tolerations }}
tolerations: tolerations:
{{- toYaml . | nindent 8 }} {{- toYaml . | nindent 8 }}

View File

@@ -1,38 +0,0 @@
{{- if .Values.podSecurityPolicy.create -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "descheduler.fullname" . }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'secret'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: true
{{- end -}}

View File

@@ -0,0 +1,21 @@
{{- if eq .Values.kind "Deployment" }}
{{- if eq .Values.service.enabled true }}
apiVersion: v1
kind: Service
metadata:
labels:
{{- include "descheduler.labels" . | nindent 4 }}
name: {{ template "descheduler.fullname" . }}
namespace: {{ .Release.Namespace }}
spec:
clusterIP: None
ports:
- name: http-metrics
port: 10258
protocol: TCP
targetPort: 10258
selector:
{{- include "descheduler.selectorLabels" . | nindent 4 }}
type: ClusterIP
{{- end }}
{{- end }}

View File

@@ -3,6 +3,10 @@ apiVersion: v1
kind: ServiceAccount kind: ServiceAccount
metadata: metadata:
name: {{ template "descheduler.serviceAccountName" . }} name: {{ template "descheduler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels: labels:
{{- include "descheduler.labels" . | nindent 4 }} {{- include "descheduler.labels" . | nindent 4 }}
{{- if .Values.serviceAccount.annotations }}
annotations: {{ toYaml .Values.serviceAccount.annotations | nindent 4 }}
{{- end }}
{{- end -}} {{- end -}}

View File

@@ -0,0 +1,41 @@
{{- if eq .Values.kind "Deployment" }}
{{- if eq .Values.serviceMonitor.enabled true }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "descheduler.fullname" . }}-servicemonitor
namespace: {{ .Values.serviceMonitor.namespace | default .Release.Namespace }}
labels:
{{- include "descheduler.labels" . | nindent 4 }}
spec:
jobLabel: jobLabel
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
{{- include "descheduler.selectorLabels" . | nindent 6 }}
endpoints:
- honorLabels: {{ .Values.serviceMonitor.honorLabels | default true }}
port: http-metrics
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
scheme: https
tlsConfig:
{{- if eq .Values.serviceMonitor.insecureSkipVerify true }}
insecureSkipVerify: true
{{- end }}
{{- if .Values.serviceMonitor.serverName }}
serverName: {{ .Values.serviceMonitor.serverName }}
{{- end}}
{{- if .Values.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{ tpl (toYaml .Values.serviceMonitor.metricRelabelings | indent 4) . }}
{{- end }}
{{- if .Values.serviceMonitor.relabelings }}
relabelings:
{{ tpl (toYaml .Values.serviceMonitor.relabelings | indent 4) . }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: descheduler-test-pod
annotations:
"helm.sh/hook": test
spec:
restartPolicy: Never
serviceAccountName: descheduler-ci
containers:
- name: descheduler-test-container
image: alpine:latest
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- All
privileged: false
runAsNonRoot: false
command: ["/bin/ash"]
args:
- -c
- >-
apk --no-cache add curl &&
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl &&
chmod +x ./kubectl &&
mv ./kubectl /usr/local/bin/kubectl &&
/usr/local/bin/kubectl get pods --namespace kube-system --token "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" | grep "descheduler" | grep "Completed"

View File

@@ -11,7 +11,8 @@ image:
tag: "" tag: ""
pullPolicy: IfNotPresent pullPolicy: IfNotPresent
imagePullSecrets: [] imagePullSecrets:
# - name: container-registry-secret
resources: resources:
requests: requests:
@@ -24,25 +25,54 @@ resources:
nameOverride: "" nameOverride: ""
fullnameOverride: "" fullnameOverride: ""
cronJobApiVersion: "batch/v1" # Use "batch/v1beta1" for k8s version < 1.21.0. TODO(@7i) remove with 1.23 release # labels that'll be applied to all resources
commonLabels: {}
cronJobApiVersion: "batch/v1"
schedule: "*/2 * * * *" schedule: "*/2 * * * *"
#startingDeadlineSeconds: 200 suspend: false
#successfulJobsHistoryLimit: 1 # startingDeadlineSeconds: 200
#failedJobsHistoryLimit: 1 # successfulJobsHistoryLimit: 1
# failedJobsHistoryLimit: 1
# Required when running as a Deployment # Required when running as a Deployment
deschedulingInterval: 5m deschedulingInterval: 5m
# Specifies the replica count for Deployment
# Set leaderElection if you want to use more than 1 replica
# Set affinity.podAntiAffinity rule if you want to schedule onto a node
# only if that node is in the same zone as at least one already-running descheduler
replicas: 1
# Specifies whether Leader Election resources should be created
# Required when running as a Deployment
leaderElection: {}
# enabled: true
# leaseDuration: 15s
# renewDeadline: 10s
# retryPeriod: 2s
# resourceLock: "leases"
# resourceName: "descheduler"
# resourceNamescape: "kube-system"
cmdOptions: cmdOptions:
v: 3 v: 3
# evict-local-storage-pods:
# max-pods-to-evict-per-node: 10
# node-selector: "key1=value1,key2=value2"
deschedulerPolicy: deschedulerPolicy:
# nodeSelector: "key1=value1,key2=value2"
# maxNoOfPodsToEvictPerNode: 10
# maxNoOfPodsToEvictPerNamespace: 10
# ignorePvcPods: true
# evictLocalStoragePods: true
strategies: strategies:
RemoveDuplicates: RemoveDuplicates:
enabled: true enabled: true
RemovePodsHavingTooManyRestarts:
enabled: true
params:
podsHavingTooManyRestarts:
podRestartThreshold: 100
includingInitContainers: true
RemovePodsViolatingNodeTaints: RemovePodsViolatingNodeTaints:
enabled: true enabled: true
RemovePodsViolatingNodeAffinity: RemovePodsViolatingNodeAffinity:
@@ -52,6 +82,10 @@ deschedulerPolicy:
- requiredDuringSchedulingIgnoredDuringExecution - requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingInterPodAntiAffinity: RemovePodsViolatingInterPodAntiAffinity:
enabled: true enabled: true
RemovePodsViolatingTopologySpreadConstraint:
enabled: true
params:
includeSoftConstraints: false
LowNodeUtilization: LowNodeUtilization:
enabled: true enabled: true
params: params:
@@ -70,6 +104,25 @@ priorityClassName: system-cluster-critical
nodeSelector: {} nodeSelector: {}
# foo: bar # foo: bar
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/e2e-az-name
# operator: In
# values:
# - e2e-az1
# - e2e-az2
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: app.kubernetes.io/name
# operator: In
# values:
# - descheduler
# topologyKey: "kubernetes.io/hostname"
tolerations: [] tolerations: []
# - key: 'management' # - key: 'management'
# operator: 'Equal' # operator: 'Equal'
@@ -80,13 +133,47 @@ rbac:
# Specifies whether RBAC resources should be created # Specifies whether RBAC resources should be created
create: true create: true
podSecurityPolicy:
# Specifies whether PodSecurityPolicy should be created.
create: true
serviceAccount: serviceAccount:
# Specifies whether a ServiceAccount should be created # Specifies whether a ServiceAccount should be created
create: true create: true
# The name of the ServiceAccount to use. # The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template # If not set and create is true, a name is generated using the fullname template
name: name:
# Specifies custom annotations for the serviceAccount
annotations: {}
podAnnotations: {}
podLabels: {}
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10258
scheme: HTTPS
initialDelaySeconds: 3
periodSeconds: 10
service:
enabled: false
serviceMonitor:
enabled: false
# The namespace where Prometheus expects to find service monitors.
# namespace: ""
interval: ""
# honorLabels: true
insecureSkipVerify: true
serverName: null
metricRelabelings: []
# - action: keep
# regex: 'descheduler_(build_info|pods_evicted)'
# sourceLabels: [__name__]
relabelings: []
# - sourceLabels: [__meta_kubernetes_pod_node_name]
# separator: ;
# regex: ^(.*)$
# targetLabel: nodename
# replacement: $1
# action: replace

View File

@@ -1,13 +1,13 @@
# See https://cloud.google.com/cloud-build/docs/build-config
# this must be specified in seconds. If omitted, defaults to 600s (10 mins)
timeout: 1500s
# this prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
# or any new substitutions added in the future.
options:
  substitution_option: ALLOW_LOOSE
steps:
  - name: 'gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20211118-2f2d816b90'
    entrypoint: make
    env:
    - DOCKER_CLI_EXPERIMENTAL=enabled

View File

@@ -18,13 +18,14 @@ limitations under the License.
package options package options
import ( import (
"github.com/spf13/pflag" "time"
utilerrors "k8s.io/apimachinery/pkg/util/errors" "github.com/spf13/pflag"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
apiserveroptions "k8s.io/apiserver/pkg/server/options" apiserveroptions "k8s.io/apiserver/pkg/server/options"
clientset "k8s.io/client-go/kubernetes" clientset "k8s.io/client-go/kubernetes"
"k8s.io/component-base/logs" componentbaseconfig "k8s.io/component-base/config"
componentbaseoptions "k8s.io/component-base/config/options"
"sigs.k8s.io/descheduler/pkg/apis/componentconfig" "sigs.k8s.io/descheduler/pkg/apis/componentconfig"
"sigs.k8s.io/descheduler/pkg/apis/componentconfig/v1alpha1" "sigs.k8s.io/descheduler/pkg/apis/componentconfig/v1alpha1"
deschedulerscheme "sigs.k8s.io/descheduler/pkg/descheduler/scheme" deschedulerscheme "sigs.k8s.io/descheduler/pkg/descheduler/scheme"
@@ -39,7 +40,7 @@ type DeschedulerServer struct {
componentconfig.DeschedulerConfiguration componentconfig.DeschedulerConfiguration
Client clientset.Interface Client clientset.Interface
Logs *logs.Options EventClient clientset.Interface
SecureServing *apiserveroptions.SecureServingOptionsWithLoopback SecureServing *apiserveroptions.SecureServingOptionsWithLoopback
DisableMetrics bool DisableMetrics bool
} }
@@ -56,20 +57,22 @@ func NewDeschedulerServer() (*DeschedulerServer, error) {
return &DeschedulerServer{ return &DeschedulerServer{
DeschedulerConfiguration: *cfg, DeschedulerConfiguration: *cfg,
Logs: logs.NewOptions(),
SecureServing: secureServing, SecureServing: secureServing,
}, nil }, nil
} }
// Validation checks for DeschedulerServer.
func (s *DeschedulerServer) Validate() error {
var errs []error
errs = append(errs, s.Logs.Validate()...)
return utilerrors.NewAggregate(errs)
}
func newDefaultComponentConfig() (*componentconfig.DeschedulerConfiguration, error) { func newDefaultComponentConfig() (*componentconfig.DeschedulerConfiguration, error) {
versionedCfg := v1alpha1.DeschedulerConfiguration{} versionedCfg := v1alpha1.DeschedulerConfiguration{
LeaderElection: componentbaseconfig.LeaderElectionConfiguration{
LeaderElect: false,
LeaseDuration: metav1.Duration{Duration: 137 * time.Second},
RenewDeadline: metav1.Duration{Duration: 107 * time.Second},
RetryPeriod: metav1.Duration{Duration: 26 * time.Second},
ResourceLock: "leases",
ResourceName: "descheduler",
ResourceNamespace: "kube-system",
},
}
deschedulerscheme.Scheme.Default(&versionedCfg) deschedulerscheme.Scheme.Default(&versionedCfg)
cfg := componentconfig.DeschedulerConfiguration{} cfg := componentconfig.DeschedulerConfiguration{}
if err := deschedulerscheme.Scheme.Convert(&versionedCfg, &cfg, nil); err != nil { if err := deschedulerscheme.Scheme.Convert(&versionedCfg, &cfg, nil); err != nil {
@@ -80,18 +83,14 @@ func newDefaultComponentConfig() (*componentconfig.DeschedulerConfiguration, err
// AddFlags adds flags for a specific SchedulerServer to the specified FlagSet // AddFlags adds flags for a specific SchedulerServer to the specified FlagSet
func (rs *DeschedulerServer) AddFlags(fs *pflag.FlagSet) { func (rs *DeschedulerServer) AddFlags(fs *pflag.FlagSet) {
fs.StringVar(&rs.Logging.Format, "logging-format", "text", `Sets the log format. Permitted formats: "text", "json". Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --skip-headers, --skip-log-headers, --stderrthreshold, --log-flush-frequency.\nNon-default choices are currently alpha and subject to change without warning.`) fs.StringVar(&rs.Logging.Format, "logging-format", "text", `Sets the log format. Permitted formats: "text", "json". Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --skip-headers, --skip-log-headers, --stderrthreshold, --log-flush-frequency.\nNon-default choices are currently alpha and subject to change without warning.`)
fs.DurationVar(&rs.DeschedulingInterval, "descheduling-interval", rs.DeschedulingInterval, "Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified.") fs.DurationVar(&rs.DeschedulingInterval, "descheduling-interval", rs.DeschedulingInterval, "Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified.")
fs.StringVar(&rs.KubeconfigFile, "kubeconfig", rs.KubeconfigFile, "File with kube configuration.") fs.StringVar(&rs.KubeconfigFile, "kubeconfig", rs.KubeconfigFile, "File with kube configuration.")
fs.StringVar(&rs.PolicyConfigFile, "policy-config-file", rs.PolicyConfigFile, "File with descheduler policy configuration.") fs.StringVar(&rs.PolicyConfigFile, "policy-config-file", rs.PolicyConfigFile, "File with descheduler policy configuration.")
fs.BoolVar(&rs.DryRun, "dry-run", rs.DryRun, "execute descheduler in dry run mode.") fs.BoolVar(&rs.DryRun, "dry-run", rs.DryRun, "execute descheduler in dry run mode.")
// node-selector query causes descheduler to run only on nodes that matches the node labels in the query
fs.StringVar(&rs.NodeSelector, "node-selector", rs.NodeSelector, "DEPRECATED: selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)")
// max-no-pods-to-evict limits the maximum number of pods to be evicted per node by descheduler.
fs.IntVar(&rs.MaxNoOfPodsToEvictPerNode, "max-pods-to-evict-per-node", rs.MaxNoOfPodsToEvictPerNode, "DEPRECATED: limits the maximum number of pods to be evicted per node by descheduler")
// evict-local-storage-pods allows eviction of pods that are using local storage. This is false by default.
fs.BoolVar(&rs.EvictLocalStoragePods, "evict-local-storage-pods", rs.EvictLocalStoragePods, "DEPRECATED: enables evicting pods using local storage by descheduler")
fs.BoolVar(&rs.DisableMetrics, "disable-metrics", rs.DisableMetrics, "Disables metrics. The metrics are by default served through https://localhost:10258/metrics. Secure address, resp. port can be changed through --bind-address, resp. --secure-port flags.") fs.BoolVar(&rs.DisableMetrics, "disable-metrics", rs.DisableMetrics, "Disables metrics. The metrics are by default served through https://localhost:10258/metrics. Secure address, resp. port can be changed through --bind-address, resp. --secure-port flags.")
componentbaseoptions.BindLeaderElectionFlags(&rs.LeaderElection, fs)
rs.SecureServing.AddFlags(fs) rs.SecureServing.AddFlags(fs)
} }

View File

@@ -19,8 +19,11 @@ package app
import ( import (
"context" "context"
"flag"
"io" "io"
"os/signal"
"syscall"
"k8s.io/apiserver/pkg/server/healthz"
"sigs.k8s.io/descheduler/cmd/descheduler/app/options" "sigs.k8s.io/descheduler/cmd/descheduler/app/options"
"sigs.k8s.io/descheduler/pkg/descheduler" "sigs.k8s.io/descheduler/pkg/descheduler"
@@ -30,7 +33,8 @@ import (
apiserver "k8s.io/apiserver/pkg/server" apiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/mux" "k8s.io/apiserver/pkg/server/mux"
restclient "k8s.io/client-go/rest" restclient "k8s.io/client-go/rest"
aflag "k8s.io/component-base/cli/flag" registry "k8s.io/component-base/logs/api/v1"
_ "k8s.io/component-base/logs/json/register"
"k8s.io/component-base/metrics/legacyregistry" "k8s.io/component-base/metrics/legacyregistry"
"k8s.io/klog/v2" "k8s.io/klog/v2"
) )
@@ -48,8 +52,7 @@ func NewDeschedulerCommand(out io.Writer) *cobra.Command {
Short: "descheduler", Short: "descheduler",
Long: `The descheduler evicts pods which may be bound to less desired nodes`, Long: `The descheduler evicts pods which may be bound to less desired nodes`,
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
s.Logs.Config.Format = s.Logging.Format // s.Logs.Config.Format = s.Logging.Format
s.Logs.Apply()
// LoopbackClientConfig is a config for a privileged loopback connection // LoopbackClientConfig is a config for a privileged loopback connection
var LoopbackClientConfig *restclient.Config var LoopbackClientConfig *restclient.Config
@@ -58,37 +61,49 @@ func NewDeschedulerCommand(out io.Writer) *cobra.Command {
klog.ErrorS(err, "failed to apply secure server configuration") klog.ErrorS(err, "failed to apply secure server configuration")
return return
} }
var factory registry.LogFormatFactory
if factory == nil {
klog.ClearLogger()
} else {
log, logrFlush := factory.Create(registry.LoggingConfiguration{
Format: s.Logging.Format,
})
if err := s.Validate(); err != nil { defer logrFlush()
klog.ErrorS(err, "failed to validate server configuration") klog.SetLogger(log)
}
ctx, done := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
pathRecorderMux := mux.NewPathRecorderMux("descheduler")
if !s.DisableMetrics {
pathRecorderMux.Handle("/metrics", legacyregistry.HandlerWithReset())
}
healthz.InstallHandler(pathRecorderMux, healthz.NamedCheck("Descheduler", healthz.PingHealthz.Check))
stoppedCh, _, err := SecureServing.Serve(pathRecorderMux, 0, ctx.Done())
if err != nil {
klog.Fatalf("failed to start secure server: %v", err)
return return
} }
if !s.DisableMetrics { err = Run(ctx, s)
ctx := context.TODO()
pathRecorderMux := mux.NewPathRecorderMux("descheduler")
pathRecorderMux.Handle("/metrics", legacyregistry.HandlerWithReset())
if _, err := SecureServing.Serve(pathRecorderMux, 0, ctx.Done()); err != nil {
klog.Fatalf("failed to start secure server: %v", err)
return
}
}
err := Run(s)
if err != nil { if err != nil {
klog.ErrorS(err, "descheduler server") klog.ErrorS(err, "descheduler server")
} }
done()
// wait for metrics server to close
<-stoppedCh
}, },
} }
cmd.SetOut(out) cmd.SetOut(out)
flags := cmd.Flags() flags := cmd.Flags()
flags.SetNormalizeFunc(aflag.WordSepNormalizeFunc)
flags.AddGoFlagSet(flag.CommandLine)
s.AddFlags(flags) s.AddFlags(flags)
return cmd return cmd
} }
func Run(rs *options.DeschedulerServer) error { func Run(ctx context.Context, rs *options.DeschedulerServer) error {
return descheduler.Run(rs) return descheduler.Run(ctx, rs)
} }

View File

@@ -17,22 +17,23 @@ limitations under the License.
package main

import (
	"os"

	"k8s.io/component-base/cli"

	"sigs.k8s.io/descheduler/cmd/descheduler/app"
)

func main() {
	out := os.Stdout
	cmd := app.NewDeschedulerCommand(out)
	cmd.AddCommand(app.NewVersionCommand())
	code := cli.Run(cmd)
	os.Exit(code)
}

View File

@@ -31,7 +31,7 @@ View all CLI options.
## Run Tests
```
GOOS=linux make dev-image
make kind-multi-node
kind load docker-image <image name>
kind get kubeconfig > /tmp/admin.conf
export KUBECONFIG=/tmp/admin.conf
@@ -39,17 +39,31 @@ make test-unit
make test-e2e
```
## Build Helm Package locally
If you made some changes in the chart, and just want to check if templating is ok, or if the chart is buildable, you can run this command to have a package built from the `./charts` directory.
```
make build-helm
```
## Lint Helm Chart locally
To check linting of your changes in the helm chart locally you can run:
```
make lint-helm
```
## Test helm changes locally with kind and ct
You will need kind and docker (or equivalent) installed. We can use the public ct image to avoid installing ct and all its dependencies.
```
make kind-multi-node
make ct-helm
```
### Miscellaneous

16
docs/proposals.md Normal file
View File

@@ -0,0 +1,16 @@
# Proposals
This document walks you through all the enhancement proposals for the descheduler.
## Descheduler v1alpha2 Design Proposal
```yaml
title: Descheduler v1alpha2 Design Proposal
authors:
- "@damemi"
link:
- https://docs.google.com/document/d/1S1JCh-0F-QCJvBBG-kbmXiHAJFF8doArhDIAKbOj93I/edit#heading=h.imbp1ctnc8lx
- https://github.com/kubernetes-sigs/descheduler/issues/679
owning-sig: sig-scheduling
creation-date: 2021-05-01
status: implementable
```

View File

@@ -1,36 +1,82 @@
# Release Guide
The process for publishing each Descheduler release includes a mixture of manual and automatic steps. Over
time, it would be good to automate as much of this process as possible. However, due to current limitations there
is care that must be taken to perform each manual step precisely so that the automated steps execute properly.
## Pre-release Code Changes
Before publishing each release, the following code updates must be made:
- [ ] (Optional, but recommended) Bump `k8s.io` dependencies to the `-rc` tags. These tags are usually published around upstream code freeze. [Example](https://github.com/kubernetes-sigs/descheduler/pull/539)
- [ ] Bump `k8s.io` dependencies to GA tags once they are published (following the upstream release). [Example](https://github.com/kubernetes-sigs/descheduler/pull/615)
- [ ] Ensure that Go is updated to the same version as upstream. [Example](https://github.com/kubernetes-sigs/descheduler/pull/801)
- [ ] Make CI changes in [github.com/kubernetes/test-infra](https://github.com/kubernetes/test-infra) to add the new version's tests (note, this may also include a Go bump). [Example](https://github.com/kubernetes/test-infra/pull/25833)
- [ ] Update local CI versions for utils (such as golang-ci), kind, and go. [Example - e2e](https://github.com/kubernetes-sigs/descheduler/commit/ac4d576df8831c0c399ee8fff1e85469e90b8c44), [Example - helm](https://github.com/kubernetes-sigs/descheduler/pull/821)
- [ ] Update version references in docs and Readme. [Example](https://github.com/kubernetes-sigs/descheduler/pull/617)
## Release Process
When the above pre-release steps are complete and the release is ready to be cut, perform the following steps **in order**
(the flowchart below demonstrates these steps):
**Version release**
1. Create the `git tag` on `master` for the release, eg `v0.24.0`
2. Merge Helm chart version update to `master` (see [Helm chart](#helm-chart) below). [Example](https://github.com/kubernetes-sigs/descheduler/pull/709)
3. Perform the [image promotion process](https://github.com/kubernetes/k8s.io/tree/main/k8s.gcr.io#image-promoter). [Example](https://github.com/kubernetes/k8s.io/pull/3344)
4. Cut release branch from `master`, eg `release-1.24`
5. Publish release using Github's release process from the git tag you created
6. Email `kubernetes-sig-scheduling@googlegroups.com` to announce the release
**Patch release**
1. Pick relevant code change commits to the matching release branch, eg `release-1.24`
2. Create the patch tag on the release branch, eg `v0.24.1` on `release-1.24`
3. Merge Helm chart version update to release branch
4. Perform the image promotion process for the patch version
5. Publish release using Github's release process from the git tag you created
6. Email `kubernetes-sig-scheduling@googlegroups.com` to announce the release
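For reference, the tagging step in both flows boils down to plain git commands; a minimal sketch, using an illustrative version number, is:
```
VERSION=v0.24.0        # or e.g. v0.24.1 on the release-1.24 branch for a patch release
git tag -m $VERSION $VERSION
git push origin $VERSION
```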
### Flowchart
![Flowchart for major and patch releases](release-process.png)
### Image promotion process
Every merge to any branch triggers an [image build and push](https://github.com/kubernetes/test-infra/blob/c36b8e5/config/jobs/image-pushing/k8s-staging-descheduler.yaml) to a `gcr.io` repository.
These automated image builds are snapshots of the code in place at the time of every PR merge and
tagged with the latest git SHA at the time of the build. To create a final release image, the desired
auto-built image SHA is added to a [file upstream](https://github.com/kubernetes/k8s.io/blob/e9e971c/k8s.gcr.io/images/k8s-staging-descheduler/images.yaml) which
copies that image to a public registry.
Automatic builds can be monitored and re-triggered with the [`post-descheduler-push-images` job](https://prow.k8s.io/?job=post-descheduler-push-images) on prow.k8s.io.
Note that images can also be manually built and pushed using `VERSION=$VERSION make push-all` by [users with access](https://github.com/kubernetes/k8s.io/blob/fbee8f67b70304241e613a672c625ad972998ad7/groups/sig-scheduling/groups.yaml#L33-L43).
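As a rough illustration of what the promotion change looks like, an entry in that upstream `images.yaml` maps a staging image digest to the release tags to publish. The digest below is a placeholder, not a real image:
```yaml
- name: descheduler
  dmap:
    "sha256:<digest-of-the-staging-image>": ["v0.25.1"]
```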
## Helm Chart
We currently use the [chart-releaser-action GitHub Action](https://github.com/helm/chart-releaser-action) to automatically
publish [Helm chart releases](https://github.com/kubernetes-sigs/descheduler/blob/022e07c/.github/workflows/release.yaml).
This action is triggered when it detects any changes to [`Chart.yaml`](https://github.com/kubernetes-sigs/descheduler/blob/022e07c27853fade6d1304adc0a6ebe02642386c/charts/descheduler/Chart.yaml) on
a `release-*` branch.
Helm chart releases are managed by a separate set of git tags that are prefixed with `descheduler-helm-chart-*`. Example git tag name is `descheduler-helm-chart-0.18.0`.
Released versions of the helm charts are stored in the `gh-pages` branch of this repo.
The major and minor version of the chart matches the descheduler major and minor versions. For example, descheduler helm chart version `descheduler-helm-chart-0.18.0` corresponds
to descheduler version v0.18.0. The patch version of the descheduler helm chart and the patch version of the descheduler will not necessarily match. The patch
version of the descheduler helm chart is used to version changes specific to the helm chart.
1. Merge all helm chart changes into the master branch before the release is tagged/cut
   1. Ensure that `appVersion` in file `charts/descheduler/Chart.yaml` matches the descheduler version (no `v` prefix)
   2. Ensure that `version` in file `charts/descheduler/Chart.yaml` has been incremented. This is the chart version.
2. Make sure your repo is clean by git's standards
3. Follow the release-branch or patch release tagging pattern from the above section.
4. Verify the new helm artifact has been successfully pushed to the `gh-pages` branch
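An illustrative excerpt of `charts/descheduler/Chart.yaml` after such a bump (the version numbers are examples only, not the required next release) might look like:
```yaml
name: descheduler
version: 0.25.1      # chart version, incremented for every chart change
appVersion: "0.25.1" # descheduler version, without the "v" prefix
```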
## Notes
The Helm releaser-action compares the changes in the action-triggering branch to the latest tag on that branch, so if you tag before creating the new branch there
will be nothing to compare and it will fail. This is why it's necessary to tag, eg, `v0.24.0` *before* making the changes to the
Helm chart version, so that there is a new diff for the action to find. (Tagging *after* making the Helm chart changes would
also work, but then the code that gets built into the promoted image will be tagged as `descheduler-helm-chart-xxx` rather than `v0.xx.0`).
See [post-descheduler-push-images dashboard](https://testgrid.k8s.io/sig-scheduling#post-descheduler-push-images) for staging registry image build job status.
@@ -56,19 +102,3 @@ Pull image from the staging registry.
```
docker pull gcr.io/k8s-staging-descheduler/descheduler:v20200206-0.9.0-94-ge2a23f284
```
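If you want to test that staging image in a local kind cluster, one option, a sketch rather than an excerpt from this guide, is to pull it locally and side-load it:
```
docker pull gcr.io/k8s-staging-descheduler/descheduler:v20200206-0.9.0-94-ge2a23f284
kind load docker-image gcr.io/k8s-staging-descheduler/descheduler:v20200206-0.9.0-94-ge2a23f284
```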
## Helm Chart
Helm chart releases are managed by a separate set of git tags that are prefixed with `descheduler-helm-chart-*`. Example git tag name is `descheduler-helm-chart-0.18.0`.
Released versions of the helm charts are stored in the `gh-pages` branch of this repo. The [chart-releaser-action GitHub Action](https://github.com/helm/chart-releaser-action)
is setup to build and push the helm charts to the `gh-pages` branch when changes are pushed to a `release-*` branch.
The major and minor version of the chart matches the descheduler major and minor versions. For example, descheduler helm chart version `descheduler-helm-chart-0.18.0` corresponds
to descheduler version v0.18.0. The patch version of the descheduler helm chart and the patch version of the descheduler will not necessarily match. The patch
version of the descheduler helm chart is used to version changes specific to the helm chart.
1. Merge all helm chart changes into the master branch before the release is tagged/cut
   1. Ensure that `appVersion` in file `charts/descheduler/Chart.yaml` matches the descheduler version (no `v` prefix)
   2. Ensure that `version` in file `charts/descheduler/Chart.yaml` has been incremented. This is the chart version.
2. Make sure your repo is clean by git's standards
3. Follow the release-branch or patch release tagging pattern from the above section.
4. Verify the new helm artifact has been successfully pushed to the `gh-pages` branch

docs/release-process.png (new binary file, 121 KiB; not shown)

@@ -2,14 +2,19 @@
Starting with descheduler release v0.10.0 container images are available in the official k8s container registry.
Descheduler Version | Container Image                            | Architectures           |
------------------- |--------------------------------------------|-------------------------|
v0.25.1             | k8s.gcr.io/descheduler/descheduler:v0.25.1 | AMD64<br>ARM64<br>ARMv7 |
v0.25.0             | k8s.gcr.io/descheduler/descheduler:v0.25.0 | AMD64<br>ARM64<br>ARMv7 |
v0.24.1             | k8s.gcr.io/descheduler/descheduler:v0.24.1 | AMD64<br>ARM64<br>ARMv7 |
v0.24.0             | k8s.gcr.io/descheduler/descheduler:v0.24.0 | AMD64<br>ARM64<br>ARMv7 |
v0.23.1             | k8s.gcr.io/descheduler/descheduler:v0.23.1 | AMD64<br>ARM64<br>ARMv7 |
v0.22.0             | k8s.gcr.io/descheduler/descheduler:v0.22.0 | AMD64<br>ARM64<br>ARMv7 |
v0.21.0             | k8s.gcr.io/descheduler/descheduler:v0.21.0 | AMD64<br>ARM64<br>ARMv7 |
v0.20.0             | k8s.gcr.io/descheduler/descheduler:v0.20.0 | AMD64<br>ARM64          |
v0.19.0             | k8s.gcr.io/descheduler/descheduler:v0.19.0 | AMD64                   |
v0.18.0             | k8s.gcr.io/descheduler/descheduler:v0.18.0 | AMD64                   |
v0.10.0             | k8s.gcr.io/descheduler/descheduler:v0.10.0 | AMD64                   |
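For example, the most recent release in the table can be pulled directly from the public registry:
```
docker pull k8s.gcr.io/descheduler/descheduler:v0.25.1
```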
Note that multi-arch container images cannot be pulled by [kind](https://kind.sigs.k8s.io) from a registry. Therefore
starting with descheduler release v0.20.0 use the below process to download the official descheduler
@@ -34,31 +39,52 @@ Usage:
descheduler [command]
Available Commands:
completion generate the autocompletion script for the specified shell
help Help about any command
version Version of descheduler
Flags:
--add-dir-header If true, adds the file directory to the header of the log messages (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--alsologtostderr log to standard error as well as files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used. (default 0.0.0.0)
--cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
--descheduling-interval duration Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified.
--disable-metrics Disables metrics. The metrics are by default served through https://localhost:10258/metrics. Secure address, resp. port can be changed through --bind-address, resp. --secure-port flags.
--dry-run execute descheduler in dry run mode.
-h, --help help for descheduler
--http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kubeconfig string File with kube configuration.
--leader-elect Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
--leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "leases")
--leader-elect-resource-name string The name of resource object that is used for locking during leader election. (default "descheduler")
--leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "kube-system")
--leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0) (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log_dir string If non-empty, write log files in this directory (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log_file string If non-empty, use this log file (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logging-format string Sets the log format. Permitted formats: "text", "json". Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --skip-headers, --skip-log-headers, --stderrthreshold, --log-flush-frequency.\nNon-default choices are currently alpha and subject to change without warning. (default "text")
--logtostderr log to standard error instead of files (default true) (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--one-output If true, only write logs to their native severity level (vs also writing to each lower severity level) (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
--permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
--policy-config-file string File with descheduler policy configuration.
--secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 10258)
--skip-headers If true, avoid header prefixes in the log messages (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--skip-log-headers If true, avoid headers when opening log files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2) (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Use "descheduler [command] --help" for more information about a command. Use "descheduler [command] --help" for more information about a command.
``` ```
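As an illustration of the new serving and leader-election flags (this invocation is an example, not taken from the docs), a continuously running, highly available descheduler could be started with:
```
descheduler --policy-config-file=/policy-dir/policy.yaml \
  --descheduling-interval=5m \
  --leader-elect=true \
  --leader-elect-resource-namespace=kube-system
```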
@@ -89,7 +115,8 @@ strategies:
"PodLifeTime": "PodLifeTime":
enabled: true enabled: true
params: params:
maxPodLifeTimeSeconds: 604800 # pods run for a maximum of 7 days podLifeTime:
maxPodLifeTimeSeconds: 604800 # pods run for a maximum of 7 days
``` ```
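For context, the updated strategy nests under the usual policy file layout; a minimal complete example, with the surrounding fields assumed from the standard v1alpha1 policy format, is:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 604800 # 7 days
```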
### Balance Cluster By Node Memory Utilization
@@ -117,7 +144,7 @@ strategies:
#### Balance low utilization nodes
Using `HighNodeUtilization`, descheduler will rebalance the cluster based on memory by evicting pods
from nodes with memory utilization lower than 20%. This should be used with the kube-scheduler's `NodeResourcesFit` plugin and its `MostAllocated` scoring strategy (see the [scheduling plugins docs](https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins)).
The evicted pods will be compacted into a minimal set of nodes.
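A kube-scheduler configuration enabling that scoring strategy might look roughly like the sketch below; the API version and resource weights are assumptions, so check the scheduler docs linked above for your cluster version:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: MostAllocated
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
```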
```
@@ -136,7 +163,14 @@ strategies:
Descheduler's `RemovePodsViolatingNodeTaints` strategy can be combined with
[Node Problem Detector](https://github.com/kubernetes/node-problem-detector/) and
[Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) to automatically remove
Nodes which have problems. Node Problem Detector can detect specific Node problems and report them to the API server.
There is a node controller feature called TaintNodesByCondition that takes some conditions and turns them into taints. Currently, this only works for the default node conditions: PIDPressure, MemoryPressure, DiskPressure, Ready, and some cloud provider specific conditions.
The Descheduler will then deschedule workloads from those Nodes. Finally, if the descheduled Node's resource
allocation falls below the Cluster Autoscaler's scale down threshold, the Node will become a scale down candidate
and can be removed by Cluster Autoscaler. These three components form an autohealing cycle for Node problems.
---
**NOTE**
Once [kubernetes/node-problem-detector#565](https://github.com/kubernetes/node-problem-detector/pull/565) is available in NPD, we need to update this section.
---
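To enable the descheduler side of this cycle, the strategy itself needs no extra parameters; a minimal policy sketch is:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
```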


@@ -11,4 +11,4 @@ strategies:
      includingInitContainers: true
      excludeOwnerKinds:
      - "Job"
      minPodLifetimeSeconds: 3600 # 1 hour
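These parameters match the `RemoveFailedPods` strategy; assuming that is the strategy this example file configures, the full snippet would look roughly like:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveFailedPods":
    enabled: true
    params:
      failedPods:
        includingInitContainers: true
        excludeOwnerKinds:
        - "Job"
        minPodLifetimeSeconds: 3600 # 1 hour
```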


@@ -6,3 +6,6 @@ strategies:
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 604800 # 7 days
        states:
        - "Pending"
        - "PodInitializing"


@@ -4,4 +4,5 @@ strategies:
"RemovePodsViolatingTopologySpreadConstraint": "RemovePodsViolatingTopologySpreadConstraint":
enabled: true enabled: true
params: params:
nodeFit: true
includeSoftConstraints: true # Include 'ScheduleAnyways' constraints includeSoftConstraints: true # Include 'ScheduleAnyways' constraints
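Put together with the standard policy wrapper (assumed here, since the hunk only shows the changed lines), the example reads:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      nodeFit: true
      includeSoftConstraints: true # Include 'ScheduleAnyways' constraints
```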

go.mod

@@ -1,19 +1,115 @@
module sigs.k8s.io/descheduler
go 1.19
require (
github.com/client9/misspell v0.3.4
github.com/spf13/cobra v1.4.0
github.com/spf13/pflag v1.0.5
k8s.io/api v0.25.0
k8s.io/apimachinery v0.25.0
k8s.io/apiserver v0.25.0
k8s.io/client-go v0.25.0
k8s.io/code-generator v0.25.0
k8s.io/component-base v0.25.0
k8s.io/component-helpers v0.25.0
k8s.io/klog/v2 v2.70.1
k8s.io/utils v0.0.0-20220823124924-e9cbc92d1a73
sigs.k8s.io/mdtoc v1.0.1
)
require (
cloud.google.com/go v0.97.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.27 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.20 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/BurntSushi/toml v0.3.1 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/felixge/httpsnoop v1.0.1 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/zapr v1.2.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/gomarkdown/markdown v0.0.0-20200824053859-8c8b3816f167 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/imdario/mergo v0.3.6 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mmarkdown/mmark v2.0.40+incompatible // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.12.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
go.etcd.io/etcd/api/v3 v3.5.4 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.4 // indirect
go.etcd.io/etcd/client/v3 v3.5.4 // indirect
go.opentelemetry.io/contrib v0.20.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0 // indirect
go.opentelemetry.io/otel v0.20.0 // indirect
go.opentelemetry.io/otel/exporters/otlp v0.20.0 // indirect
go.opentelemetry.io/otel/metric v0.20.0 // indirect
go.opentelemetry.io/otel/sdk v0.20.0 // indirect
go.opentelemetry.io/otel/sdk/export/metric v0.20.0 // indirect
go.opentelemetry.io/otel/sdk/metric v0.20.0 // indirect
go.opentelemetry.io/otel/trace v0.20.0 // indirect
go.opentelemetry.io/proto/otlp v0.7.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.19.0 // indirect
golang.org/x/crypto v0.0.0-20220518034528-6f7dac969898 // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/net v0.0.0-20220722155237-a158d28d115b // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 // indirect
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
golang.org/x/tools v0.1.12 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21 // indirect
google.golang.org/grpc v1.47.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185 // indirect
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.32 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
sigs.k8s.io/yaml v1.2.0 // indirect
)

go.sum (diff suppressed because it is too large)


@@ -1,3 +1,4 @@
//go:build tools
// +build tools
/*


@@ -5,6 +5,6 @@ go build -o "${OS_OUTPUT_BINPATH}/deepcopy-gen" "k8s.io/code-generator/cmd/deepc
${OS_OUTPUT_BINPATH}/deepcopy-gen \
  --go-header-file "hack/boilerplate/boilerplate.go.txt" \
  --input-dirs "${PRJ_PREFIX}/pkg/apis/componentconfig,${PRJ_PREFIX}/pkg/apis/componentconfig/v1alpha1,${PRJ_PREFIX}/pkg/api,${PRJ_PREFIX}/pkg/api/v1alpha1,${PRJ_PREFIX}/pkg/framework/plugins/defaultevictor/" \
  --output-file-base zz_generated.deepcopy


@@ -23,7 +23,7 @@ DESCHEDULER_ROOT=$(dirname "${BASH_SOURCE}")/..
GO_VERSION=($(go version))
if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.17|go1.18|go1.19') ]]; then
  echo "Unknown go version '${GO_VERSION[2]}', skipping gofmt."
  exit 1
fi

hack/verify-chart.sh (new executable file)
@@ -0,0 +1 @@
${CONTAINER_ENGINE:-docker} run -it --rm --network host --workdir=/data --volume ~/.kube/config:/root/.kube/config:ro --volume $(pwd):/data quay.io/helmpack/chart-testing:v3.7.0 /bin/bash -c "git config --global --add safe.directory /data; ct install --config=.github/ci/ct.yaml --helm-extra-set-args=\"--set=kind=Deployment\""


@@ -20,7 +20,7 @@ go build -o "${OS_OUTPUT_BINPATH}/deepcopy-gen" "k8s.io/code-generator/cmd/deepc
${OS_OUTPUT_BINPATH}/deepcopy-gen \
  --go-header-file "hack/boilerplate/boilerplate.go.txt" \
  --input-dirs "./pkg/apis/componentconfig,./pkg/apis/componentconfig/v1alpha1,./pkg/api,./pkg/api/v1alpha1,./pkg/framework/plugins/defaultevictor/" \
  --output-file-base zz_generated.deepcopy
popd > /dev/null 2>&1


@@ -23,7 +23,7 @@ DESCHEDULER_ROOT=$(dirname "${BASH_SOURCE}")/..
GO_VERSION=($(go version))
if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.17|go1.18|go1.19') ]]; then
  echo "Unknown go version '${GO_VERSION[2]}', skipping gofmt."
  exit 1
fi


@@ -4,7 +4,7 @@ apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: descheduler-cluster-role
rules:
- apiGroups: ["events.k8s.io"]
  resources: ["events"]
  verbs: ["create", "update"]
- apiGroups: [""]
@@ -12,7 +12,7 @@ rules:
verbs: ["get", "watch", "list"] verbs: ["get", "watch", "list"]
- apiGroups: [""] - apiGroups: [""]
resources: ["namespaces"] resources: ["namespaces"]
verbs: ["get", "list"] verbs: ["get", "watch", "list"]
- apiGroups: [""] - apiGroups: [""]
resources: ["pods"] resources: ["pods"]
verbs: ["get", "watch", "list", "delete"] verbs: ["get", "watch", "list", "delete"]
@@ -22,6 +22,13 @@ rules:
- apiGroups: ["scheduling.k8s.io"] - apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"] resources: ["priorityclasses"]
verbs: ["get", "watch", "list"] verbs: ["get", "watch", "list"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
resourceNames: ["descheduler"]
verbs: ["get", "patch", "delete"]
---
apiVersion: v1
kind: ServiceAccount
@@ -41,4 +48,3 @@ subjects:
- name: descheduler-sa
  kind: ServiceAccount
  namespace: kube-system
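The new `leases` rules appear to exist to support the `--leader-elect` flags documented in the user guide; a hypothetical container spec using them (the image tag and policy path are illustrative, not taken from this manifest) could look like:
```yaml
containers:
- name: descheduler
  image: k8s.gcr.io/descheduler/descheduler:v0.25.1
  command:
  - /bin/descheduler
  args:
  - --policy-config-file=/policy-dir/policy.yaml
  - --descheduling-interval=5m
  - --leader-elect=true
```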


@@ -1,5 +1,5 @@
---
apiVersion: batch/v1
kind: CronJob
metadata: