mirror of https://github.com/kubernetes-sigs/descheduler.git synced 2026-01-26 05:14:13 +01:00

Compare commits

289 Commits

Author SHA1 Message Date
Mike Dame
bc09f71977 bump chart to 0.1.0 2020-07-21 16:38:45 -04:00
Kubernetes Prow Robot
0894f7740c Merge pull request #343 from ingvagabund/test-cleanup
Clean e2e test so it's easier to extend it
2020-07-21 05:21:13 -07:00
Kubernetes Prow Robot
c3f07dc366 Merge pull request #351 from KohlsTechnology/helm-release-docs
Update Release Documentation For Helm Charts
2020-07-20 20:33:14 -07:00
Sean Malloy
ca8f1051eb Update Helm Chart Version To 0.18.0 2020-07-20 12:52:38 -05:00
Sean Malloy
c7692a2e9f Update Release Documentation For Helm Charts
The descheduler now has an official helm chart. This change documents the
process to release a new helm chart artifact.
2020-07-20 12:51:50 -05:00
Jan Chaloupka
53badf7b61 e2e: create dedicated ns for each e2e test 2020-07-17 18:15:53 +02:00
Jan Chaloupka
f801c5f72f e2e: code cleanup 2020-07-17 18:15:13 +02:00
Jan Chaloupka
327880ba51 e2e: separate tests for LowNodeUtilization strategy and evict annotation 2020-07-17 18:11:01 +02:00
Jan Chaloupka
7bb8b4feda e2e: create and delete RC in the same function body
So create/delete are coupled and caller of create is responsible for calling delete as well
2020-07-17 18:06:19 +02:00
Kubernetes Prow Robot
36b1e1f061 Merge pull request #298 from stevehipwell/helm-chart
Add Helm chart
2020-07-16 16:15:00 -07:00
Steve Hipwell
4507a90bb6 Add helm chart 2020-07-16 17:36:17 +01:00
Kubernetes Prow Robot
550f68306c Merge pull request #342 from KohlsTechnology/remove-travis-ci-config
Remove Travis CI Configuration
2020-07-16 07:05:41 -07:00
Sean Malloy
2b668566ce Remove Travis CI Configuration
The e2e tests are now being run through Prow. Therefore the Travis CI
configuration can now be completely removed.
2020-07-15 23:52:27 -05:00
Kubernetes Prow Robot
ee414ea366 Merge pull request #344 from ingvagabund/kind-have-version-configurable-through-env
e2e: have k8s version configurable when creating cluster through kind
2020-07-14 06:47:21 -07:00
Jan Chaloupka
5761b5d595 e2e: have k8s version configurable when creating cluster through kind 2020-07-14 15:25:02 +02:00
Kubernetes Prow Robot
07f476dfc4 Merge pull request #340 from damemi/update-e2e
Update e2e script to use Kind setup
2020-07-13 19:57:20 -07:00
Mike Dame
f5e9f07321 Move kind setup to e2e script
This moves the kind setup (previously used by Travis) to the e2e runner script
to accommodate the switch to Prow. This provides a KIND_E2E env var to specify
whether to run the tests in kind or (by default) locally.
2020-07-13 15:49:41 -04:00
Kubernetes Prow Robot
05c69ee26a Merge pull request #336 from lixiang233/avoid_duplicated_append_RemoveDuplicatePods
avoid appending list multiple times in RemoveDuplicates
2020-07-09 01:32:02 -07:00
Kubernetes Prow Robot
1623e09122 Merge pull request #332 from lixiang233/fix_test_struct_lownodeutilization
Add maxPodsToEvictPerNode to LowNodeUtilization testcase struct
2020-07-09 01:22:02 -07:00
lixiang
696aa7c505 rename maxPodsToEvict to maxPodsToEvictPerNode to avoid misunderstanding 2020-07-08 15:15:48 +08:00
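For reference, the per-node eviction cap this renamed test field mirrors is set at the top level of the policy file. A minimal sketch, assuming the v1alpha1 field name maxNoOfPodsToEvictPerNode:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
# Caps evictions per node across all strategies (field name assumed from v1alpha1).
maxNoOfPodsToEvictPerNode: 10
strategies:
  "RemoveDuplicates":
    enabled: true
```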
lixiang
c53dce0805 Add maxPodsToEvictPerNode to testcase struct 2020-07-08 15:15:38 +08:00
lixiang
cd8b5a0354 avoid appending list multiple times in RemoveDuplicates 2020-07-07 14:42:53 +08:00
Kubernetes Prow Robot
b51f24eb8e Merge pull request #333 from lixiang233/refactor_lowNodeUtilization_func
Support only one sorting strategy in lowNodeUtilization
2020-07-02 22:48:47 -07:00
lixiang
ae3b4368ee support only one sorting strategy in lowNodeUtilization 2020-07-01 09:47:35 +08:00
Kubernetes Prow Robot
61eef93618 Merge pull request #330 from paulfantom/patch-1
examples: fix typo
2020-06-29 07:27:24 -07:00
Paweł Krupa
fa0a2ec6fe examples: fix typo
Fix an incorrect example that caused the following runtime error:

PodsHavingTooManyRestarts thresholds not set
2020-06-28 14:28:41 +02:00
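For context, a policy sketch that sets the thresholds this error complains about, assuming the v1alpha1 parameter names podRestartThreshold and includingInitContainers:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsHavingTooManyRestarts":
    enabled: true
    params:
      podsHavingTooManyRestarts:
        podRestartThreshold: 100        # evict pods restarted at least this many times (assumed field name)
        includingInitContainers: true   # count init container restarts too (assumed field name)
```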
Kubernetes Prow Robot
7ece10a643 Merge pull request #328 from KohlsTechnology/go114
Update To Go 1.14.4
2020-06-25 08:26:39 -07:00
Sean Malloy
71c8eae47e Update To Go 1.14.4
The kubernetes project has been updated to use Go 1.14.4. See below pull
request.

https://github.com/kubernetes/kubernetes/pull/88638

After making the updates to Go 1.14 "make gen" no longer worked. The
file hack/tools.go had to be created to get "make gen" working with Go
1.14.
2020-06-23 21:33:28 -05:00
Kubernetes Prow Robot
267b0837dc Merge pull request #327 from farah/farah/klog-migration
Update klog to v2
2020-06-23 08:35:40 -07:00
Kubernetes Prow Robot
c713537d56 Merge pull request #322 from lixiang233/move_pod_sort
Move sortPodsBasedOnPriority to pod util
2020-06-22 12:21:40 -07:00
Kubernetes Prow Robot
e374229707 Merge pull request #325 from KohlsTechnology/issue-template
Add initial GitHub issue templates
2020-06-22 07:42:40 -07:00
Sean Malloy
f834581a8e Fix Spelling Error 2020-06-22 09:12:22 -05:00
Ali Farah
15fcde5229 Update klog to v2 2020-06-22 20:43:22 +10:00
Sean Malloy
96c5dd3941 Add initial GitHub issue templates
Initially users can open a bug report, feature request, or misc request.
Bug reports and feature requests will have the "kind" label
automatically added.
2020-06-20 00:52:23 -05:00
lixiang
65a03e76bf Add sorting to RemovePodsViolatingInterPodAntiAffinity 2020-06-19 14:11:23 +08:00
lixiang
43525f6493 Move sortPodsBasedOnPriority to podutil 2020-06-16 19:19:03 +08:00
lixiang
7457626f62 Move helper funcs to testutil 2020-06-16 19:17:28 +08:00
Kubernetes Prow Robot
08c22e8921 Merge pull request #321 from KohlsTechnology/add-event-reason
Add Pod Eviction Reason To Events
2020-06-15 08:53:57 -07:00
Sean Malloy
4ff533ec17 Add pod eviction reason to k8s events
Prior to this change the event created for every pod eviction was
identical. Instead leverage the newly added eviction reason when
creating k8s events. This makes it easier for end users to understand
why the descheduler evicted a pod when inspecting k8s events.
2020-06-10 01:13:17 -05:00
Sean Malloy
7680e3d079 Use var declaration instead of short assignment
This is a very minor refactor to use a var declaration for the reason
variable. A var declaration is being used because the zero value for
strings is an empty string.

https://golang.org/ref/spec#The_zero_value
2020-06-10 01:05:05 -05:00
Kubernetes Prow Robot
305801dd0e Merge pull request #319 from damemi/remove-extra-eviction-log
Remove redundant eviction log message and add "Reason" param to EvictPod
2020-06-09 11:25:19 -07:00
Mike Dame
9951b85d60 Add optional reason parameter to EvictPod 2020-06-09 13:35:51 -04:00
Mike Dame
ff21ec9432 Remove redundant eviction log message 2020-06-09 12:17:45 -04:00
Kubernetes Prow Robot
5e15d77bf2 Merge pull request #309 from damemi/change-evict-local-storage-pods
Make evictLocalStoragePods a property of PodEvictor
2020-06-08 03:53:46 -07:00
Kubernetes Prow Robot
d833c73fc4 Merge pull request #317 from ingvagabund/extend-owners
Add myself to project reviewers
2020-06-05 10:45:44 -07:00
Kubernetes Prow Robot
795a80dfb0 Merge pull request #310 from lixiang233/ignore_no_evictable_pod_node
Skip evicting when no evictable pod found on node
2020-06-05 07:41:44 -07:00
Mike Dame
f5e4acdd8a Make evictLocalStoragePods a property of PodEvictor
This is a minor refactor of PodEvictor to include evictLocalStoragePods as a
property (so that it doesn't need to be passed to each strategy function). It
also removes ListEvictablePodsOnNode and extends ListPodsOnANode with an optional
"filter" function parameter, so that for example IsEvictable can be passed to it
and achieve the same results as the function formerly known as ListEvictablePodsOnNode.
This approach also now allows strategies to more explicitly extend their criteria of what
IsEvictable considers an evictable pod (such as NodeAffinity, which checks that the pod
can fit on any other node).
2020-06-05 10:28:48 -04:00
Jan Chaloupka
eb4c1bb355 Add myself to project reviewers
Based on https://github.com/kubernetes/community/blob/master/community-membership.md#requirements-1:

The following apply to the part of codebase for which one would be a reviewer in an OWNERS file (for repos using the bot).

> member for at least 3 months

For a couple of years now

> Primary reviewer for at least 5 PRs to the codebase

https://github.com/kubernetes-sigs/descheduler/pull/285
https://github.com/kubernetes-sigs/descheduler/pull/275
https://github.com/kubernetes-sigs/descheduler/pull/267
https://github.com/kubernetes-sigs/descheduler/pull/254
https://github.com/kubernetes-sigs/descheduler/pull/181

> Reviewed or merged at least 20 substantial PRs to the codebase

https://github.com/kubernetes-sigs/descheduler/pulls?q=is%3Apr+is%3Aclosed+assignee%3Aingvagabund

> Knowledgeable about the codebase

yes

> Sponsored by a subproject approver
> With no objections from other approvers
> Done through PR to update the OWNERS file

this PR

> May either self-nominate, be nominated by an approver in this subproject, or be nominated by a robot

self-nominating
2020-06-04 18:16:38 +02:00
Kubernetes Prow Robot
6dfa95cc87 Merge pull request #312 from lixiang233/fix_lowUtilization_golint
Pass golint check for pkg/descheduler/strategies/
2020-06-04 07:09:15 -07:00
lixiang
eb9d974a8b pass golint check for pkg/descheduler/strategies/ 2020-06-04 14:33:24 +08:00
lixiang
d41a1f4a56 Ignore evicting when no evictable pod found on node 2020-06-04 10:31:15 +08:00
Kubernetes Prow Robot
138ad556a3 Merge pull request #315 from damemi/update-travis-yml
Remove redundant tests from Travis CI
2020-06-03 11:44:11 -07:00
Mike Dame
37124e6e45 Remove redundant tests from Travis CI
These tests are now covered by k8s test-infra prow tests.
2020-06-03 14:11:08 -04:00
Kubernetes Prow Robot
bd412bf87f Merge pull request #300 from damemi/refactor-is-evictable
Add more verbose logging to IsEvictable checks
2020-05-29 10:03:16 -07:00
Kubernetes Prow Robot
6f9b31f568 Merge pull request #308 from damemi/fix-lint-for-ci
Download golangci-lint to _output/bin directory
2020-05-29 09:57:15 -07:00
Mike Dame
c858740c4f Run golangci-lint from GOPATH 2020-05-29 12:48:42 -04:00
Kubernetes Prow Robot
bfefe634a1 Merge pull request #307 from damemi/make-verify
Add parent make verify target
2020-05-29 08:01:16 -07:00
Mike Dame
7b7b9e1cd7 Add parent make verify target 2020-05-29 10:23:38 -04:00
Mike Dame
0a4b8b0a25 Add more verbose logging to IsEvictable checks 2020-05-29 10:20:14 -04:00
Kubernetes Prow Robot
f28183dcbe Merge pull request #306 from ingvagabund/make-Params-a-pointer-II
Add missing address of api.StrategyParameters
2020-05-29 05:53:16 -07:00
Jan Chaloupka
abdf79454f Add missing address of api.StrategyParameters
https://github.com/kubernetes-sigs/descheduler/pull/285 got merged before re-running the unit tests.
2020-05-29 13:34:57 +02:00
Kubernetes Prow Robot
46b570b71d Merge pull request #296 from ingvagabund/make-Params-a-pointer
Make DeschedulerStrategy.Params a pointer
2020-05-28 22:53:15 -07:00
Kubernetes Prow Robot
616a9b5f6b Merge pull request #285 from lixiang233/Ft_change_lowUtilization_break
Change break condition and thresholds validation for lowUtilization
2020-05-28 07:36:02 -07:00
Kubernetes Prow Robot
003a4cdc2b Merge pull request #302 from lixiang233/clean_up_strategy_enable_check
Clean up code in LowNodeUtilization
2020-05-28 07:16:02 -07:00
Kubernetes Prow Robot
54ea05d8bb Merge pull request #299 from damemi/node-affinity-logs
Standardize node affinity strategy logs
2020-05-28 01:52:02 -07:00
lixiang
d0eea0cabb Clean up code in LowNodeUtilization 2020-05-28 10:35:46 +08:00
Mike Dame
f0297dfe03 Standardize node affinity strategy logs 2020-05-27 16:52:16 -04:00
Kubernetes Prow Robot
ef1f36f8e4 Merge pull request #297 from ingvagabund/add-verify-gofmt
Add verify-gofmt make target
2020-05-27 07:02:40 -07:00
Jan Chaloupka
a733c95dcc Add verify-gofmt and verify-vendor make target
So e.g. CI can run 'make verify-gofmt' instead of ./hack/verify-gofmt in case the script location changes.
2020-05-27 15:08:20 +02:00
Jan Chaloupka
5a81a0661b Make DeschedulerStrategy.Params a pointer
To avoid empty params fields when serializing:
  RemoveDuplicates:
    enabled: true
    params: {}
  RemovePodsHavingTooManyRestarts:
    enabled: true
    params:
      podsHavingTooManyRestarts: {}
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
    params: {}
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
  RemovePodsViolatingNodeTaints:
    enabled: true
    params: {}
2020-05-27 10:31:31 +02:00
Kubernetes Prow Robot
83e04960af Merge pull request #288 from KohlsTechnology/update-support-matrix
Update Compatibility Matrix in README
2020-05-26 06:37:12 -07:00
lixiang
2a8dc69cbb Stop condition and config validation change for lowUtilization
1. Set default CPU/Mem/Pods threshold percentages to 100
2. Stop evicting pods if any resource runs out
3. Add a thresholds verification method and limit resource percentages to within [0, 100]
4. Update test cases and README
2020-05-25 19:03:37 +08:00
Kubernetes Prow Robot
5d82d08af3 Merge pull request #291 from KohlsTechnology/update-releasedocs
Add URL For Viewing Staging Registry
2020-05-22 09:55:11 -07:00
Sean Malloy
c265825166 Add URL For Viewing Staging Registry
This newly documented URL can be used to view the descheduler staging
registry in a web browser. This is easier to browse if the gcloud
command is not available.
2020-05-22 11:33:45 -05:00
Kubernetes Prow Robot
ed0126fb63 Merge pull request #290 from KohlsTechnology/bump-install-manifests
Update Install Manifests For Release v0.18.0
2020-05-22 07:08:38 -07:00
Sean Malloy
8c7267b379 Update Install Manifests For Release v0.18.0 2020-05-21 22:54:54 -05:00
Kubernetes Prow Robot
d85ce22975 Merge pull request #289 from KohlsTechnology/update-release-docs
Update Release Guide
2020-05-20 11:46:19 -07:00
Kubernetes Prow Robot
a9091a1e37 Merge pull request #287 from ingvagabund/evict-pod-mention-dry-run-if-set
Mention pod eviction is in dry mode when set
2020-05-20 11:44:19 -07:00
Sean Malloy
d3b0ac8e06 Update Release Guide
Starting with descheduler v0.18 a release branch will be created for
each descheduler minor release. The release guide has been updated with
the steps to create release branches.
2020-05-20 11:10:42 -05:00
Sean Malloy
435674fb44 Update Compatibility Matrix in README
The matrix has been updated with the soon to be released v0.18
details. Also, clarified the descheduler and k8s version compatibility
requirements and recommendations.
2020-05-20 10:55:57 -05:00
Jan Chaloupka
eacbae72fd Mention pod eviction is in dry mode when set
Indicate whether a pod is only virtually evicted so there's no confusion
with the real eviction.
2020-05-20 10:40:58 +02:00
Kubernetes Prow Robot
34550d4b7c Merge pull request #275 from damemi/image-in-duplicates
Consider pod image in duplicates strategy
2020-05-15 06:51:38 -07:00
Mike Dame
c6ff87dbd6 Consider pod image in duplicates strategy
This increases the specificity of the RemoveDuplicates strategy by
removing pods that not only have the same owner but also
the same list of container images. This also adds a
parameter, `ExcludeOwnerKinds` to the RemoveDuplicates strategy
which accepts a list of Kinds. If a pod has any of these Kinds as
an owner, that pod is not considered for eviction.
2020-05-15 09:37:09 -04:00
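A policy sketch of the new parameter, assuming it is nested under a removeDuplicates params key as in later v1alpha1 policies:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
    params:
      removeDuplicates:
        excludeOwnerKinds:
          - "ReplicaSet"   # pods with a ReplicaSet owner are never considered for eviction
```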
Kubernetes Prow Robot
04efe65f90 Merge pull request #279 from KohlsTechnology/eviction-logs
Add Additional Details To Pod Eviction Log Messages
2020-05-14 07:20:22 -07:00
Sean Malloy
55afde6251 Remove line break in log message from PodLifeTime strategy 2020-05-14 00:36:52 -05:00
Sean Malloy
7039b6c8aa Refactor function EvictPod to only return an error
The EvictPod function previously returned a bool and an error. The
function now only returns an error. Callers can check for failure by
testing if the returned error is not nil. This aligns the EvictPod
function with idiomatic Go best practices.

Also, the function name has been changed from EvictPod to evictPod
because it is not used outside the evictions package.
2020-05-14 00:11:54 -05:00
Sean Malloy
cff984261e Log an error when EvictPod method returns a non-nil error
End users should be able to see the detailed error from the EvictPod
method when it fails. Updates all strategies to log the error. The
PodLifeTime strategy already logs this error.
2020-05-13 23:36:32 -05:00
Sean Malloy
4819ab9c69 Add namespace to pod eviction log messages
In multi-tenant environments it is useful to know which namespace a pod
was evicted from. Therefore log the namespace when evicting pods.

Also, do not log a nil error when successfully evicting pods.
2020-05-13 23:28:25 -05:00
Kubernetes Prow Robot
25336da708 Merge pull request #281 from KohlsTechnology/update-travis-ci
Update Travis CI build matrix with latest k8s point releases
2020-05-13 08:20:26 -07:00
Sean Malloy
4941f6a16b Update Travis CI build matrix with latest k8s point releases 2020-05-12 22:38:23 -05:00
Kubernetes Prow Robot
d7e93058d4 Merge pull request #280 from damemi/k8s-1.18
Update to k8s 1.18.2 dependencies
2020-05-12 13:40:21 -07:00
Mike Dame
c20a595370 Update travis.yml with new k8s version and updated kind version 2020-05-12 15:01:25 -04:00
Mike Dame
eec1104d6e React to 1.18 by adding contexts to client calls 2020-05-12 15:01:25 -04:00
Mike Dame
741b35edf5 Update to k8s 1.18.2 dependencies 2020-05-12 14:07:31 -04:00
Kubernetes Prow Robot
c01cfcf3b6 Merge pull request #274 from KohlsTechnology/pod-lifetime-strategy
Add New PodLifeTime Strategy
2020-05-08 07:29:42 -07:00
Sean Malloy
643cd472ef Add initial production use cases section to user guide 2020-05-08 09:02:24 -05:00
Sean Malloy
668d727fc2 Add VS Code File To .gitignore 2020-05-07 23:10:48 -05:00
Sean Malloy
423ee35846 Add New PodLifeTime Strategy
The new PodLifeTime descheduler strategy can be used to evict pods that
were created more than the configured number of seconds ago.

In the below example pods created more than 24 hours ago will be evicted.
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
     enabled: true
     params:
        maxPodLifeTimeSeconds: 86400
````
2020-05-07 23:10:36 -05:00
Kubernetes Prow Robot
31c7855212 Merge pull request #278 from lixiang233/fix_readme
Fix readme
2020-05-07 12:19:43 -07:00
Kubernetes Prow Robot
211f3942b6 Merge pull request #276 from damemi/toomanyrestarts-pointer
Switch PodsHavingTooManyRestarts params to pointer
2020-05-07 12:17:42 -07:00
Kubernetes Prow Robot
beae282735 Merge pull request #277 from damemi/damemi-approvers
Add damemi to approvers
2020-05-07 12:15:42 -07:00
lixiang
635348efb9 Fix readme 2020-05-07 11:47:32 +08:00
Mike Dame
fa335c782f Switch PodsHavingTooManyRestarts params to pointer 2020-05-06 13:29:05 -04:00
Mike Dame
b019a58525 Add damemi to approvers 2020-05-06 11:00:58 -04:00
Kubernetes Prow Robot
78eef6c343 Merge pull request #267 from ingvagabund/drop-DeschedulerServer-from-strategies
Drop descheduler server from strategies
2020-04-28 08:34:06 -07:00
Jan Chaloupka
cbcefb5d2f Remove options.DeschedulerServer from all strategies 2020-04-28 16:13:33 +02:00
Kubernetes Prow Robot
149085fb57 Merge pull request #240 from ingvagabund/turn-strategy-params-NodeResourceUtilizationThresholds-field-into-pointer
Turn StrategyParameters.NodeResourceUtilizationThresholds field into a pointer
2020-04-28 06:48:07 -07:00
Jan Chaloupka
991eddb691 Turn StrategyParameters.NodeResourceUtilizationThresholds field into a pointer
The field is intended to be omitempty when not set. Without a pointer, the strategy
serialized into a YAML string looks like:

```yaml
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        numberOfNodes: 1
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 20
        thresholds:
          cpu: 50
          memory: 50
          pods: 20
  RemoveDuplicates:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingNodeTaints:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
```

It's preferred to have the following YAML string instead:
```yaml
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        numberOfNodes: 1
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 20
        thresholds:
          cpu: 50
          memory: 50
          pods: 20
  RemoveDuplicates:
    enabled: true
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
  RemovePodsViolatingNodeTaints:
    enabled: true
```
2020-04-28 15:20:30 +02:00
Kubernetes Prow Robot
91de471376 Merge pull request #254 from damemi/toomanyrestarts
Add RemoveTooManyRestarts policy
2020-04-27 12:48:05 -07:00
Mike Dame
c2d7e22749 make gen && go mod vendor 2020-04-24 10:48:28 -04:00
Mike Dame
e7c42794a0 Add RemovePodsHavingTooManyRestarts strategy 2020-04-24 10:48:28 -04:00
Kubernetes Prow Robot
3a8dfc07ed Merge pull request #266 from ingvagabund/pod-evictor
Move maximum-pods-per-nodes-evicted logic under a single invocation
2020-04-23 16:14:07 -07:00
Jan Chaloupka
077b7f6505 Drop check for MaxNoOfPodsToEvictPerNode and invoke EvictPod wrapper instead 2020-04-22 10:33:22 +02:00
Jan Chaloupka
240fa93bc5 Move maximum-pods-per-nodes-evicted logic under a single invocation
Each strategy implements a check for whether the maximum number of pods per node
has already been evicted. This check duplicates code that can be put
under a single invocation, reducing the number of arguments passed
to each strategy, since the EvictPod call can encapsulate both the check
and the pod eviction itself.
2020-04-22 10:32:16 +02:00
Kubernetes Prow Robot
6c7f846917 Merge pull request #265 from ingvagabund/tolerations-tains-code-cleanup
Tolerations taints code cleanup
2020-04-21 14:27:52 -07:00
Jan Chaloupka
6db7c3b92c Call utils.TolerationsTolerateTaintsWithFilter directly, not through checkPodsSatisfyTolerations 2020-04-21 22:34:14 +02:00
Kubernetes Prow Robot
030267107a Merge pull request #262 from ingvagabund/order-pods-by-priority-only-over-over-utilized-nodes
Order pods by priority only over over utilized nodes
2020-04-19 14:05:38 -07:00
Jan Chaloupka
1c300a9881 Drop getNoScheduleTaints and allTaintsTolerated in favor of utils.TolerationsTolerateTaintsWithFilter 2020-04-19 22:18:49 +02:00
Jan Chaloupka
0e9b33b822 allTaintsTolerated: remove for iteration through tolerations which is already implemented in utils.TolerationsTolerateTaint 2020-04-19 22:07:34 +02:00
Jan Chaloupka
36e3d1e703 Drop local implementation of toleratesTaint in favor of k8s.io/api/core/v1.Toleration.TolerateTaint
Functionally identical implementation of toleratesTaint is already provided in k8s.io/api
2020-04-19 21:58:41 +02:00
Jan Chaloupka
f53264b613 lownodeutilization: evict best-effort pods only
Unit test refactored node utilization and pod classification
2020-04-17 11:32:34 +02:00
Jan Chaloupka
414554ae5e lownodeutilization: make unit tests with/without priority table driven 2020-04-17 11:32:28 +02:00
Jan Chaloupka
150f945592 lownodeutilization: classify pods of over-utilized nodes only
Only over-utilized nodes need classification of pods into categories.
Skipping pod categorization on the remaining nodes saves computation time
whenever over-utilized nodes make up only a fraction of all nodes
(e.g. less than 50%).
2020-04-17 11:32:22 +02:00
Kubernetes Prow Robot
9a84afece1 Merge pull request #264 from ingvagabund/remove-pods-violating-node-affinity
readme: RemovePodsViolatingNodeAffinity: reword description of the strategy
2020-04-16 20:59:08 -07:00
Jan Chaloupka
e0c101c5ae readme: RemovePodsViolatingNodeAffinity: reword description of the strategy
Compared to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
the strategy effectively implements the requiredDuringSchedulingRequiredDuringExecution node affinity type
on behalf of kubelets. The only addition over the kubelet is that the strategy checks whether at least one
other node is capable of respecting the node affinity rules.

When the requiredDuringSchedulingRequiredDuringExecution node affinity type is implemented in the kubelet,
it's likely the strategy will either be removed or re-implemented. Stressing the relation with
requiredDuringSchedulingRequiredDuringExecution helps consumers of descheduler
keep in mind that the kubelet will eventually take over the strategy once implemented.
2020-04-15 11:54:37 +02:00
Kubernetes Prow Robot
e3a562aea0 Merge pull request #260 from CriaHu/hyq_descheduler
fix broken link:
2020-04-07 18:01:44 -07:00
Kubernetes Prow Robot
4966e8ee08 Merge pull request #256 from damemi/fix-podfitsanynode
Fix early return from PodFitsAnyNode check
2020-04-07 08:01:43 -07:00
Kubernetes Prow Robot
d3542d5892 Merge pull request #258 from damemi/verify-deps
Add verify-vendor scripts to CI
2020-04-07 07:53:43 -07:00
Mike Dame
62d04b0fc7 go mod tidy && go mod vendor 2020-04-07 10:34:47 -04:00
Mike Dame
6ecbc85448 Add hack/{update,verify}-vendor.sh scripts 2020-04-07 10:34:47 -04:00
Cria Hu
b9b1eae6fb fix broken link:
https://git.k8s.io/community/contributors/guide/contributor-cheatsheet.md
2020-04-06 09:28:09 +08:00
Mike Dame
e95e42930d Remove early return false from PodFitsAnyNode 2020-04-02 11:53:29 -04:00
Kubernetes Prow Robot
3cabb69014 Merge pull request #255 from KohlsTechnology/go-1.13.9
Update to Go 1.13.9
2020-03-31 20:47:27 -07:00
Sean Malloy
ad8f90f177 Update to Go 1.13.9
Kubernetes releases 1.16 through 1.19 have been updated
to Go 1.13.9, so it's time to update the descheduler too.
2020-03-31 22:30:52 -05:00
Kubernetes Prow Robot
8a62cf1699 Merge pull request #246 from KohlsTechnology/go-1.13.8
Update to Go 1.13.8
2020-03-09 11:17:36 -07:00
Sean Malloy
a6b54dae99 Update to Go 1.13.8
Kubernetes releases 1.15 through 1.19 have been updated
to Go 1.13.8, so it's time to update the descheduler too.

https://github.com/kubernetes/release/issues/1134
2020-03-08 23:12:53 -05:00
Kubernetes Prow Robot
112684bcb9 Merge pull request #181 from jw-s/feature/respect-tolerations
Respect Node taints with tolerations (if exist)
2020-03-04 10:45:48 -08:00
Kubernetes Prow Robot
682e07c3cd Merge pull request #249 from ingvagabund/list-node-through-informer-in-every-iteration
List nodes through informer in every iteration
2020-03-04 10:43:48 -08:00
Jan Chaloupka
9593ce16d9 List nodes through informer in every iteration
Also, refactor the code a bit so it can be tested without checking for eviction support.
2020-03-04 16:26:13 +01:00
Kubernetes Prow Robot
2545c8b031 Merge pull request #244 from KohlsTechnology/docs-revamp
Documentation Refresh
2020-03-04 07:23:47 -08:00
Sean Malloy
c9793e7029 Clean README based on feedback from review 2020-03-03 22:07:20 -06:00
Joel Whittaker-Smith
4757132452 check for taints on low nodes and honor them 2020-03-02 14:43:11 +01:00
Sean Malloy
561b3b67b3 Clean README based on feedback from review 2020-02-28 22:38:28 -06:00
Sean Malloy
566f33e6ad Initial User and Contributor Guide Draft 2020-02-26 01:29:57 -06:00
Sean Malloy
83a75bac80 Create Structure and Links For User and Contributor Guides 2020-02-26 01:30:08 -06:00
Sean Malloy
d8772f5685 Clean up README
* Fixed small typos
* Update compatibility matrix
2020-02-26 01:29:57 -06:00
Kubernetes Prow Robot
df510187d6 Merge pull request #233 from KohlsTechnology/update-release-docs
Add notes section to release documentation
2020-02-21 11:12:12 -08:00
Kubernetes Prow Robot
137b9b72e7 Merge pull request #242 from KohlsTechnology/fix-gcr-registry
Fix GCR Image Location In Job and CronJob YAML
2020-02-20 09:41:01 -08:00
Sean Malloy
09b4979673 Fix GCR Image Location In Job and CronJob YAML
The correct location for the official descheduler container image is
us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:TAG_NAME.
2020-02-20 00:30:16 -06:00
Kubernetes Prow Robot
eff8185d7c Merge pull request #239 from aveshagarwal/master-fix-gen
Fix email address in docker files.
2020-02-12 07:04:52 -08:00
Avesh Agarwal
83ee94dd08 Fix email address in docker files. 2020-02-12 09:31:24 -05:00
Sean Malloy
aae52ac2ee Document gcloud command used to get container image details
The "gcloud container images describe" command is used to get the SHA256
hash of an image in the staging registry. The SHA256 hash is required
for the image promotion process.

https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io#image-promoter
2020-02-11 22:46:27 -06:00
Sean Malloy
a7c4295c58 Add notes section to release documentation
Add a link to the test grid dashboard for automated container image builds,
and document useful gcloud commands for working with the staging registry.

This documentation is useful for project maintainers that are creating
releases and publishing container images. The chance of remembering
these commands and links is slim to none. Therefore documenting this
information will be very useful.
2020-02-11 22:41:35 -06:00
Kubernetes Prow Robot
7a6e095451 Merge pull request #222 from KohlsTechnology/travis-build-matrix
Run e2e Tests Against Multiple k8s Versions
2020-02-11 18:30:07 -08:00
Kubernetes Prow Robot
28c17d240d Merge pull request #237 from KohlsTechnology/issue-229
Update ClusterRole To Allow Creating Events
2020-02-11 18:28:07 -08:00
Sean Malloy
6dd91b6a22 Update ClusterRole To Allow Updating Events
Based on feedback during code review it was recommended to allow
updating events in addition to creating them, because event processing
logic on the client side sometimes updates existing events instead of
creating new ones.
2020-02-09 00:17:20 -06:00
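A minimal sketch of the resulting RBAC rule; the ClusterRole name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: descheduler-cluster-role   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update"]    # update added per the review feedback above
```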
Sean Malloy
7d93551c34 Run e2e Tests Against Multiple k8s Versions
Enable a build matrix in Travis CI to run the e2e tests against
multiple k8s versions. Additionally update kind to v0.7.0 which adds
support for k8s v1.17.

Replaced command "kind get kubeconfig-path" with "kind get kubeconfig"
because the "kubeconfig-path" subcommand is not valid with kind v0.7.0.
2020-02-07 22:27:00 -06:00
Sean Malloy
853c43737d Update ClusterRole To Allow Creating Events
The descheduler creates a k8s event for each pod that it evicts. When
the code to create events was added the RBAC ClusterRole was not updated
to allow creating events. Users would see the below error in the
descheduler log when it tried to create an event.

"system:serviceaccount:kube-system:descheduler-sa" cannot create resource
"events" in API group "" in the namespace "xxxx-production"' (will not retry!)'

This change fixes this error by updating the ClusterRole to allow
creation of k8s events.
2020-02-07 22:03:14 -06:00
Kubernetes Prow Robot
d8dab9d134 Merge pull request #236 from aveshagarwal/master-fix-gen
Fix makefile for make gen, and update hack files and auto-generated files.
2020-02-07 13:57:46 -08:00
Avesh Agarwal
8dc7b475d9 Fix Makefile for make gen, and update hack files and
auto-generated files.
2020-02-07 16:32:55 -05:00
Kubernetes Prow Robot
15a971494e Merge pull request #208 from KohlsTechnology/install-docs
Streamline Deployment Docs For End Users
2020-02-07 09:27:45 -08:00
Kubernetes Prow Robot
5364e17c62 Merge pull request #215 from tambetliiv/patch-1
fix Build and Run GOPATH dir
2020-02-07 08:19:47 -08:00
Kubernetes Prow Robot
fc8c581d7a Merge pull request #226 from ankon/pr/node-unschedulable-typo
Fix typo in function name
2020-02-07 08:13:47 -08:00
Kubernetes Prow Robot
d7482bd618 Merge pull request #234 from ingvagabund/drop-critical-pod-annotation
Critical pod annotation scheduler.alpha.kubernetes.io/critical-pod has been dropped, no need to set it anymore in the tests
2020-02-07 08:09:45 -08:00
Jan Chaloupka
d065f9904b Critical pod annotation scheduler.alpha.kubernetes.io/critical-pod has been dropped, no need to set it anymore in the tests 2020-02-07 16:29:44 +01:00
Sean Malloy
fb8cdc10c7 Update Container Image Registry
Change the Job and CronJob YAML manifests to use container image
registry us.gcr.io/k8s-artifacts-prod/descheduler. This is the new
official location for the descheduler container image, which is very close
to being fully set up.
2020-02-06 23:20:33 -06:00
Sean Malloy
55cf45a6ba Streamline Deployment Docs For End Users
The k8s YAML manifests for deploying the descheduler as a k8s job were
duplicated across the "examples" and "kubernetes" directories and also
in README.md. This change consolidates the YAML manifests into the
"kubernetes" directory and simplifies the installation instructions for end
users in README.md.

Additionally a k8s CronJob has been added.
2020-02-06 23:20:33 -06:00
Kubernetes Prow Robot
e2a23f2848 Merge pull request #216 from damemi/remove-k8s-deps
Remove k8s.io/kubernetes deps and switch to go modules
2020-02-06 12:21:26 -08:00
Kubernetes Prow Robot
c567768845 Merge pull request #227 from KohlsTechnology/pod-priority-docs
Remove reference to critical-pod annotation from documentation
2020-02-06 07:53:34 -08:00
Sean Malloy
9510891f42 Remove reference to critical-pod annotation from documentation
Newer versions of k8s (>= 1.16) no longer support the critical-pod
annotation. Setting the priorityClassName is the correct way to mark a
pod as critical.

https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods
2020-02-04 23:27:25 -06:00
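A sketch of the recommended replacement; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon        # illustrative name
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical   # replaces the removed critical-pod annotation
  containers:
    - name: addon
      image: k8s.gcr.io/pause:3.1              # placeholder image
```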
Kubernetes Prow Robot
73858beeea Merge pull request #223 from KohlsTechnology/bump-go-version
Update to Go 1.13.6
2020-02-04 15:59:54 -08:00
Andreas Kohn
c3346e9806 Fix typo in function name 2020-02-04 10:03:08 +01:00
Mike Dame
cc92eaa35d Run go mod vendor again to remove more deps 2020-02-03 10:05:04 -05:00
Mike Dame
816f8cb682 Fix build errors 2020-02-03 10:05:04 -05:00
Mike Dame
5f12ade97b go mod vendor 2020-02-03 10:05:04 -05:00
Mike Dame
2b68e65238 Remove old vendor, glide.lock/yaml 2020-02-03 10:05:03 -05:00
Mike Dame
86d0f3b038 go mod init/tidy 2020-02-03 10:04:58 -05:00
Mike Dame
c9f64dfe37 glide up -v 2020-02-03 10:04:58 -05:00
Mike Dame
431597dd43 Break dependency on k8s.io/kubernetes 2020-02-03 10:04:53 -05:00
Kubernetes Prow Robot
861f057d1b Merge pull request #217 from KohlsTechnology/cloud-build-config
Automate Container Image Builds Using GCB
2020-02-03 06:55:22 -08:00
Kubernetes Prow Robot
d845040d77 Merge pull request #221 from damemi/descheduling-interval
Wire --descheduling-interval to run descheduler on a loop
2020-02-03 05:53:21 -08:00
Mike Dame
30d05382b6 Wire --descheduling-interval flag to run descheduler on a loop 2020-01-30 17:17:55 -05:00
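For reference, a sketch of how this flag might be wired into the container spec of the descheduler's Job or Deployment; the image tag and policy mount path are illustrative:

```yaml
containers:
  - name: descheduler
    image: us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.18.0
    args:
      - --policy-config-file=/policy-dir/policy.yaml   # illustrative mount path
      - --descheduling-interval=5m                     # loop every five minutes instead of running once
```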
Kubernetes Prow Robot
ca02665d14 Merge pull request #218 from KohlsTechnology/update-owners
Add seanmalloy to reviewers
2020-01-30 12:10:19 -08:00
Sean Malloy
2328d89897 Update to Go 1.13.6
The kubernetes project updated to Go 1.13.6, so it's time to update the
descheduler too.

https://github.com/kubernetes/kubernetes/pull/87106
2020-01-30 01:41:36 -06:00
Sean Malloy
ac089fe5ce Document Descheduler Release Process 2020-01-29 23:32:37 -06:00
Sean Malloy
00e23dbc07 Add seanmalloy to reviewers 2020-01-27 21:21:24 -06:00
Sean Malloy
3401edab53 Automate Container Image Builds Using GCB
Adds a GCB (Google Cloud Build) configuration file that automates the
building and pushing of descheduler container images to the official
Google Container Registry for k8s.

Reference documentation:
https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/README.md
2020-01-25 21:56:11 -06:00
Kubernetes Prow Robot
e3865fcf8e Merge pull request #173 from KohlsTechnology/new-staging-repo
Use correct GCR staging registry
2020-01-24 07:36:32 -08:00
Sean Malloy
361aa01c51 Remove backticks from Makefile 2020-01-24 09:00:54 -06:00
tambetliiv
c77f240e37 fix README GOPATH build
It was not updated in https://github.com/kubernetes-sigs/descheduler/pull/182
2020-01-24 13:19:26 +02:00
Kubernetes Prow Robot
d9a77393cc Merge pull request #213 from mdavidsen/docs/update-examples
Update examples to support v1.16.0+
2020-01-18 07:05:35 -08:00
Marius Davidsen
44c7eb5285 Update examples to support v1.16.0+
Support for using the `scheduler.alpha.kubernetes.io/critical-pod` annotation was
deprecated in 1.13 and finally removed in 1.16.
2020-01-17 14:24:37 +01:00
Sean Malloy
462bbbbb47 Use correct GCR staging registry
The container registry now defaults to
the correct staging registry.
2020-01-09 22:16:14 -06:00
Kubernetes Prow Robot
18e3fd3de5 Merge pull request #210 from KohlsTechnology/update-kind
Update Travis With Latest Kind and Kubectl Versions
2020-01-03 02:01:41 -08:00
Sean Malloy
fdf94304d0 Update Travis With Latest Kind and Kubectl Versions
* Update to kubectl 1.17.0
* Update to kind 0.6.1
2019-12-18 21:11:11 -06:00
Kubernetes Prow Robot
ea0ba7d39a Merge pull request #209 from KohlsTechnology/go-1.13
Update To Go 1.13.5
2019-12-18 10:01:58 -08:00
Kubernetes Prow Robot
5be355d815 Merge pull request #204 from damemi/k8s-117-bump
Bump k8s deps to 1.17
2019-12-18 09:51:58 -08:00
Sean Malloy
5efec68fd3 Update To Go 1.13.5
Starting with Kubernetes release 1.17 Go >= 1.13.4 is required.

https://github.com/kubernetes/kubernetes/pull/82809
2019-12-18 00:26:25 -06:00
Mike Dame
b3cc62dac6 bump(*): k8s to 1.17 2019-12-12 11:02:29 -05:00
Mike Dame
3be0a9f80d Pin k8s deps to 1.17.0 2019-12-12 11:00:44 -05:00
Kubernetes Prow Robot
906bca0802 Merge pull request #174 from mccare/documentation-compile-and-example-yaml
Documentation for compile and example yaml files for creation of kubernetes resources
2019-12-12 07:54:32 -08:00
Kubernetes Prow Robot
d193bc1370 Merge pull request #183 from KohlsTechnology/k8s-events
Add k8s events support
2019-12-12 07:36:33 -08:00
Sean Malloy
6654aeff99 Update vendor directory for k8s events support
Executed "glide up -v" after adding the k8s event feature. These vendor
dependency changes are required for k8s event support.
2019-12-11 21:35:57 -06:00
Sean Malloy
e3d06d1541 Create k8s event when a pod is successfully evicted
This allows end users to more easily monitor when the descheduler evicts
a pod in their environment. Prior to adding k8s event support the only
way to track evicted pods was to view the logs.
2019-12-11 21:35:57 -06:00
Kubernetes Prow Robot
b44d7718b3 Merge pull request #201 from dileep-p/wront-intentation
Fix indentation in YAML
2019-12-02 07:43:04 -08:00
Dileep
531d6ddc49 Fix indentation in YAML 2019-11-28 12:36:45 +04:00
Kubernetes Prow Robot
bbc902b86f Merge pull request #197 from damemi/add-reviewer
Add damemi to reviewers
2019-11-13 13:58:55 -08:00
Mike Dame
1fdcbcd008 Add damemi to reviewers 2019-11-13 13:13:31 -05:00
Kubernetes Prow Robot
2fdcfc04d5 Merge pull request #194 from sivanzcw/develop
Normalize blank import
2019-11-09 09:07:39 -08:00
Zhang Jinghui
872953b9cf normalize blank import 2019-11-09 21:53:29 +08:00
Kubernetes Prow Robot
9276f0e555 Merge pull request #193 from mleneveut/master
Fix typos in kubernetes example
2019-11-08 06:50:13 -08:00
mleneveut
cee12a5019 Fix indentation 2019-11-08 15:29:48 +01:00
mleneveut
456110b508 Fix typo in ClusterRoleBinding name 2019-11-08 15:16:33 +01:00
Kubernetes Prow Robot
edab9d7fed Merge pull request #175 from swatisehgal/dev/strategyTaintTol
Strategy to consider taints and tolerations in Descheduler
2019-11-05 13:28:39 -08:00
swatisehgal
7563b5561b Strategy to consider taints and tolerations in Descheduler 2019-11-05 20:42:36 +00:00
Kubernetes Prow Robot
8b210b08f6 Merge pull request #192 from damemi/glog-klog-switch
Switch from glog to klog
2019-10-28 23:20:43 -07:00
Mike Dame
a3d33909fa bump(*): remove glog 2019-10-28 20:44:02 -04:00
Mike Dame
4b9e732c18 Switch from glog to klog 2019-10-28 20:44:02 -04:00
Kubernetes Prow Robot
dd54f1a656 Merge pull request #188 from damemi/k8s-1.16
Bump to k8s 1.16
2019-10-22 09:07:22 -07:00
Mike Dame
11044ed89d Run generated scripts 2019-10-21 13:33:39 -04:00
Mike Dame
e5d4a2eba6 bump(*): Remove codecgen dependency 2019-10-21 13:33:38 -04:00
Mike Dame
c9e3c63b85 Remove codecgen
Since [1] codecgen has been obsolete.

[1] https://github.com/kubernetes/kubernetes/issues/36120
2019-10-21 13:33:38 -04:00
Mike Dame
5b1d551ffd update e2e test 2019-10-21 13:33:38 -04:00
Mike Dame
b176dd2e77 Remove old NodeOutOfDisk unit test 2019-10-21 13:33:38 -04:00
Mike Dame
9ea6aa536e Update CriticalPod unit test
The kubelet used to only determine critical pods by the namespace and an annotation (see [1], where our code switched from only checking the annotation to using the kubelet package helper, which at that revision also only checked for an annotation and namespace). However, now the kubelet uses Pod Priority to check for critical pods[3], which is controlled by an admission plugin[4]. Given that testing the functioning of the admission plugin is out of the scope of this repository, it is probably sufficient to manually set the pod priority to critical, to ensure that our Descheduler code behaves properly in response.

[1] 062b8698c7 (diff-094ac8507e224603239ce19592b518f5)
[2] https://github.com/kubernetes/kubernetes/blob/release-1.9/pkg/kubelet/types/pod_update.go#L151
[3] https://github.com/kubernetes/kubernetes/blob/a847874655/pkg/kubelet/types/pod_update.go#L153
[4] https://github.com/kubernetes/kubernetes/blob/a847874655/plugin/pkg/admission/priority/admission.go
2019-10-21 13:33:38 -04:00
Mike Dame
bd2c217010 update makefile/dockerfile to use golang 1.12.12 2019-10-21 13:33:15 -04:00
Kubernetes Prow Robot
c42670e1cc Merge pull request #190 from bysph/master
Fix golint errors
2019-10-17 06:32:39 -07:00
sph
5e25e21ca2 Fix golint errors
Signed-off-by: sph <shepenghui1@huawei.com>
2019-10-17 20:50:13 +08:00
Mike Dame
0af97c1b5e fix compile errors 2019-10-12 11:12:04 -04:00
Mike Dame
1652ba7976 bump(*): kubernetes release-1.16.0 dependencies 2019-10-12 11:11:43 -04:00
Mike Dame
5af668e89a Remove old API install patterns 2019-10-12 11:08:46 -04:00
Mike Dame
26adf87323 Bump reactions
Change imports to reflect current component-base usage
2019-10-12 11:07:21 -04:00
Mike Dame
0a58cf4535 Update glide.yaml 2019-10-12 11:06:42 -04:00
Kubernetes Prow Robot
3116dad75e Merge pull request #186 from ksimon1/master
Evict annotation
2019-10-11 02:45:42 -07:00
ksimon1
fb1b5fc690 added unit test
updated e2e test

Signed-off-by: ksimon1 <ksimon@redhat.com>
2019-10-04 11:08:33 +02:00
ksimon1
dc7f9efc19 Descheduler can evict all types of pods with special annotation
When a pod has this annotation, the descheduler will always try to evict it.
Users can control which pods (only those with this annotation) will be evicted.
Signed-off-by: ksimon1 <ksimon@redhat.com>
2019-10-04 11:07:40 +02:00
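A pod sketch, assuming the annotation key introduced by this change is descheduler.alpha.kubernetes.io/evict:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-evictable                           # illustrative name
  annotations:
    descheduler.alpha.kubernetes.io/evict: "true"  # assumed annotation key
spec:
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.1                  # placeholder image
```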
Kubernetes Prow Robot
f1127541aa Merge pull request #182 from ingvagabund/rename-import-path-prefix
Project migrated under kubernetes-sigs, change import path prefix to sigs.k8s.io/descheduler
2019-09-19 06:23:00 -07:00
Jan Chaloupka
9d6b6094cd Explicitly set the repository root directory
Looks like Travis does not honor the new repository location.
2019-09-19 15:03:45 +02:00
Jan Chaloupka
e35eb4a0b5 Run ./hack/update-gofmt 2019-09-19 14:29:00 +02:00
Jan Chaloupka
2e6f14103b Project migrated under kubernetes-sigs, change import path prefix to sigs.k8s.io/descheduler 2019-09-19 14:09:05 +02:00
Avesh Agarwal
17ef1d5e5f Merge pull request #179 from mrbobbytables/update-owners-sec-contacts
Update OWNERS and SECURITY_CONTACTS
2019-09-14 14:19:30 -04:00
Kubernetes Prow Robot
7788d53d0b Merge pull request #178 from danedf/master
Fix small typo in output for thresholds
2019-09-14 07:42:39 -07:00
Bob Killen
5f66ed8401 Update OWNERS and SECURITY_CONTACTS 2019-09-12 08:09:47 -04:00
Dane De Forest
cd4d09726c Fix small typo in output for thresholds for low utilization 2019-08-28 11:25:55 -07:00
Christian van der Leeden
68a106aed0 changed policy: RemoveDuplicates is not automatically turned on, so the policy.yaml file from the examples is used 2019-08-03 11:41:52 +02:00
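A minimal policy sketch along the lines of the examples file referenced above:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true   # must be enabled explicitly; it is not on by default
```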
Christian van der Leeden
1931bd6c1a created example YAML files out of the README instructions with a reference to the 0.9.0 Docker image. Modified the README so that make will work, since it expects a certain file structure 2019-08-01 12:05:13 +02:00
Kubernetes Prow Robot
9e28f0b362 Merge pull request #169 from Bowenislandsong/fix-test
Remove hardcoded node selection in E2e test
2019-07-12 13:55:05 -07:00
Bowen Song
7245a31f52 Change hard-coded leastLoadedNode=[3rd node] to searching for the least loaded worker node
Accommodate dee89a6cc1

And format files according to gofmt verification

update version to 1.11.1
2019-07-11 17:26:06 -04:00
ravisantoshgudimetla
66a2a87e49 Start using kind for e2e tests 2019-07-11 10:03:06 -04:00
Kubernetes Prow Robot
e7ceddf2bc Merge pull request #162 from tammert/duplicate-test-logic-rework
Improved logic for duplicates testing
2019-07-05 08:00:38 -07:00
Kubernetes Prow Robot
e3d25a9ab4 Merge pull request #165 from KohlsTechnology/add-gitignore
Add .gitignore to ignore output directory for binaries
2019-07-05 06:40:35 -07:00
Sean Malloy
992e00ecd2 Add .gitignore to ignore output directory for binaries 2019-07-04 22:09:28 -05:00
Kubernetes Prow Robot
d157a4359b Merge pull request #160 from KohlsTechnology/issue-144
Push Container Images To k8s.gcr.io
2019-06-20 06:32:52 -07:00
Sean Malloy
6a08b5661a Changes image variable name based on code review comments 2019-06-20 01:00:01 -05:00
Sean Malloy
7094c404c9 Change push-container to push-container-to-gcloud
Changes made based on code review comments.
2019-06-19 00:51:31 -05:00
Sebastiaan Tammer
20c610c65a Improved logic for duplicates testing 2019-06-11 20:35:55 +02:00
Sean Malloy
934a06381d Push Container Images To k8s.gcr.io
Run below commands to build and push
a staging image to staging-k8s.gcr.io.

```
$ make push
```

Run below commands to build and push
a release image to k8s.gcr.io.

```
$ git checkout v0.9.0
$ VERSION=v0.9.0 REGISTRY=k8s.gcr.io make push
```
2019-06-05 00:35:10 -05:00
Kubernetes Prow Robot
674c1db05c Merge pull request #149 from reetasingh/master
Adding golangci-lint support and refactoring tests to be table-driven
2019-06-04 07:25:11 -07:00
Reeta Singh
dee89a6cc1 Refactoring tests to be table-driven and adding golangci-lint to the CI pipeline 2019-06-03 20:40:02 -07:00
Kubernetes Prow Robot
60fbaca305 Merge pull request #156 from tammert/identical-deployments-fix
Added ownerRef.UID for evaluating duplicate Pods
2019-06-02 11:26:13 -07:00
Sebastiaan Tammer
164d2b0729 Minor typo corrected 2019-05-25 11:40:03 +02:00
Sebastiaan Tammer
023a2f2a47 Replaced UID with Namespace for duplicate check, added tests (+ cleanup) 2019-05-25 11:18:11 +02:00
Sebastiaan Tammer
d1c6f3f709 Added ownerRef.UID for evaluating duplicate Pods 2019-05-24 18:51:33 +02:00
Kubernetes Prow Robot
fc1688057a Merge pull request #142 from adyada/master
Added support to evict pods that use local storage.
2019-04-12 07:42:46 -07:00
Adi Yadav
e6e200b93c Added auto-generated changes. 2019-04-11 23:24:56 -04:00
Adi Yadav
5d843d1f08 Added manual changes. 2019-04-11 23:24:20 -04:00
Kubernetes Prow Robot
6c981cc067 Merge pull request #137 from pickledrick/fix/update-installation-readme
update readme to fix unused path
2019-03-04 06:04:39 -08:00
Peter Grant
22a3a6ea1d update readme to fix unused path 2019-03-04 15:16:37 +11:00
Kubernetes Prow Robot
294bddb5e2 Merge pull request #135 from russiancook/patch-1
Update configmap.yaml
2019-02-28 13:34:37 -08:00
russiancook
0a9d1959e2 Update configmap.yaml
Put configmap in kube-system namespace
2019-02-14 12:12:47 +02:00
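A sketch of the resulting manifest; the ConfigMap name is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy-configmap   # illustrative name
  namespace: kube-system               # moved here by this change
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "RemoveDuplicates":
        enabled: true
```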
RaviSantosh Gudimetla
19ee5d80b5 Merge pull request #124 from ravisantoshgudimetla/fix-kubeadm
Run apiserver on private ip and bump kube version installed to 1.13
2019-01-14 22:41:26 -05:00
ravisantoshgudimetla
14d9e175c2 Run apiserver on private ip and bump kube version installed to 1.13 2018-12-14 14:54:42 -05:00
RaviSantosh Gudimetla
468e138070 Merge pull request #114 from nikhita/contributing.md
Add CONTRIBUTING.md
2018-09-03 09:15:37 -04:00
Nikhita Raghunath
db13b2ac73 Add CONTRIBUTING.md 2018-09-01 19:29:24 +05:30
Avesh Agarwal
40ca53e0a5 Merge pull request #113 from sanity-io/master
Yaml files for kubernetes
2018-08-24 08:50:41 -04:00
nicholas.klem
35d8367fe5 rbac.authorization.k8s.io/v1 - not v1beta1 2018-08-24 14:44:19 +02:00
nicholas.klem
345dd9cf27 add kubernetes yaml files 2018-08-24 14:34:38 +02:00
RaviSantosh Gudimetla
81f471fe05 Merge pull request #111 from kubernetes-incubator/ravisantoshgudimetla-patch-2
Remove production usage warning
2018-08-23 15:57:09 -04:00
RaviSantosh Gudimetla
aa5e8770f5 Remove production usage warning
Removing the production usage warning to encourage more users to try descheduler, since it has been stable so far.
2018-08-23 15:44:17 -04:00
RaviSantosh Gudimetla
2690d139c5 Merge pull request #110 from kubernetes-incubator/ravisantoshgudimetla-patch-1
Update the compatibility matrix
2018-08-22 16:31:09 -04:00
RaviSantosh Gudimetla
cd192ce5fc Update the compatibility matrix
Descheduler 0.4+ should work with kube 1.9+.
2018-08-22 16:09:18 -04:00
RaviSantosh Gudimetla
048f3fd1e5 Merge pull request #109 from ravisantoshgudimetla/test-cases-cleanup
Remove the unnecessary print statements in test file
2018-08-22 12:20:27 -04:00
ravisantoshgudimetla
a079fd2757 Remove the unnecessary print statements 2018-08-22 11:27:40 -04:00
RaviSantosh Gudimetla
ae0a9ed525 Merge pull request #108 from ravisantoshgudimetla/fix-warnings-ci
Fix deprecated warning in CI
2018-08-21 15:10:40 -04:00
ravisantoshgudimetla
0a815e8786 Fix deprecated warning in CI 2018-08-21 14:45:14 -04:00
RaviSantosh Gudimetla
0115748fe8 Merge pull request #105 from ravisantoshgudimetla/priority-low-node
Low node utilization to respect priority while evicting pods
2018-08-21 14:33:50 -04:00
ravisantoshgudimetla
d0305dac3f Low node utilization to respect priority while evicting pods 2018-08-21 14:14:26 -04:00
RaviSantosh Gudimetla
72d6a8aa33 Merge pull request #106 from ravisantoshgudimetla/fix-e2e
Fix broken e2e tests
2018-08-06 10:53:13 -04:00
ravisantoshgudimetla
654fdbba94 Fix broken e2e tests 2018-08-05 13:16:47 -04:00
20497 changed files with 429268 additions and 4735965 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,46 @@
---
name: Bug report
about: Create a bug report to help improve descheduler
title: ''
labels: 'kind/bug'
assignees: ''
---
<!-- Please answer these questions before submitting your bug report. Thanks! -->
**What version of descheduler are you using?**
descheduler version:
**Does this issue reproduce with the latest release?**
**Which descheduler CLI options are you using?**
**Please provide a copy of your descheduler policy config file**
**What k8s version are you using (`kubectl version`)?**
<details><summary><code>kubectl version</code> Output</summary><br><pre>
$ kubectl version
</pre></details>
**What did you do?**
<!--
If possible, provide a recipe for reproducing the error.
A detailed sequence of steps describing what to do to observe the issue is good.
A complete runnable bash shell script is best.
-->
**What did you expect to see?**
**What did you see instead?**

.github/ISSUE_TEMPLATE/feature_request.md (new file)

@@ -0,0 +1,26 @@
---
name: Feature request
about: Suggest an idea for descheduler
title: ''
labels: 'kind/feature'
assignees: ''
---
<!-- Please answer these questions before submitting your feature request. Thanks! -->
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**What version of descheduler are you using?**
descheduler version:
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->

.github/ISSUE_TEMPLATE/misc_request.md (new file)

@@ -0,0 +1,18 @@
---
name: Miscellaneous
about: Not a bug and not a feature
title: ''
labels: ''
assignees: ''
---
<!--
Please do not use this to submit a bug report or feature request. Use the
bug report or feature request options instead.
Also, please consider posting in the Kubernetes Slack #sig-scheduling channel
instead of opening an issue if this is a support request.
Thanks!
-->

.github/workflows/release.yaml (new file)

@@ -0,0 +1,30 @@
name: Release Charts
on:
  push:
    tags:
      - chart-*
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      - name: Fetch history
        run: git fetch --prune --unshallow
      - name: Add dependency chart repos
        run: |
          helm repo add stable https://kubernetes-charts.storage.googleapis.com/
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.0.0-rc.2
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

.gitignore (new file)

@@ -0,0 +1,5 @@
_output/
_tmp/
vendordiff.patch
.idea/
*.code-workspace

.golangci.yml (new file)

@@ -0,0 +1,15 @@
run:
  deadline: 2m
linters:
  disable-all: true
  enable:
    - gofmt
    - gosimple
    - gocyclo
    - misspell
    - govet
linters-settings:
  goimports:
    local-prefixes: sigs.k8s.io/descheduler

.travis.yml (deleted)

@@ -1,7 +0,0 @@
language: go
go:
  - 1.9.1
script:
  - hack/verify-gofmt.sh
  - make build
  - make test-unit

CONTRIBUTING.md (new file)

@@ -0,0 +1,23 @@
# Contributing Guidelines
Welcome to Kubernetes. We are excited about the prospect of you joining our [community](https://github.com/kubernetes/community)! The Kubernetes community abides by the CNCF [code of conduct](code-of-conduct.md). Here is an excerpt:
_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._
## Getting Started
We have full documentation on how to get started contributing here:
- [Contributor License Agreement](https://git.k8s.io/community/CLA.md) Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests
- [Kubernetes Contributor Guide](http://git.k8s.io/community/contributors/guide) - Main contributor documentation, or you can just jump directly to the [contributing section](http://git.k8s.io/community/contributors/guide#contributing)
- [Contributor Cheat Sheet](https://git.k8s.io/community/contributors/guide/contributor-cheatsheet/README.md) - Common resources for existing developers
## Mentorship
- [Mentoring Initiatives](https://git.k8s.io/community/mentoring) - We have a diverse set of mentorship programs available that are always looking for volunteers!
## Contact Information
- [Slack channel](https://kubernetes.slack.com/messages/sig-scheduling)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-scheduling)

Dockerfile

@@ -11,16 +11,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-FROM golang:1.9.2
+FROM golang:1.14.4
-WORKDIR /go/src/github.com/kubernetes-incubator/descheduler
+WORKDIR /go/src/sigs.k8s.io/descheduler
COPY . .
RUN make
FROM scratch
-MAINTAINER Avesh Agarwal <avagarwa@redhat.com>
+MAINTAINER Avesh Agarwal <avesh.ncsu@gmail.com>
-COPY --from=0 /go/src/github.com/kubernetes-incubator/descheduler/_output/bin/descheduler /bin/descheduler
+COPY --from=0 /go/src/sigs.k8s.io/descheduler/_output/bin/descheduler /bin/descheduler
CMD ["/bin/descheduler", "--help"]

Dockerfile.dev

@@ -13,7 +13,7 @@
# limitations under the License.
FROM scratch
-MAINTAINER Avesh Agarwal <avagarwa@redhat.com>
+MAINTAINER Avesh Agarwal <avesh.ncsu@gmail.com>
COPY _output/bin/descheduler /bin/descheduler

Makefile

@@ -15,22 +15,38 @@
.PHONY: test
# VERSION is currently based on the last commit
-VERSION=`git describe --tags`
-COMMIT=`git rev-parse HEAD`
-BUILD=`date +%FT%T%z`
-LDFLAG_LOCATION=github.com/kubernetes-incubator/descheduler/cmd/descheduler/app
+VERSION?=$(shell git describe --tags)
+COMMIT=$(shell git rev-parse HEAD)
+BUILD=$(shell date +%FT%T%z)
+LDFLAG_LOCATION=sigs.k8s.io/descheduler/cmd/descheduler/app
LDFLAGS=-ldflags "-X ${LDFLAG_LOCATION}.version=${VERSION} -X ${LDFLAG_LOCATION}.buildDate=${BUILD} -X ${LDFLAG_LOCATION}.gitCommit=${COMMIT}"
+GOLANGCI_VERSION := v1.15.0
+HAS_GOLANGCI := $(shell ls _output/bin/golangci-lint)
+# REGISTRY is the container registry to push
+# into. The default is to push to the staging
+# registry, not production.
+REGISTRY?=gcr.io/k8s-staging-descheduler
+# IMAGE is the image name of descheduler
+# Should this be changed?
+IMAGE:=descheduler:$(VERSION)
+# IMAGE_GCLOUD is the image name of descheduler in the remote registry
+IMAGE_GCLOUD:=$(REGISTRY)/descheduler:$(VERSION)
+# TODO: upload binaries to GCS bucket
+#
+# In the future binaries can be uploaded to
+# GCS bucket gs://k8s-staging-descheduler.
+HAS_HELM := $(shell which helm)
all: build
build:
-	CGO_ENABLED=0 go build ${LDFLAGS} -o _output/bin/descheduler github.com/kubernetes-incubator/descheduler/cmd/descheduler
+	CGO_ENABLED=0 go build ${LDFLAGS} -o _output/bin/descheduler sigs.k8s.io/descheduler/cmd/descheduler
dev-image: build
	docker build -f Dockerfile.dev -t $(IMAGE) .
@@ -38,9 +54,24 @@ dev-image: build
image:
	docker build -t $(IMAGE) .
+push-container-to-gcloud: image
+	gcloud auth configure-docker
+	docker tag $(IMAGE) $(IMAGE_GCLOUD)
+	docker push $(IMAGE_GCLOUD)
+push: push-container-to-gcloud
clean:
	rm -rf _output
+verify: verify-gofmt verify-vendor lint lint-chart
+verify-gofmt:
+	./hack/verify-gofmt.sh
+verify-vendor:
+	./hack/verify-vendor.sh
test-unit:
	./test/run-unit-tests.sh
@@ -48,7 +79,18 @@ test-e2e:
	./test/run-e2e-tests.sh
gen:
	./hack/update-codecgen.sh
	./hack/update-generated-conversions.sh
	./hack/update-generated-deep-copies.sh
	./hack/update-generated-defaulters.sh
+lint:
+ifndef HAS_GOLANGCI
+	curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b ./_output/bin ${GOLANGCI_VERSION}
+endif
+	./_output/bin/golangci-lint run
+lint-chart:
+ifndef HAS_HELM
+	curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && chmod 700 ./get_helm.sh && ./get_helm.sh
+endif
+	helm lint ./charts/descheduler

OWNERS

@@ -1,8 +1,12 @@
approvers:
- aveshagarwal
- k82cn
- ravisantoshgudimetla
- jayunit100
- damemi
reviewers:
- aveshagarwal
- k82cn
- ravisantoshgudimetla
- jayunit100
- damemi
- seanmalloy
- ingvagabund

README.md

@@ -1,5 +1,4 @@
[![Build Status](https://travis-ci.org/kubernetes-incubator/descheduler.svg?branch=master)](https://travis-ci.org/kubernetes-incubator/descheduler)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-incubator/descheduler)](https://goreportcard.com/report/github.com/kubernetes-incubator/descheduler)
[![Go Report Card](https://goreportcard.com/badge/kubernetes-sigs/descheduler)](https://goreportcard.com/report/sigs.k8s.io/descheduler)
# Descheduler for Kubernetes
@@ -9,8 +8,8 @@ Scheduling in Kubernetes is the process of binding pending pods to nodes, and is
a component of Kubernetes called kube-scheduler. The scheduler's decisions, whether or where a
pod can or can not be scheduled, are guided by its configurable policy which comprises of set of
rules, called predicates and priorities. The scheduler's decisions are influenced by its view of
a Kubernetes cluster at that point of time when a new pod appears first time for scheduling.
As Kubernetes clusters are very dynamic and their state change over time, there may be desired
a Kubernetes cluster at that point of time when a new pod appears for scheduling.
As Kubernetes clusters are very dynamic and their state changes over time, there may be a desire
to move already running pods to some other nodes for various reasons:
* Some nodes are under or over utilized.
@@ -24,164 +23,63 @@ Descheduler, based on its policy, finds pods that can be moved and evicts them.
note, in current implementation, descheduler does not schedule replacement of evicted pods
but relies on the default scheduler for that.
## Build and Run
## Quick Start
Build descheduler:
The descheduler can be run as a Job or CronJob inside of a k8s cluster. It has the
advantage of being able to be run multiple times without needing user intervention.
The descheduler pod is run as a critical pod in the `kube-system` namespace to avoid
being evicted by itself or by the kubelet.
```sh
$ make
```
and run descheduler:
```sh
$ ./_output/bin/descheduler --kubeconfig <path to kubeconfig> --policy-config-file <path-to-policy-file>
```
For more information about available options run:
```
$ ./_output/bin/descheduler --help
```
## Running Descheduler as a Job Inside of a Pod
Descheduler can be run as a job inside of a pod. It has the advantage of
being able to be run multiple times without needing user intervention.
Descheduler pod is run as a critical pod to avoid being evicted by itself,
or by kubelet due to an eviction event. Since critical pods are created in
`kube-system` namespace, descheduler job and its pod will also be created
in `kube-system` namespace.
### Create a container image
First we create a simple Docker image utilizing the Dockerfile found in the root directory:
### Run As A Job
```
$ make dev-image
kubectl create -f kubernetes/rbac.yaml
kubectl create -f kubernetes/configmap.yaml
kubectl create -f kubernetes/job.yaml
```
This creates an image based off the binary we've built before. To build both the
binary and image in one step you can run the following command:
### Run As A CronJob
```
$ make image
kubectl create -f kubernetes/rbac.yaml
kubectl create -f kubernetes/configmap.yaml
kubectl create -f kubernetes/cronjob.yaml
```
This eliminates the need to have Go installed locally and builds the binary
within it's own container.
## User Guide
### Create a cluster role
To give necessary permissions for the descheduler to work in a pod, create a cluster role:
```
$ cat << EOF| kubectl create -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: descheduler-cluster-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
EOF
```
### Create the service account which will be used to run the job:
```
$ kubectl create sa descheduler-sa -n kube-system
```
### Bind the cluster role to the service account:
```
$ kubectl create clusterrolebinding descheduler-cluster-role-binding \
--clusterrole=descheduler-cluster-role \
--serviceaccount=kube-system:descheduler-sa
```
### Create a configmap to store descheduler policy
Descheduler policy is created as a ConfigMap in `kube-system` namespace
so that it can be mounted as a volume inside pod.
```
$ kubectl create configmap descheduler-policy-configmap \
-n kube-system --from-file=<path-to-policy-dir/policy.yaml>
```
### Create the job specification (descheduler-job.yaml)
```
apiVersion: batch/v1
kind: Job
metadata:
name: descheduler-job
namespace: kube-system
spec:
parallelism: 1
completions: 1
template:
metadata:
name: descheduler-pod
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
containers:
- name: descheduler
image: descheduler
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
command:
- "/bin/sh"
- "-ec"
- |
/bin/descheduler --policy-config-file /policy-dir/policy.yaml
restartPolicy: "Never"
serviceAccountName: descheduler-sa
volumes:
- name: policy-volume
configMap:
name: descheduler-policy-configmap
```
Please note that the pod template is configured with critical pod annotation, and
the policy `policy-file` is mounted as a volume from the config map.
### Run the descheduler as a job in a pod:
```
$ kubectl create -f descheduler-job.yaml
```
See the [user guide](docs/user-guide.md) in the `/docs` directory.
## Policy and Strategies
Descheduler's policy is configurable and includes strategies to be enabled or disabled.
Four strategies, `RemoveDuplicates`, `LowNodeUtilization`, `RemovePodsViolatingInterPodAntiAffinity`, `RemovePodsViolatingNodeAffinity` are currently implemented.
As part of the policy, the parameters associated with the strategies can be configured too.
Descheduler's policy is configurable and includes strategies that can be enabled or disabled.
Seven strategies, `RemoveDuplicates`, `LowNodeUtilization`, `RemovePodsViolatingInterPodAntiAffinity`,
`RemovePodsViolatingNodeAffinity`, `RemovePodsViolatingNodeTaints`, `RemovePodsHavingTooManyRestarts`, and `PodLifeTime`
are currently implemented. As part of the policy, the parameters associated with the strategies can be configured too.
By default, all strategies are enabled.
### RemoveDuplicates
This strategy makes sure that there is only one pod associated with a Replica Set (RS),
Replication Controller (RC), Deployment, or Job running on same node. If there are more,
Replication Controller (RC), Deployment, or Job running on the same node. If there are more,
those duplicate pods are evicted for better spreading of pods in a cluster. This issue could happen
if some nodes went down due to whatever reasons, and pods on them were moved to other nodes leading to
more than one pod associated with RS or RC, for example, running on same node. Once the failed nodes
are ready again, this strategy could be enabled to evict those duplicate pods. Currently, there are no
parameters associated with this strategy. To disable this strategy, the policy should look like:
more than one pod associated with a RS or RC, for example, running on the same node. Once the failed nodes
are ready again, this strategy could be enabled to evict those duplicate pods.
It provides one optional parameter, `ExcludeOwnerKinds`, which is a list of OwnerRef `Kind`s. If a pod
has any of these `Kind`s listed as an `OwnerRef`, that pod will not be considered for eviction.
```
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"RemoveDuplicates":
enabled: false
enabled: true
params:
removeDuplicates:
excludeOwnerKinds:
- "ReplicaSet"
```
### LowNodeUtilization
@@ -190,18 +88,19 @@ This strategy finds nodes that are under utilized and evicts pods, if possible,
in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. The
parameters of this strategy are configured under `nodeResourceUtilizationThresholds`.
The under utilization of nodes is determined by a configurable threshold, `thresholds`. The threshold
The under utilization of nodes is determined by a configurable threshold, `thresholds`. The threshold
`thresholds` can be configured for cpu, memory, and number of pods in terms of percentage. If a node's
usage is below threshold for all (cpu, memory, and number of pods), the node is considered underutilized.
Currently, pods' request resource requirements are considered for computing node resource utilization.
Currently, pods' resource requests are considered for computing node resource utilization.
There is another configurable threshold, `targetThresholds`, that is used to compute those potential nodes
from where pods could be evicted. Any node, between the thresholds, `thresholds` and `targetThresholds` is
from where pods could be evicted. If a node's usage is above `targetThresholds` for any resource (cpu, memory, or number of pods),
the node is considered over utilized. Any node between the thresholds, `thresholds` and `targetThresholds`, is
considered appropriately utilized and is not considered for eviction. The threshold, `targetThresholds`,
can be configured for cpu, memory, and number of pods too in terms of percentage.
These thresholds, `thresholds` and `targetThresholds`, could be tuned as per your cluster requirements.
An example of the policy for this strategy would look like:
Here is an example of a policy for this strategy:
```
apiVersion: "descheduler/v1alpha1"
@@ -221,14 +120,28 @@ strategies:
"pods": 50
```
There is another parameter associated with `LowNodeUtilization` strategy, called `numberOfNodes`.
This parameter can be configured to activate the strategy only when number of under utilized nodes
The policy should pass the following validation checks:
* Only three types of resources are supported: `cpu`, `memory` and `pods`.
* Neither `thresholds` nor `targetThresholds` can be nil, and they must configure exactly the same types of resources.
* The valid range of a resource's percentage value is \[0, 100\].
* The percentage value in `thresholds` cannot be greater than the one in `targetThresholds` for the same resource.
If any of the resource types is not specified, all its thresholds default to 100% to avoid nodes going
from underutilized to overutilized.
There is another parameter associated with the `LowNodeUtilization` strategy, called `numberOfNodes`.
This parameter can be configured to activate the strategy only when the number of under utilized nodes
is above the configured value. This could be helpful in large clusters where a few nodes could go
under utilized frequently or for a short period of time. By default, `numberOfNodes` is set to zero.
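For illustration, a complete policy for this strategy combining `thresholds`, `targetThresholds`, and `numberOfNodes` might look like the following sketch (the values are examples, not recommendations):
```
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
        numberOfNodes: 3
```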
### RemovePodsViolatingInterPodAntiAffinity
This strategy makes sure that pods violating interpod anti-affinity are removed from nodes. For example, if there is podA on node and podB and podC(running on same node) have antiaffinity rules which prohibit them to run on the same node, then podA will be evicted from the node so that podB and podC could run. This issue could happen, when the anti-affinity rules for pods B,C are created when they are already running on node. Currently, there are no parameters associated with this strategy. To disable this strategy, the policy should look like:
This strategy makes sure that pods violating interpod anti-affinity are removed from nodes. For example,
if there is podA on a node, and podB and podC (running on the same node) have anti-affinity rules which prohibit
them from running on the same node, then podA will be evicted from the node so that podB and podC can run. This
issue could happen when the anti-affinity rules for podB and podC are created while they are already running on
the node. Currently, there are no parameters associated with this strategy. To disable this strategy, the
policy should look like:
```
apiVersion: "descheduler/v1alpha1"
@@ -240,7 +153,23 @@ strategies:
### RemovePodsViolatingNodeAffinity
This strategy makes sure that pods violating node affinity are removed from nodes. For example, there is podA that was scheduled on nodeA which satisfied the node affinity rule `requiredDuringSchedulingIgnoredDuringExecution` at the time of scheduling, but over time nodeA no longer satisfies the rule, then if another node nodeB is available that satisfies the node affinity rule, then podA will be evicted from nodeA. The policy file should like this -
This strategy makes sure all pods violating
[node affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity)
are eventually removed from nodes. Node affinity rules allow a pod to specify the
`requiredDuringSchedulingIgnoredDuringExecution` type, which tells the scheduler
to respect node affinity when scheduling the pod, but tells the kubelet to ignore it
in case the node changes over time and no longer satisfies the affinity.
When enabled, the strategy serves as a temporary implementation
of `requiredDuringSchedulingRequiredDuringExecution` and evicts pods from nodes
that no longer satisfy their node affinity.
For example, there is podA scheduled on nodeA, which satisfies the node
affinity rule `requiredDuringSchedulingIgnoredDuringExecution` at the time
of scheduling. Over time nodeA stops satisfying the rule. When the strategy is
executed and there is another node available that satisfies the node affinity rule,
podA gets evicted from nodeA.
The policy file should look like:
```
apiVersion: "descheduler/v1alpha1"
@@ -253,43 +182,136 @@ strategies:
- "requiredDuringSchedulingIgnoredDuringExecution"
```
### RemovePodsViolatingNodeTaints
This strategy makes sure that pods violating NoSchedule taints on nodes are removed. For example, suppose
pod "podA" has a toleration for the taint `key=value:NoSchedule` and is scheduled and running on the tainted
node. If the node's taint is subsequently updated or removed, the taint is no longer tolerated by the pod
and podA will be evicted. The policy file should look like:
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"RemovePodsViolatingNodeTaints":
enabled: true
````
### RemovePodsHavingTooManyRestarts
This strategy makes sure that pods having too many restarts are removed from nodes. For example, if a pod with an EBS/PD volume can't get the volume/disk attached to the instance, the pod should be re-scheduled to another node.
```
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"RemovePodsHavingTooManyRestarts":
enabled: true
params:
podsHavingTooManyRestarts:
podRestartThreshold: 100
includingInitContainers: true
```
### PodLifeTime
This strategy evicts pods that are older than `.strategies.PodLifeTime.params.maxPodLifeTimeSeconds`. The policy
file should look like:
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"PodLifeTime":
enabled: true
params:
maxPodLifeTimeSeconds: 86400
````
## Pod Evictions
When the descheduler decides to evict pods from a node, it employs following general mechanism:
When the descheduler decides to evict pods from a node, it employs the following general mechanism:
* Critical pods (with annotations scheduler.alpha.kubernetes.io/critical-pod) are never evicted.
* Pods (static or mirrored pods or stand alone pods) not part of an RC, RS, Deployment or Jobs are
* [Critical pods](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) (with priorityClassName set to system-cluster-critical or system-node-critical) are never evicted.
* Pods (static or mirrored pods or stand alone pods) not part of an RC, RS, Deployment or Job are
never evicted because these pods won't be recreated.
* Pods associated with DaemonSets are never evicted.
* Pods with local storage are never evicted.
* Best efforts pods are evicted before Burstable and Guaranteed pods.
* In `LowNodeUtilization` and `RemovePodsViolatingInterPodAntiAffinity`, pods are evicted by their priority from low to high, and if they have the same priority,
best effort pods are evicted before burstable and guaranteed pods.
* All types of pods with the annotation `descheduler.alpha.kubernetes.io/evict` are evicted. This
annotation is used to override the checks which prevent eviction, so users can select which pods get evicted.
Users should know how, and if, the pod will be recreated.
### Pod disruption Budget (PDB)
Pods subject to Pod Disruption Budget (PDB) are not evicted if descheduling violates its pod
disruption budget (PDB). The pods are evicted by using eviction subresource to handle PDB.
Setting `--v=4` or greater on the Descheduler will log all reasons why any pod is not evictable.
### Pod Disruption Budget (PDB)
Pods subject to a Pod Disruption Budget(PDB) are not evicted if descheduling violates its PDB. The pods
are evicted by using the eviction subresource to handle PDB.
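For example, a pod that opts into eviction via the annotation mentioned above might look like the following sketch (the pod name and image are illustrative; the descheduler checks for the annotation's presence):
```
apiVersion: v1
kind: Pod
metadata:
  name: evictable-pod       # illustrative name
  annotations:
    descheduler.alpha.kubernetes.io/evict: "true"
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
```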
## Compatibility Matrix
The below compatibility matrix shows the k8s client package (client-go, apimachinery, etc.) versions that descheduler
is compiled with. At this time descheduler does not have a hard dependency on a specific k8s release. However, a
particular descheduler release is only tested against the three latest k8s minor versions. For example, descheduler
v0.18 should work with k8s v1.18, v1.17, and v1.16.
Starting with descheduler release v0.18 the minor version of descheduler matches the minor version of the k8s client
packages that it is compiled with.
Descheduler | Supported Kubernetes Version
-------------|-----------------------------
v0.18 | v1.18
v0.10 | v1.17
v0.4-v0.9 | v1.9+
v0.1-v0.3 | v1.7-v1.8
## Getting Involved and Contributing
Are you interested in contributing to descheduler? We, the
maintainers and community, would love your suggestions, contributions, and help!
Also, the maintainers can be contacted at any time to learn more about how to get
involved.
To get started writing code see the [contributor guide](docs/contributor-guide.md) in the `/docs` directory.
In the interest of getting more new people involved we tag issues with
[`good first issue`][good_first_issue].
These are typically issues that have smaller scope but are good ways to start
to get acquainted with the codebase.
We also encourage ALL active community participants to act as if they are
maintainers, even if you don't have "official" write permissions. This is a
community effort, we are here to serve the Kubernetes community. If you have an
active interest and you want to get involved, you have real power! Don't assume
that the only people who can get things done around here are the "maintainers".
We also would love to add more "official" maintainers, so show us what you can
do!
This repository uses the Kubernetes bots. See a full list of the commands [here][prow].
### Communicating With Contributors
You can reach the contributors of this project at:
- [Slack channel](https://kubernetes.slack.com/messages/sig-scheduling)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-scheduling)
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
## Roadmap
This roadmap is not in any particular order.
* Strategy to consider taints and tolerations
* Consideration of pod affinity
* Strategy to consider pod life time
* Strategy to consider number of pending pods
* Integration with cluster autoscaler
* Integration with metrics providers for obtaining real load metrics
* Consideration of Kubernetes's scheduler's predicates
## Compatibility matrix
### Code of conduct
Descheduler | supported Kubernetes version
-------------|-----------------------------
0.4 | 1.9+
0.1-0.3 | 1.7-1.8
## Note
This project is under active development, and is not intended for production use.
Any api could be changed any time with out any notice. That said, your feedback is
very important and appreciated to make this project more stable and useful.
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).

SECURITY_CONTACTS

@@ -11,4 +11,5 @@
# INSTRUCTIONS AT https://kubernetes.io/security/
aveshagarwal
k82cn
ravisantoshgudimetla

charts/descheduler/.helmignore Normal file

@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

charts/descheduler/Chart.yaml Normal file

@@ -0,0 +1,16 @@
apiVersion: v1
name: descheduler
version: 0.1.0
appVersion: 0.18.0
description: Descheduler for Kubernetes is used to rebalance clusters by evicting pods that can potentially be scheduled on better nodes. In the current implementation, descheduler does not schedule replacement of evicted pods but relies on the default scheduler for that.
keywords:
- kubernetes
- descheduler
- kube-scheduler
home: https://github.com/kubernetes-sigs/descheduler
icon: https://kubernetes.io/images/favicon.png
sources:
- https://github.com/kubernetes-sigs/descheduler
maintainers:
- name: stevehipwell
email: steve.hipwell@github.com

charts/descheduler/README.md Normal file

@@ -0,0 +1,59 @@
# Descheduler for Kubernetes
[Descheduler](https://github.com/kubernetes-sigs/descheduler/) for Kubernetes is used to rebalance clusters by evicting pods that can potentially be scheduled on better nodes. In the current implementation, descheduler does not schedule replacement of evicted pods but relies on the default scheduler for that.
## TL;DR:
```shell
helm repo add descheduler https://kubernetes-sigs.github.io/descheduler/
helm install descheduler/descheduler --name my-release
```
## Introduction
This chart bootstraps a [descheduler](https://github.com/kubernetes-sigs/descheduler/) cron job on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.14+
## Installing the Chart
To install the chart with the release name `my-release`:
```shell
helm install --name my-release descheduler/descheduler
```
The command deploys _descheduler_ on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```shell
helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the _descheduler_ chart and their default values.
| Parameter | Description | Default |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Docker repository to use | `us.gcr.io/k8s-artifacts-prod/descheduler/descheduler` |
| `image.tag` | Docker tag to use | `v[chart appVersion]` |
| `image.pullPolicy` | Docker image pull policy | `IfNotPresent` |
| `nameOverride` | String to partially override `descheduler.fullname` template (will prepend the release name) | `""` |
| `fullnameOverride` | String to fully override `descheduler.fullname` template | `""` |
| `schedule` | The cron schedule to run the _descheduler_ job on | `"*/2 * * * *"` |
| `cmdOptions` | The options to pass to the _descheduler_ command | _see values.yaml_ |
| `deschedulerPolicy.strategies` | The _descheduler_ strategies to apply | _see values.yaml_ |
| `priorityClassName` | The name of the priority class to add to pods | `system-cluster-critical` |
| `rbac.create` | If `true`, create & use RBAC resources | `true` |
| `serviceAccount.create` | If `true`, create a service account for the cron job | `true` |
| `serviceAccount.name` | The name of the service account to use, if not set and create is true a name is generated using the fullname template | `nil` |
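As a sketch, a custom values file overriding some of these parameters (the values here are illustrative) could be passed to `helm install` with `-f`:
```
schedule: "*/30 * * * *"
cmdOptions:
  v: 4
deschedulerPolicy:
  strategies:
    RemoveDuplicates:
      enabled: false
```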

charts/descheduler/templates/NOTES.txt Normal file

@@ -0,0 +1 @@
Descheduler installed as a cron job.

charts/descheduler/templates/_helpers.tpl Normal file

@@ -0,0 +1,56 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "descheduler.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "descheduler.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "descheduler.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "descheduler.labels" -}}
app.kubernetes.io/name: {{ include "descheduler.name" . }}
helm.sh/chart: {{ include "descheduler.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "descheduler.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "descheduler.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

charts/descheduler/templates/clusterrole.yaml Normal file

@@ -0,0 +1,21 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "descheduler.fullname" . }}
labels:
{{- include "descheduler.labels" . | nindent 4 }}
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
{{- end -}}

charts/descheduler/templates/clusterrolebinding.yaml Normal file

@@ -0,0 +1,16 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "descheduler.fullname" . }}
labels:
{{- include "descheduler.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "descheduler.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "descheduler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}

charts/descheduler/templates/configmap.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "descheduler.fullname" . }}
labels:
{{- include "descheduler.labels" . | nindent 4 }}
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
{{ toYaml .Values.deschedulerPolicy | trim | indent 4 }}
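With an assumed release name of `my-release` and the chart's default `deschedulerPolicy` values, this template renders roughly to the following sketch (labels omitted, strategies trimmed to the first one):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-descheduler
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      RemoveDuplicates:
        enabled: true
```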

charts/descheduler/templates/cronjob.yaml Normal file

@@ -0,0 +1,53 @@
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ template "descheduler.fullname" . }}
labels:
{{- include "descheduler.labels" . | nindent 4 }}
spec:
schedule: {{ .Values.schedule | quote }}
concurrencyPolicy: "Forbid"
jobTemplate:
spec:
template:
metadata:
name: {{ template "descheduler.fullname" . }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if .Values.podAnnotations }}
{{- .Values.podAnnotations | toYaml | nindent 12 }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "descheduler.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podLabels }}
{{- .Values.podLabels | toYaml | nindent 12 }}
{{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
serviceAccountName: {{ template "descheduler.serviceAccountName" . }}
restartPolicy: "Never"
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "v%s" .Chart.AppVersion) }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
{{- range $key, $value := .Values.cmdOptions }}
- {{ printf "--%s" $key | quote }}
{{- if $value }}
- {{ $value | quote }}
{{- end }}
{{- end }}
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
volumes:
- name: policy-volume
configMap:
name: {{ template "descheduler.fullname" . }}

charts/descheduler/templates/serviceaccount.yaml Normal file

@@ -0,0 +1,8 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "descheduler.serviceAccountName" . }}
labels:
{{- include "descheduler.labels" . | nindent 4 }}
{{- end -}}

charts/descheduler/values.yaml Normal file

@@ -0,0 +1,59 @@
# Default values for descheduler.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: us.gcr.io/k8s-artifacts-prod/descheduler/descheduler
# Overrides the image tag whose default is the chart version
tag: ""
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
schedule: "*/2 * * * *"
cmdOptions:
v: 3
# evict-local-storage-pods:
# max-pods-to-evict-per-node: 10
# node-selector: "key1=value1,key2=value2"
deschedulerPolicy:
strategies:
RemoveDuplicates:
enabled: true
RemovePodsViolatingNodeTaints:
enabled: true
RemovePodsViolatingNodeAffinity:
enabled: true
params:
nodeAffinityType:
- requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingInterPodAntiAffinity:
enabled: true
LowNodeUtilization:
enabled: true
params:
nodeResourceUtilizationThresholds:
thresholds:
cpu: 20
memory: 20
pods: 20
targetThresholds:
cpu: 50
memory: 50
pods: 50
priorityClassName: system-cluster-critical
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
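For reference, with the default `cmdOptions` above (`v: 3`), the cronjob template shown earlier renders the container command roughly as the following sketch:
```
command:
  - "/bin/descheduler"
args:
  - "--policy-config-file"
  - "/policy-dir/policy.yaml"
  - "--v"
  - "3"
```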

cloudbuild.yaml Normal file

@@ -0,0 +1,24 @@
# See https://cloud.google.com/cloud-build/docs/build-config
# this must be specified in seconds. If omitted, defaults to 600s (10 mins)
timeout: 1200s
# this prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
# or any new substitutions added in the future.
options:
substitution_option: ALLOW_LOOSE
steps:
- name: 'gcr.io/k8s-testimages/gcb-docker-gcloud:v20190906-745fed4'
entrypoint: make
env:
- DOCKER_CLI_EXPERIMENTAL=enabled
- VERSION=$_GIT_TAG
- BASE_REF=$_PULL_BASE_REF
args:
- push
substitutions:
# _GIT_TAG will be filled with a git-based tag for the image, of the form vYYYYMMDD-hash, and
# can be used as a substitution
_GIT_TAG: '12345'
# _PULL_BASE_REF will contain the ref that was pushed to to trigger this build -
# a branch like 'master' or 'release-0.2', or a tag like 'v0.2'.
_PULL_BASE_REF: 'master'

cmd/descheduler/app/options/options.go

@@ -21,10 +21,9 @@ import (
clientset "k8s.io/client-go/kubernetes"
// install the componentconfig api so we get its defaulting and conversion functions
"github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig"
_ "github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig/install"
"github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig/v1alpha1"
deschedulerscheme "github.com/kubernetes-incubator/descheduler/pkg/descheduler/scheme"
"sigs.k8s.io/descheduler/pkg/apis/componentconfig"
"sigs.k8s.io/descheduler/pkg/apis/componentconfig/v1alpha1"
deschedulerscheme "sigs.k8s.io/descheduler/pkg/descheduler/scheme"
"github.com/spf13/pflag"
)
@@ -49,7 +48,7 @@ func NewDeschedulerServer() *DeschedulerServer {
// AddFlags adds flags for a specific SchedulerServer to the specified FlagSet
func (rs *DeschedulerServer) AddFlags(fs *pflag.FlagSet) {
fs.DurationVar(&rs.DeschedulingInterval, "descheduling-interval", rs.DeschedulingInterval, "time interval between two consecutive descheduler executions")
fs.DurationVar(&rs.DeschedulingInterval, "descheduling-interval", rs.DeschedulingInterval, "Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified.")
fs.StringVar(&rs.KubeconfigFile, "kubeconfig", rs.KubeconfigFile, "File with kube configuration.")
fs.StringVar(&rs.PolicyConfigFile, "policy-config-file", rs.PolicyConfigFile, "File with descheduler policy configuration.")
fs.BoolVar(&rs.DryRun, "dry-run", rs.DryRun, "execute descheduler in dry run mode.")
@@ -57,4 +56,6 @@ func (rs *DeschedulerServer) AddFlags(fs *pflag.FlagSet) {
fs.StringVar(&rs.NodeSelector, "node-selector", rs.NodeSelector, "Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)")
// max-no-pods-to-evict limits the maximum number of pods to be evicted per node by descheduler.
fs.IntVar(&rs.MaxNoOfPodsToEvictPerNode, "max-pods-to-evict-per-node", rs.MaxNoOfPodsToEvictPerNode, "Limits the maximum number of pods to be evicted per node by descheduler")
// evict-local-storage-pods allows eviction of pods that are using local storage. This is false by default.
fs.BoolVar(&rs.EvictLocalStoragePods, "evict-local-storage-pods", rs.EvictLocalStoragePods, "Enables evicting pods using local storage by descheduler")
}

cmd/descheduler/app/server.go

@@ -21,14 +21,14 @@ import (
"flag"
"io"
"github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
"github.com/kubernetes-incubator/descheduler/pkg/descheduler"
"sigs.k8s.io/descheduler/cmd/descheduler/app/options"
"sigs.k8s.io/descheduler/pkg/descheduler"
"github.com/golang/glog"
"github.com/spf13/cobra"
aflag "k8s.io/apiserver/pkg/util/flag"
"k8s.io/apiserver/pkg/util/logs"
aflag "k8s.io/component-base/cli/flag"
"k8s.io/component-base/logs"
"k8s.io/klog/v2"
)
// NewDeschedulerCommand creates a *cobra.Command object with default parameters
@@ -43,7 +43,7 @@ func NewDeschedulerCommand(out io.Writer) *cobra.Command {
defer logs.FlushLogs()
err := Run(s)
if err != nil {
glog.Errorf("%v", err)
klog.Errorf("%v", err)
}
},
}

cmd/descheduler/app/version.go

@@ -18,9 +18,10 @@ package app
import (
"fmt"
"github.com/spf13/cobra"
"runtime"
"strings"
"github.com/spf13/cobra"
)
var (

cmd/descheduler/descheduler.go

@@ -21,7 +21,7 @@ import (
"fmt"
"os"
"github.com/kubernetes-incubator/descheduler/cmd/descheduler/app"
"sigs.k8s.io/descheduler/cmd/descheduler/app"
)
func main() {

docs/README.md Normal file

@@ -0,0 +1,5 @@
# Documentation Index
- [Contributor Guide](contributor-guide.md)
- [Release Guide](release-guide.md)
- [User Guide](user-guide.md)

docs/contributor-guide.md Normal file

@@ -0,0 +1,42 @@
# Contributor Guide
## Required Tools
- [Git](https://git-scm.com/downloads)
- [Go 1.14+](https://golang.org/dl/)
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)
- [kind](https://kind.sigs.k8s.io/)
## Build and Run
Build descheduler.
```sh
cd $GOPATH/src/sigs.k8s.io
git clone https://github.com/kubernetes-sigs/descheduler.git
cd descheduler
make
```
Run descheduler.
```sh
./_output/bin/descheduler --kubeconfig <path to kubeconfig> --policy-config-file <path-to-policy-file> --v 1
```
View all CLI options.
```
./_output/bin/descheduler --help
```
## Run Tests
```
GOOS=linux make dev-image
kind create cluster --config hack/kind_config.yaml
kind load docker-image <image name>
kind get kubeconfig > /tmp/admin.conf
make test-unit
make test-e2e
```
### Miscellaneous
See the [hack directory](https://github.com/kubernetes-sigs/descheduler/tree/master/hack) for additional tools and scripts used for developing the descheduler.

docs/release-guide.md Normal file

@@ -0,0 +1,69 @@
# Release Guide
## Container Image
### Semi-automatic
1. Make sure your repo is clean by git's standards
2. Create a release branch `git checkout -b release-1.18` (not required for patch releases)
3. Push the release branch to the descheduler repo and ensure branch protection is enabled (not required for patch releases)
4. Tag the repository and push the tag `VERSION=v0.18.0 git tag -m $VERSION $VERSION; git push origin $VERSION`
5. Publish a draft release using the tag you just created
6. Perform the [image promotion process](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io#image-promoter)
7. Publish release
8. Email `kubernetes-sig-scheduling@googlegroups.com` to announce the release
### Manual
1. Make sure your repo is clean by git's standards
2. Create a release branch `git checkout -b release-1.18` (not required for patch releases)
3. Push the release branch to the descheduler repo and ensure branch protection is enabled (not required for patch releases)
4. Tag the repository and push the tag `VERSION=v0.18.0 git tag -m $VERSION $VERSION; git push origin $VERSION`
5. Checkout the tag you just created and make sure your repo is clean by git's standards `git checkout $VERSION`
6. Build and push the container image to the staging registry `VERSION=$VERSION make push`
7. Publish a draft release using the tag you just created
8. Perform the [image promotion process](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io#image-promoter)
9. Publish release
10. Email `kubernetes-sig-scheduling@googlegroups.com` to announce the release
### Notes
See [post-descheduler-push-images dashboard](https://testgrid.k8s.io/sig-scheduling#post-descheduler-push-images) for staging registry image build job status.
View the descheduler staging registry using [this URL](https://console.cloud.google.com/gcr/images/k8s-staging-descheduler/GLOBAL/descheduler) in a web browser
or use the below `gcloud` commands.
List images in staging registry.
```
gcloud container images list --repository gcr.io/k8s-staging-descheduler
```
List descheduler image tags in the staging registry.
```
gcloud container images list-tags gcr.io/k8s-staging-descheduler/descheduler
```
Get SHA256 hash for a specific image in the staging registry.
```
gcloud container images describe gcr.io/k8s-staging-descheduler/descheduler:v20200206-0.9.0-94-ge2a23f284
```
Pull image from the staging registry.
```
docker pull gcr.io/k8s-staging-descheduler/descheduler:v20200206-0.9.0-94-ge2a23f284
```
## Helm Chart
Helm chart releases are managed by a separate set of git tags that are prefixed with `chart-*`. Example git tag name is `chart-0.18.0`. Released versions of the
helm charts are stored in the `gh-pages` branch of this repo. The [chart-releaser-action GitHub Action](https://github.com/helm/chart-releaser-action) is setup to
build and push the helm charts to the `gh-pages` branch when a `chart-*` git tag is created.
The major and minor version of the chart matches the descheduler major and minor versions. For example, descheduler helm chart version chart-0.18.0 corresponds
to descheduler version v0.18.0. The patch version of the descheduler helm chart and the patch version of the descheduler will not necessarily match. The patch
version of the descheduler helm chart is used to version changes specific to the helm chart.
1. Merge all helm chart changes into the appropriate release branch (i.e. release-1.18)
   1. Ensure that `appVersion` in file `charts/descheduler/Chart.yaml` matches the descheduler version (no `v` prefix)
   2. Ensure that `version` in file `charts/descheduler/Chart.yaml` has been incremented. This is the chart version.
2. Make sure your repo is clean by git's standards
3. Create the tag and push it `git checkout release-1.18; CHART_VERSION=chart-0.18.0; git tag $CHART_VERSION; git push origin $CHART_VERSION`
4. Verify the new helm artifact has been successfully pushed to the `gh-pages` branch

docs/user-guide.md Normal file

@@ -0,0 +1,90 @@
# User Guide
Starting with descheduler release v0.10.0, container images are available in these container registries.
* `asia.gcr.io/k8s-artifacts-prod/descheduler/descheduler`
* `eu.gcr.io/k8s-artifacts-prod/descheduler/descheduler`
* `us.gcr.io/k8s-artifacts-prod/descheduler/descheduler`
## Policy Configuration Examples
The [examples](https://github.com/kubernetes-sigs/descheduler/tree/master/examples) directory has descheduler policy configuration examples.
## CLI Options
The descheduler has many CLI options that can be used to override its default behavior.
```
descheduler --help
The descheduler evicts pods which may be bound to less desired nodes
Usage:
descheduler [flags]
descheduler [command]
Available Commands:
help Help about any command
version Version of descheduler
Flags:
--add-dir-header If true, adds the file directory to the header
--alsologtostderr log to standard error as well as files
--descheduling-interval duration Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified.
--dry-run execute descheduler in dry run mode.
--evict-local-storage-pods Enables evicting pods using local storage by descheduler
-h, --help help for descheduler
--kubeconfig string File with kube configuration.
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--max-pods-to-evict-per-node int Limits the maximum number of pods to be evicted per node by descheduler
--node-selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
--policy-config-file string File with descheduler policy configuration.
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Use "descheduler [command] --help" for more information about a command.
```
## Production Use Cases
This section contains descriptions of real world production use cases.
### Balance Cluster By Pod Age
When initially migrating applications from a static virtual machine infrastructure to a cloud native k8s
infrastructure there can be a tendency to treat application pods like static virtual machines. One approach
to help prevent developers and operators from treating pods like virtual machines is to ensure that pods
only run for a fixed amount of time.
The `PodLifeTime` strategy can be used to ensure that old pods are evicted. It is recommended to create a
[pod disruption budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) for each
application to ensure application availability; a sample PDB sketch follows the policy below.
```
descheduler -v=3 --evict-local-storage-pods --policy-config-file=pod-life-time.yml
```
This policy configuration file ensures that pods created more than 7 days ago are evicted.
```
---
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"LowNodeUtilization":
enabled: false
"RemoveDuplicates":
enabled: false
"RemovePodsViolatingInterPodAntiAffinity":
enabled: false
"RemovePodsViolatingNodeAffinity":
enabled: false
"RemovePodsViolatingNodeTaints":
enabled: false
"RemovePodsHavingTooManyRestarts":
enabled: false
"PodLifeTime":
enabled: true
params:
maxPodLifeTimeSeconds: 604800 # pods run for a maximum of 7 days
```
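The pod disruption budget recommended above might look like the following sketch (the name and selector are illustrative and must match your application):
```
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # illustrative name
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app           # illustrative selector
```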

examples/pod-life-time.yml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"LowNodeUtilization":
enabled: false
"RemoveDuplicates":
enabled: false
"RemovePodsViolatingInterPodAntiAffinity":
enabled: false
"RemovePodsViolatingNodeAffinity":
enabled: false
"RemovePodsViolatingNodeTaints":
enabled: false
"RemovePodsHavingTooManyRestarts":
enabled: false
"PodLifeTime":
enabled: true
params:
maxPodLifeTimeSeconds: 604800 # 7 days


@@ -17,3 +17,9 @@ strategies:
"cpu" : 50
"memory": 50
"pods": 50
"RemovePodsHavingTooManyRestarts":
enabled: true
params:
podsHavingTooManyRestarts:
podRestartThreshold: 100
includingInitContainers: true

glide.lock generated

@@ -1,464 +0,0 @@
hash: ed51a8e643db6e9996ef0ffca671fb31ab5b7fe0d61ecdda828192871f9da366
updated: 2018-05-22T18:05:00.26435-07:00
imports:
- name: cloud.google.com/go
version: 3b1ae45394a234c385be014e9a488f2bb6eef821
subpackages:
- compute/metadata
- internal
- name: github.com/Azure/go-autorest
version: e14a70c556c8e0db173358d1a903dca345a8e75e
subpackages:
- autorest
- autorest/adal
- autorest/azure
- autorest/date
- name: github.com/davecgh/go-spew
version: 782f4967f2dc4564575ca782fe2d04090b5faca8
subpackages:
- spew
- name: github.com/dgrijalva/jwt-go
version: 01aeca54ebda6e0fbfafd0a524d234159c05ec20
- name: github.com/docker/distribution
version: edc3ab29cdff8694dd6feb85cfeb4b5f1b38ed9c
subpackages:
- digestset
- reference
- name: github.com/emicklei/go-restful
version: ff4f55a206334ef123e4f79bbf348980da81ca46
subpackages:
- log
- name: github.com/ghodss/yaml
version: 73d445a93680fa1a78ae23a5839bad48f32ba1ee
- name: github.com/go-openapi/jsonpointer
version: 46af16f9f7b149af66e5d1bd010e3574dc06de98
- name: github.com/go-openapi/jsonreference
version: 13c6e3589ad90f49bd3e3bbe2c2cb3d7a4142272
- name: github.com/go-openapi/spec
version: 7abd5745472fff5eb3685386d5fb8bf38683154d
- name: github.com/go-openapi/swag
version: f3f9494671f93fcff853e3c6e9e948b3eb71e590
- name: github.com/gogo/protobuf
version: c0656edd0d9eab7c66d1eb0c568f9039345796f7
subpackages:
- proto
- sortkeys
- name: github.com/golang/glog
version: 23def4e6c14b4da8ac2ed8007337bc5eb5007998
- name: github.com/golang/protobuf
version: 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9
subpackages:
- jsonpb
- proto
- ptypes
- ptypes/any
- ptypes/duration
- ptypes/struct
- ptypes/timestamp
- name: github.com/google/btree
version: 7d79101e329e5a3adf994758c578dab82b90c017
- name: github.com/google/gofuzz
version: 44d81051d367757e1c7c6a5a86423ece9afcf63c
- name: github.com/googleapis/gnostic
version: 0c5108395e2debce0d731cf0287ddf7242066aba
subpackages:
- OpenAPIv2
- compiler
- extensions
- name: github.com/gophercloud/gophercloud
version: 8183543f90d1aef267a5ecc209f2e0715b355acb
subpackages:
- openstack
- openstack/identity/v2/tenants
- openstack/identity/v2/tokens
- openstack/identity/v3/tokens
- openstack/utils
- pagination
- name: github.com/gregjones/httpcache
version: 787624de3eb7bd915c329cba748687a3b22666a6
subpackages:
- diskcache
- name: github.com/hashicorp/golang-lru
version: a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4
subpackages:
- simplelru
- name: github.com/howeyc/gopass
version: bf9dde6d0d2c004a008c27aaee91170c786f6db8
- name: github.com/imdario/mergo
version: 6633656539c1639d9d78127b7d47c622b5d7b6dc
- name: github.com/inconshreveable/mousetrap
version: 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
- name: github.com/json-iterator/go
version: 36b14963da70d11297d313183d7e6388c8510e1e
- name: github.com/juju/ratelimit
version: 5b9ff866471762aa2ab2dced63c9fb6f53921342
- name: github.com/kubernetes/repo-infra
version: dbcbd7624d5e4eb29f33c48edf1b1651809827a3
- name: github.com/mailru/easyjson
version: 2f5df55504ebc322e4d52d34df6a1f5b503bf26d
subpackages:
- buffer
- jlexer
- jwriter
- name: github.com/opencontainers/go-digest
version: a6d0ee40d4207ea02364bd3b9e8e77b9159ba1eb
- name: github.com/peterbourgon/diskv
version: 5f041e8faa004a95c88a202771f4cc3e991971e6
- name: github.com/PuerkitoBio/purell
version: 8a290539e2e8629dbc4e6bad948158f790ec31f4
- name: github.com/PuerkitoBio/urlesc
version: 5bd2802263f21d8788851d5305584c82a5c75d7e
- name: github.com/spf13/cobra
version: f62e98d28ab7ad31d707ba837a966378465c7b57
- name: github.com/spf13/pflag
version: 4c012f6dcd9546820e378d0bdda4d8fc772cdfea
- name: github.com/ugorji/go
version: f57d8945648dbfe4c332cff9c50fb57548958e3f
subpackages:
- codec
- name: golang.org/x/crypto
version: 81e90905daefcd6fd217b62423c0908922eadb30
subpackages:
- bcrypt
- blowfish
- nacl/secretbox
- poly1305
- salsa20/salsa
- ssh/terminal
- name: golang.org/x/net
version: 1c05540f6879653db88113bc4a2b70aec4bd491f
subpackages:
- context
- context/ctxhttp
- html
- html/atom
- http2
- http2/hpack
- idna
- internal/timeseries
- lex/httplex
- trace
- websocket
- name: golang.org/x/oauth2
version: a6bd8cefa1811bd24b86f8902872e4e8225f74c4
subpackages:
- google
- internal
- jws
- jwt
- name: golang.org/x/sys
version: 95c6576299259db960f6c5b9b69ea52422860fce
subpackages:
- unix
- windows
- name: golang.org/x/text
version: b19bf474d317b857955b12035d2c5acb57ce8b01
subpackages:
- cases
- internal
- internal/tag
- language
- runes
- secure/bidirule
- secure/precis
- transform
- unicode/bidi
- unicode/norm
- width
- name: golang.org/x/tools
version: 8cab8a1319f0be9798e7fe78b15da75e5f94b2e9
subpackages:
- imports
- name: google.golang.org/appengine
version: b1f26356af11148e710935ed1ac8a7f5702c7612
subpackages:
- internal
- internal/app_identity
- internal/base
- internal/datastore
- internal/log
- internal/modules
- internal/remote_api
- internal/urlfetch
- urlfetch
- name: gopkg.in/inf.v0
version: 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4
- name: gopkg.in/yaml.v2
version: 53feefa2559fb8dfa8d81baad31be332c97d6c77
- name: k8s.io/api
version: af4bc157c3a209798fc897f6d4aaaaeb6c2e0d6a
subpackages:
- admission/v1beta1
- admissionregistration/v1alpha1
- admissionregistration/v1beta1
- apps/v1
- apps/v1beta1
- apps/v1beta2
- authentication/v1
- authentication/v1beta1
- authorization/v1
- authorization/v1beta1
- autoscaling/v1
- autoscaling/v2beta1
- batch/v1
- batch/v1beta1
- batch/v2alpha1
- certificates/v1beta1
- core/v1
- events/v1beta1
- extensions/v1beta1
- imagepolicy/v1alpha1
- networking/v1
- policy/v1beta1
- rbac/v1
- rbac/v1alpha1
- rbac/v1beta1
- scheduling/v1alpha1
- settings/v1alpha1
- storage/v1
- storage/v1alpha1
- storage/v1beta1
- name: k8s.io/apiextensions-apiserver
version: aaccab68c17a51ccdfd3c8559221cce9334d3394
subpackages:
- pkg/features
- name: k8s.io/apimachinery
version: 180eddb345a5be3a157cea1c624700ad5bd27b8f
subpackages:
- pkg/api/errors
- pkg/api/meta
- pkg/api/resource
- pkg/apimachinery
- pkg/apimachinery/announced
- pkg/apimachinery/registered
- pkg/apis/meta/internalversion
- pkg/apis/meta/v1
- pkg/apis/meta/v1/unstructured
- pkg/apis/meta/v1alpha1
- pkg/conversion
- pkg/conversion/queryparams
- pkg/fields
- pkg/labels
- pkg/runtime
- pkg/runtime/schema
- pkg/runtime/serializer
- pkg/runtime/serializer/json
- pkg/runtime/serializer/protobuf
- pkg/runtime/serializer/recognizer
- pkg/runtime/serializer/streaming
- pkg/runtime/serializer/versioning
- pkg/selection
- pkg/types
- pkg/util/cache
- pkg/util/clock
- pkg/util/diff
- pkg/util/errors
- pkg/util/framer
- pkg/util/intstr
- pkg/util/json
- pkg/util/net
- pkg/util/runtime
- pkg/util/sets
- pkg/util/validation
- pkg/util/validation/field
- pkg/util/wait
- pkg/util/yaml
- pkg/version
- pkg/watch
- third_party/forked/golang/reflect
- name: k8s.io/apiserver
version: 91e14f394e4796abf5a994a349a222e7081d86b6
subpackages:
- pkg/features
- pkg/util/feature
- pkg/util/flag
- pkg/util/logs
- name: k8s.io/client-go
version: 78700dec6369ba22221b72770783300f143df150
subpackages:
- discovery
- discovery/fake
- kubernetes
- kubernetes/fake
- kubernetes/scheme
- kubernetes/typed/admissionregistration/v1alpha1
- kubernetes/typed/admissionregistration/v1alpha1/fake
- kubernetes/typed/admissionregistration/v1beta1
- kubernetes/typed/admissionregistration/v1beta1/fake
- kubernetes/typed/apps/v1
- kubernetes/typed/apps/v1/fake
- kubernetes/typed/apps/v1beta1
- kubernetes/typed/apps/v1beta1/fake
- kubernetes/typed/apps/v1beta2
- kubernetes/typed/apps/v1beta2/fake
- kubernetes/typed/authentication/v1
- kubernetes/typed/authentication/v1/fake
- kubernetes/typed/authentication/v1beta1
- kubernetes/typed/authentication/v1beta1/fake
- kubernetes/typed/authorization/v1
- kubernetes/typed/authorization/v1/fake
- kubernetes/typed/authorization/v1beta1
- kubernetes/typed/authorization/v1beta1/fake
- kubernetes/typed/autoscaling/v1
- kubernetes/typed/autoscaling/v1/fake
- kubernetes/typed/autoscaling/v2beta1
- kubernetes/typed/autoscaling/v2beta1/fake
- kubernetes/typed/batch/v1
- kubernetes/typed/batch/v1/fake
- kubernetes/typed/batch/v1beta1
- kubernetes/typed/batch/v1beta1/fake
- kubernetes/typed/batch/v2alpha1
- kubernetes/typed/batch/v2alpha1/fake
- kubernetes/typed/certificates/v1beta1
- kubernetes/typed/certificates/v1beta1/fake
- kubernetes/typed/core/v1
- kubernetes/typed/core/v1/fake
- kubernetes/typed/events/v1beta1
- kubernetes/typed/events/v1beta1/fake
- kubernetes/typed/extensions/v1beta1
- kubernetes/typed/extensions/v1beta1/fake
- kubernetes/typed/networking/v1
- kubernetes/typed/networking/v1/fake
- kubernetes/typed/policy/v1beta1
- kubernetes/typed/policy/v1beta1/fake
- kubernetes/typed/rbac/v1
- kubernetes/typed/rbac/v1/fake
- kubernetes/typed/rbac/v1alpha1
- kubernetes/typed/rbac/v1alpha1/fake
- kubernetes/typed/rbac/v1beta1
- kubernetes/typed/rbac/v1beta1/fake
- kubernetes/typed/scheduling/v1alpha1
- kubernetes/typed/scheduling/v1alpha1/fake
- kubernetes/typed/settings/v1alpha1
- kubernetes/typed/settings/v1alpha1/fake
- kubernetes/typed/storage/v1
- kubernetes/typed/storage/v1/fake
- kubernetes/typed/storage/v1alpha1
- kubernetes/typed/storage/v1alpha1/fake
- kubernetes/typed/storage/v1beta1
- kubernetes/typed/storage/v1beta1/fake
- listers/core/v1
- pkg/version
- plugin/pkg/client/auth
- plugin/pkg/client/auth/azure
- plugin/pkg/client/auth/gcp
- plugin/pkg/client/auth/oidc
- plugin/pkg/client/auth/openstack
- rest
- rest/watch
- testing
- third_party/forked/golang/template
- tools/auth
- tools/cache
- tools/clientcmd
- tools/clientcmd/api
- tools/clientcmd/api/latest
- tools/clientcmd/api/v1
- tools/metrics
- tools/pager
- tools/reference
- transport
- util/buffer
- util/cert
- util/flowcontrol
- util/homedir
- util/integer
- util/jsonpath
- name: k8s.io/code-generator
version: fef8bcdbaf36ac6a1a18c9ef7d85200b249fad30
- name: k8s.io/gengo
version: 1ef560bbde5195c01629039ad3b337ce63e7b321
- name: k8s.io/kube-openapi
version: 39a7bf85c140f972372c2a0d1ee40adbf0c8bfe1
subpackages:
- pkg/builder
- pkg/common
- pkg/handler
- pkg/util
- pkg/util/proto
- name: k8s.io/kubernetes
version: 925c127ec6b946659ad0fd596fa959be43f0cc05
subpackages:
- pkg/api/legacyscheme
- pkg/api/testapi
- pkg/api/v1/resource
- pkg/apis/admission
- pkg/apis/admission/install
- pkg/apis/admission/v1beta1
- pkg/apis/admissionregistration
- pkg/apis/admissionregistration/install
- pkg/apis/admissionregistration/v1alpha1
- pkg/apis/admissionregistration/v1beta1
- pkg/apis/apps
- pkg/apis/apps/install
- pkg/apis/apps/v1
- pkg/apis/apps/v1beta1
- pkg/apis/apps/v1beta2
- pkg/apis/authentication
- pkg/apis/authentication/install
- pkg/apis/authentication/v1
- pkg/apis/authentication/v1beta1
- pkg/apis/authorization
- pkg/apis/authorization/install
- pkg/apis/authorization/v1
- pkg/apis/authorization/v1beta1
- pkg/apis/autoscaling
- pkg/apis/autoscaling/install
- pkg/apis/autoscaling/v1
- pkg/apis/autoscaling/v2beta1
- pkg/apis/batch
- pkg/apis/batch/install
- pkg/apis/batch/v1
- pkg/apis/batch/v1beta1
- pkg/apis/batch/v2alpha1
- pkg/apis/certificates
- pkg/apis/certificates/install
- pkg/apis/certificates/v1beta1
- pkg/apis/componentconfig
- pkg/apis/componentconfig/install
- pkg/apis/componentconfig/v1alpha1
- pkg/apis/core
- pkg/apis/core/helper
- pkg/apis/core/install
- pkg/apis/core/v1
- pkg/apis/core/v1/helper
- pkg/apis/core/v1/helper/qos
- pkg/apis/events
- pkg/apis/events/install
- pkg/apis/events/v1beta1
- pkg/apis/extensions
- pkg/apis/extensions/install
- pkg/apis/extensions/v1beta1
- pkg/apis/imagepolicy
- pkg/apis/imagepolicy/install
- pkg/apis/imagepolicy/v1alpha1
- pkg/apis/networking
- pkg/apis/networking/install
- pkg/apis/networking/v1
- pkg/apis/policy
- pkg/apis/policy/install
- pkg/apis/policy/v1beta1
- pkg/apis/rbac
- pkg/apis/rbac/install
- pkg/apis/rbac/v1
- pkg/apis/rbac/v1alpha1
- pkg/apis/rbac/v1beta1
- pkg/apis/scheduling
- pkg/apis/scheduling/install
- pkg/apis/scheduling/v1alpha1
- pkg/apis/settings
- pkg/apis/settings/install
- pkg/apis/settings/v1alpha1
- pkg/apis/storage
- pkg/apis/storage/install
- pkg/apis/storage/v1
- pkg/apis/storage/v1alpha1
- pkg/apis/storage/v1beta1
- pkg/features
- pkg/kubelet/apis
- pkg/kubelet/types
- pkg/master/ports
- pkg/util/parsers
- pkg/util/pointer
- plugin/pkg/scheduler/algorithm/priorities/util
testImports: []

@@ -1,25 +0,0 @@
package: github.com/kubernetes-incubator/descheduler
import:
- package: k8s.io/client-go
version: 78700dec6369ba22221b72770783300f143df150
- package: k8s.io/api
version: af4bc157c3a209798fc897f6d4aaaaeb6c2e0d6a
- package: k8s.io/apiserver
version: 91e14f394e4796abf5a994a349a222e7081d86b6
- package: k8s.io/apimachinery
version: 180eddb345a5be3a157cea1c624700ad5bd27b8f
- package: k8s.io/kubernetes
version: v1.9.0
- package: k8s.io/code-generator
version: kubernetes-1.9.0
- package: github.com/kubernetes/repo-infra
- package: github.com/spf13/cobra
version: f62e98d28ab7ad31d707ba837a966378465c7b57
- package: k8s.io/gengo
- package: github.com/ugorji/go
version: v.1.1-beta
- package: github.com/Azure/go-autorest
version: e14a70c556c8e0db173358d1a903dca345a8e75e
- package: golang.org/x/tools
subpackages:
- imports

go.mod Normal file

@@ -0,0 +1,15 @@
module sigs.k8s.io/descheduler
go 1.14
require (
github.com/spf13/cobra v0.0.5
github.com/spf13/pflag v1.0.5
k8s.io/api v0.18.4
k8s.io/apimachinery v0.18.4
k8s.io/apiserver v0.18.4
k8s.io/client-go v0.18.4
k8s.io/code-generator v0.18.4
k8s.io/component-base v0.18.4
k8s.io/klog/v2 v2.0.0
)
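
A minimal sanity check of the new module setup, assuming a Go 1.14 toolchain on PATH (both commands are stock go tooling, not project scripts):

go mod verify    # confirm downloaded module checksums match go.sum
go build ./...   # confirm the dependency graph actually compiles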

go.sum Normal file

@@ -0,0 +1,381 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0 h1:ROfEUZz+Gh5pa62DJWXSaonyu3StP6EA6lPEXPI6mCo=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU=
github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM=
github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.2.0 h1:Ww5g4zThfD/6cLb4z6xxgeyDa7QDkizMkJKe0ysZXp0=
github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY=
github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible h1:spTtZBk5DYEvbxMVutUuTyh1Ao2r4iyvLdACqsl/Ljk=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.2.0+incompatible h1:fUDGZCv/7iAN7u0puUVhvKCcsR6vRfwrJatElLBEf0I=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logr/logr v0.1.0 h1:M1Tv3VzNlEHg6uyACnRdtrploV2P7wZqH8BoQMtz0cg=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3 h1:5cxNfTy0UVC3X8JL5ymxzyoUZmo8iZb+jeTWn7tUa8o=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
github.com/go-openapi/spec v0.19.3 h1:0XRyw8kguri6Yw4SxhsQA/atC88yqrk0+G4YhI2wabc=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903 h1:LbsanbbD6LieFkXbj9YNNBupiGHJgFeLpO0j0Fza1h8=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.1.0 h1:rVsPeBmXbYv4If/cumu1AzZPwV58q433hvONV1UEZoI=
github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/gophercloud/gophercloud v0.1.0 h1:P/nh25+rzXouhytV2pUHBb65fnds26Ghl8/391+sT5o=
github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8 h1:QiWkFLKq0T7mpzwOTu6BzNDbfTE8OLrYhVKYMLF46Ok=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0 h1:aizVhC/NAAcKWb+5QsU1iNOZb4Yws5UO2I+aIprQITM=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0 h1:JAKSXpt1YjtLA7YpPiqO9ss6sNXEsPfSGdwN0UHqzrw=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975 h1:/Tl7pH94bvbAAHBdZJT947M/+gp0+CqQXDtMRC0fseo=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9 h1:rjwSpXsdiK0dV8/Naq3kAw9ymfAeJIyd0upUIElB+lI=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7 h1:HmbHVPwrPEKPGLAcHSrMe6+hqSUlvZU0rab6x5EXfGU=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72 h1:bw9doJza/SFBEweII/rHQh338oozWyiFsBRHtrflcws=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.18.4 h1:8x49nBRxuXGUlDlwlWd3RMY1SayZrzFfxea3UZSkFw4=
k8s.io/api v0.18.4/go.mod h1:lOIQAKYgai1+vz9J7YcDZwC26Z0zQewYOGWdyIPUUQ4=
k8s.io/apimachinery v0.18.4 h1:ST2beySjhqwJoIFk6p7Hp5v5O0hYY6Gngq/gUYXTPIA=
k8s.io/apimachinery v0.18.4/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
k8s.io/apiserver v0.18.4 h1:pn1jSQkfboPSirZopkVpEdLW4FcQLnYMaIY8LFxxj30=
k8s.io/apiserver v0.18.4/go.mod h1:q+zoFct5ABNnYkGIaGQ3bcbUNdmPyOCoEBcg51LChY8=
k8s.io/client-go v0.18.4 h1:un55V1Q/B3JO3A76eS0kUSywgGK/WR3BQ8fHQjNa6Zc=
k8s.io/client-go v0.18.4/go.mod h1:f5sXwL4yAZRkAtzOxRWUhA/N8XzGCb+nPZI8PfobZ9g=
k8s.io/code-generator v0.18.4 h1:SouAMfh3jbL7aL8rnUQ/C+7WwXYTZnPa8L9V2TtIE7o=
k8s.io/code-generator v0.18.4/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
k8s.io/component-base v0.18.4 h1:Kr53Fp1iCGNsl9Uv4VcRvLy7YyIqi9oaJOQ7SXtKI98=
k8s.io/component-base v0.18.4/go.mod h1:7jr/Ef5PGmKwQhyAz/pjByxJbC58mhKAhiaDu0vXfPk=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20200114144118-36b2048a9120 h1:RPscN6KhmG54S33L+lr3GS+oD1jmchIU0ll519K6FA4=
k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/klog/v2 v2.0.0 h1:Foj74zO6RbjjP4hBEKjnYtjjAhGg4jNynUdYF6fJrok=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6 h1:Oh3Mzx5pJ+yIumsAD0MOECPVeXsVot0UkiaCGVyfGQY=
k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89 h1:d4vVOjXm687F1iLSP2q3lyPPuyvTUt3aVoBpi2DqRsU=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0 h1:dOmIZBMfhcHS09XZkMyUgkq5trg3/jRyJYFZUiaOp8E=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=

@@ -10,7 +10,7 @@ master_uuid=$(uuid)
node1_uuid=$(uuid)
node2_uuid=$(uuid)
kube_apiserver_port=6443
kube_version=1.9.4
kube_version=1.13.1
DESCHEDULER_ROOT=$(dirname "${BASH_SOURCE}")/../../
E2E_GCE_HOME=$DESCHEDULER_ROOT/hack/e2e-gce
@@ -36,10 +36,10 @@ create_cluster() {
generate_kubeadm_instance_files() {
# TODO: Check if they have come up. awk $6 contains the state(RUNNING or not).
master_public_ip=$(gcloud compute instances list | grep $master_uuid|awk '{print $5}')
master_private_ip=$(gcloud compute instances list | grep $master_uuid|awk '{print $4}')
node1_public_ip=$(gcloud compute instances list | grep $node1_uuid|awk '{print $5}')
node2_public_ip=$(gcloud compute instances list | grep $node2_uuid|awk '{print $5}')
echo "kubeadm init --kubernetes-version=${kube_version} --apiserver-advertise-address=${master_public_ip}" --skip-preflight-checks --pod-network-cidr=10.96.0.0/12 > $E2E_GCE_HOME/kubeadm_install.sh
echo "kubeadm init --kubernetes-version=${kube_version} --apiserver-advertise-address=${master_private_ip}" --ignore-preflight-errors=all --pod-network-cidr=10.96.0.0/12 > $E2E_GCE_HOME/kubeadm_install.sh
}

hack/kind_config.yaml Normal file

@@ -0,0 +1,6 @@
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
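
A disposable three-node cluster matching this config can be created with kind; a minimal sketch, where the kindest/node image tag is an assumption and should be matched to the Kubernetes version under test:

kind create cluster --config hack/kind_config.yaml --image kindest/node:v1.18.2
# ...run the e2e suite against the cluster, then tear it down:
kind delete cluster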

@@ -43,5 +43,5 @@ OS_ROOT="$( os::util::absolute_path "${init_source}" )"
export OS_ROOT
cd "${OS_ROOT}"
PRJ_PREFIX="github.com/${REPO_ORG:-kubernetes-incubator}/descheduler"
PRJ_PREFIX="sigs.k8s.io/descheduler"
OS_OUTPUT_BINPATH="${OS_ROOT}/_output/bin"

hack/tools.go Normal file

@@ -0,0 +1,22 @@
// +build tools
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This package imports things required by build scripts, to force `go mod` to see them as dependencies
package tools
import _ "k8s.io/code-generator"
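
The blank import is the usual Go tools-file pattern: it keeps k8s.io/code-generator in the module graph even though no ordinary package imports it, so the hack scripts can build the generators directly by import path, as the updated scripts below do:

go build -o _output/bin/deepcopy-gen k8s.io/code-generator/cmd/deepcopy-gen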

@@ -1,151 +0,0 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
# The sort at the end makes sure we feed the topological sort a deterministic
# list (since there aren't many dependencies).
generated_files=($(
find . -not \( \
\( \
-wholename './output' \
-o -wholename './_output' \
-o -wholename './staging' \
-o -wholename './release' \
-o -wholename './target' \
-o -wholename '*/third_party/*' \
-o -wholename '*/vendor/*' \
-o -wholename '*/codecgen-*-1234.generated.go' \
\) -prune \
\) -name '*.generated.go' | LC_ALL=C sort -r
))
# We only work for deps within this prefix.
#my_prefix="k8s.io/kubernetes"
my_prefix="github.com/${REPO_ORG:-kubernetes-incubator}/descheduler"
# Register function to be called on EXIT to remove codecgen
# binary and also to touch the files that should be regenerated
# since they are first removed.
# This is necessary to make the script work after previous failure.
function cleanup {
rm -f "${CODECGEN:-}"
pushd "${KUBE_ROOT}" > /dev/null
for (( i=0; i < number; i++ )); do
touch "${generated_files[${i}]}" || true
done
popd > /dev/null
}
trap cleanup EXIT
# Precompute dependencies for all directories.
# Then sort all files in the dependency order.
number=${#generated_files[@]}
result=""
for (( i=0; i<number; i++ )); do
visited[${i}]=false
file="${generated_files[${i}]/\.generated\.go/.go}"
deps[${i}]=$(go list -f '{{range .Deps}}{{.}}{{"\n"}}{{end}}' ${file} | grep "^${my_prefix}")
done
###echo "DBG: found $number generated files"
###for f in $(echo "${generated_files[@]}" | LC_ALL=C sort); do
### echo "DBG: $f"
###done
# NOTE: depends function assumes that the whole repository is under
# $my_prefix - it will NOT work if that is not true.
function depends {
rhs="$(dirname ${generated_files[$2]/#./${my_prefix}})"
###echo "DBG: does ${file} depend on ${rhs}?"
for dep in ${deps[$1]}; do
###echo "DBG: checking against $dep"
if [[ "${dep}" == "${rhs}" ]]; then
###echo "DBG: = yes"
return 0
fi
done
###echo "DBG: = no"
return 1
}
function tsort {
visited[$1]=true
local j=0
for (( j=0; j<number; j++ )); do
if ! ${visited[${j}]}; then
if depends "$1" ${j}; then
tsort $j
fi
fi
done
result="${result} $1"
}
echo "Building dependencies"
for (( i=0; i<number; i++ )); do
###echo "DBG: considering ${generated_files[${i}]}"
if ! ${visited[${i}]}; then
###echo "DBG: tsorting ${generated_files[${i}]}"
tsort ${i}
fi
done
index=(${result})
haveindex=${index:-}
if [[ -z ${haveindex} ]]; then
echo No files found for $0
echo A previous run of $0 may have deleted all the files and then crashed.
echo Use 'touch' to create files named 'types.generated.go' listed as deleted in 'git status'
exit 1
fi
echo "Building codecgen"
CODECGEN="${PWD}/codecgen_binary"
go build -o "${CODECGEN}" ./vendor/github.com/ugorji/go/codec/codecgen
# Running codecgen fails if some of the files doesn't compile.
# Thus (since all the files are completely auto-generated and
# not required for the code to be compilable, we first remove
# them and the regenerate them.
for (( i=0; i < number; i++ )); do
rm -f "${generated_files[${i}]}"
done
# Generate files in the dependency order.
for current in "${index[@]}"; do
generated_file=${generated_files[${current}]}
initial_dir=${PWD}
file=${generated_file/\.generated\.go/.go}
echo "processing ${file}"
# codecgen work only if invoked from directory where the file
# is located.
pushd "$(dirname ${file})" > /dev/null
base_file=$(basename "${file}")
base_generated_file=$(basename "${generated_file}")
# We use '-d 1234' flag to have a deterministic output every time.
# The constant was just randomly chosen.
###echo "DBG: running ${CODECGEN} -d 1234 -o ${base_generated_file} ${base_file}"
${CODECGEN} -d 1234 -o "${base_generated_file}" "${base_file}"
# Add boilerplate at the beginning of the generated file.
sed 's/YEAR/2017/' "${initial_dir}/hack/boilerplate/boilerplate.go.txt" > "${base_generated_file}.tmp"
cat "${base_generated_file}" >> "${base_generated_file}.tmp"
mv "${base_generated_file}.tmp" "${base_generated_file}"
popd > /dev/null
done

@@ -1,7 +1,7 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
go build -o "${OS_OUTPUT_BINPATH}/conversion-gen" "${PRJ_PREFIX}/vendor/k8s.io/code-generator/cmd/conversion-gen"
go build -o "${OS_OUTPUT_BINPATH}/conversion-gen" "k8s.io/code-generator/cmd/conversion-gen"
${OS_OUTPUT_BINPATH}/conversion-gen \
--go-header-file "hack/boilerplate/boilerplate.go.txt" \

@@ -1,7 +1,7 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
go build -o "${OS_OUTPUT_BINPATH}/deepcopy-gen" "${PRJ_PREFIX}/vendor/k8s.io/code-generator/cmd/deepcopy-gen"
go build -o "${OS_OUTPUT_BINPATH}/deepcopy-gen" "k8s.io/code-generator/cmd/deepcopy-gen"
${OS_OUTPUT_BINPATH}/deepcopy-gen \
--go-header-file "hack/boilerplate/boilerplate.go.txt" \

@@ -1,7 +1,7 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
go build -o "${OS_OUTPUT_BINPATH}/defaulter-gen" "${PRJ_PREFIX}/vendor/k8s.io/code-generator/cmd/defaulter-gen"
go build -o "${OS_OUTPUT_BINPATH}/defaulter-gen" "k8s.io/code-generator/cmd/defaulter-gen"
${OS_OUTPUT_BINPATH}/defaulter-gen \
--go-header-file "hack/boilerplate/boilerplate.go.txt" \

hack/update-gofmt.sh Executable file

@@ -0,0 +1,49 @@
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
DESCHEDULER_ROOT=$(dirname "${BASH_SOURCE}")/..
GO_VERSION=($(go version))
if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.2|go1.3|go1.4|go1.5|go1.6|go1.7|go1.8|go1.9|go1.10|go1.11|go1.12|go1.13|go1.14') ]]; then
echo "Unknown go version '${GO_VERSION[2]}', skipping gofmt."
exit 1
fi
cd "${DESCHEDULER_ROOT}"
find_files() {
find . -not \( \
\( \
-wholename './output' \
-o -wholename './_output' \
-o -wholename './release' \
-o -wholename './target' \
-o -wholename './.git' \
-o -wholename '*/third_party/*' \
-o -wholename '*/Godeps/*' \
-o -wholename '*/vendor/*' \
\) -prune \
\) -name '*.go'
}
GOFMT="gofmt -s -w"
find_files | xargs $GOFMT -l
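
Note the -w in GOFMT: this update script rewrites nonconforming files in place, while the verify-style script changed further below only lists them and exits nonzero. To fix formatting locally:

hack/update-gofmt.sh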

hack/update-vendor.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/bash
# Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
go mod tidy
go mod vendor

@@ -23,8 +23,8 @@ DESCHEDULER_ROOT=$(dirname "${BASH_SOURCE}")/..
GO_VERSION=($(go version))
if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.2|go1.3|go1.4|go1.5|go1.6|go1.7|go1.8|go1.9|go1.10') ]]; then
echo "Unknown go version '${GO_VERSION}', skipping gofmt."
if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.2|go1.3|go1.4|go1.5|go1.6|go1.7|go1.8|go1.9|go1.10|go1.11|go1.12|go1.13|go1.14') ]]; then
echo "Unknown go version '${GO_VERSION[2]}', skipping gofmt."
exit 1
fi
@@ -45,7 +45,7 @@ find_files() {
\) -name '*.go'
}
GOFMT="gofmt -s"
GOFMT="gofmt -s"
bad_files=$(find_files | xargs $GOFMT -l)
if [[ -n "${bad_files}" ]]; then
echo "!!! '$GOFMT' needs to be run on the following files: "

hack/verify-vendor.sh Executable file

@@ -0,0 +1,88 @@
#!/bin/bash
# Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is mostly copied from the hack/verify-vendor.sh script located in k8s.io/kubernetes
set -o errexit
set -o nounset
set -o pipefail
source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
DESCHEDULER_ROOT=$(dirname "${BASH_SOURCE}")/..
mkdir -p "${DESCHEDULER_ROOT}/_tmp"
_tmpdir="$(mktemp -d "${DESCHEDULER_ROOT}/_tmp/kube-vendor.XXXXXX")"
if [[ -z ${KEEP_TMP:-} ]]; then
KEEP_TMP=false
fi
function cleanup {
# make go module dirs writeable
chmod -R +w "${_tmpdir}"
if [ "${KEEP_TMP}" == "true" ]; then
echo "Leaving ${_tmpdir} for you to examine or copy. Please delete it manually when finished. (rm -rf ${_tmpdir})"
else
echo "Removing ${_tmpdir}"
rm -rf "${_tmpdir}"
fi
}
trap "cleanup" EXIT
_deschedulertmp="${_tmpdir}"
mkdir -p "${_deschedulertmp}"
git archive --format=tar --prefix=descheduler/ "$(git write-tree)" | (cd "${_deschedulertmp}" && tar xf -)
_deschedulertmp="${_deschedulertmp}/descheduler"
pushd "${_deschedulertmp}" > /dev/null 2>&1
# Destroy deps in the copy of the kube tree
rm -rf ./vendor
# Recreate the vendor tree using the nice clean set we just downloaded
hack/update-vendor.sh
popd > /dev/null 2>&1
ret=0
pushd "${DESCHEDULER_ROOT}" > /dev/null 2>&1
# Test for diffs
if ! _out="$(diff -Naupr --ignore-matching-lines='^\s*\"GoVersion\":' go.mod "${_deschedulertmp}/go.mod")"; then
echo "Your go.mod file is different:" >&2
echo "${_out}" >&2
echo "Vendor Verify failed." >&2
echo "If you're seeing this locally, run the below command to fix your go.mod:" >&2
echo "hack/update-vendor.sh" >&2
ret=1
fi
if ! _out="$(diff -Naupr -x "BUILD" -x "AUTHORS*" -x "CONTRIBUTORS*" vendor "${_deschedulertmp}/vendor")"; then
echo "Your vendored results are different:" >&2
echo "${_out}" >&2
echo "Vendor Verify failed." >&2
echo "${_out}" > vendordiff.patch
echo "If you're seeing this locally, run the below command to fix your directories:" >&2
echo "hack/update-vendor.sh" >&2
ret=1
fi
popd > /dev/null 2>&1
if [[ ${ret} -gt 0 ]]; then
exit ${ret}
fi
echo "Vendor Verified."
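
In CI this fails the build on any go.mod or vendor/ drift; when debugging locally, KEEP_TMP preserves the scratch checkout the script would otherwise delete:

hack/verify-vendor.sh                 # normal run, temp tree removed on exit
KEEP_TMP=true hack/verify-vendor.sh   # keep the _tmp working copy for inspection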

kubernetes/configmap.yaml Normal file

@@ -0,0 +1,28 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: descheduler-policy-configmap
namespace: kube-system
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"RemoveDuplicates":
enabled: true
"RemovePodsViolatingInterPodAntiAffinity":
enabled: true
"LowNodeUtilization":
enabled: true
params:
nodeResourceUtilizationThresholds:
thresholds:
"cpu" : 20
"memory": 20
"pods": 20
targetThresholds:
"cpu" : 50
"memory": 50
"pods": 50

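Reading the LowNodeUtilization numbers: a node below all of the thresholds values (20%) is treated as underutilized, and pods are evicted from nodes exceeding the targetThresholds values (50%) until usage drops back under those targets; that interpretation follows the strategy's documented behavior, not anything stated in this manifest. Loading the policy is a plain apply:

kubectl apply -f kubernetes/configmap.yaml
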
kubernetes/cronjob.yaml Normal file

@@ -0,0 +1,35 @@
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: descheduler-cronjob
namespace: kube-system
spec:
schedule: "*/2 * * * *"
concurrencyPolicy: "Forbid"
jobTemplate:
spec:
template:
metadata:
name: descheduler-pod
spec:
priorityClassName: system-cluster-critical
containers:
- name: descheduler
image: us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.18.0
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--v"
- "3"
restartPolicy: "Never"
serviceAccountName: descheduler-sa
volumes:
- name: policy-volume
configMap:
name: descheduler-policy-configmap
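
Once the RBAC objects further below exist, each tick of the */2 schedule spawns a Job whose pod performs a single descheduling pass; a typical way to apply and watch it:

kubectl apply -f kubernetes/cronjob.yaml
kubectl get jobs -n kube-system --watch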

kubernetes/job.yaml Normal file

@@ -0,0 +1,33 @@
---
apiVersion: batch/v1
kind: Job
metadata:
name: descheduler-job
namespace: kube-system
spec:
parallelism: 1
completions: 1
template:
metadata:
name: descheduler-pod
spec:
priorityClassName: system-cluster-critical
containers:
- name: descheduler
image: us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.18.0
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--v"
- "3"
restartPolicy: "Never"
serviceAccountName: descheduler-sa
volumes:
- name: policy-volume
configMap:
name: descheduler-policy-configmap

kubernetes/rbac.yaml Normal file

@@ -0,0 +1,40 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: descheduler-cluster-role
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: descheduler-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: descheduler-cluster-role-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: descheduler-cluster-role
subjects:
- name: descheduler-sa
  kind: ServiceAccount
  namespace: kube-system


@@ -16,4 +16,4 @@ limitations under the License.
// +k8s:deepcopy-gen=package,register
package api // import "github.com/kubernetes-incubator/descheduler/pkg/api"
package api // import "sigs.k8s.io/descheduler/pkg/api"


@@ -1,48 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package install installs the descheduler's policy API group.
package install
import (
"k8s.io/apimachinery/pkg/apimachinery/announced"
"k8s.io/apimachinery/pkg/apimachinery/registered"
"k8s.io/apimachinery/pkg/runtime"
deschedulerapi "github.com/kubernetes-incubator/descheduler/pkg/api"
"github.com/kubernetes-incubator/descheduler/pkg/api/v1alpha1"
deschedulerscheme "github.com/kubernetes-incubator/descheduler/pkg/descheduler/scheme"
)
func init() {
Install(deschedulerscheme.GroupFactoryRegistry, deschedulerscheme.Registry, deschedulerscheme.Scheme)
}
// Install registers the API group and adds types to a scheme
func Install(groupFactoryRegistry announced.APIGroupFactoryRegistry, registry *registered.APIRegistrationManager, scheme *runtime.Scheme) {
if err := announced.NewGroupMetaFactory(
&announced.GroupMetaFactoryArgs{
GroupName: deschedulerapi.GroupName,
VersionPreferenceOrder: []string{v1alpha1.SchemeGroupVersion.Version},
AddInternalObjectsToScheme: deschedulerapi.AddToScheme,
},
announced.VersionToSchemeFunc{
v1alpha1.SchemeGroupVersion.Version: v1alpha1.AddToScheme,
},
).Announce(groupFactoryRegistry).RegisterAndEnable(registry, scheme); err != nil {
panic(err)
}
}


@@ -19,11 +19,14 @@ package api
import (
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"

    "sigs.k8s.io/descheduler/pkg/descheduler/scheme"
)

var (
    SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
    AddToScheme   = SchemeBuilder.AddToScheme
    Scheme        = runtime.NewScheme()
)

// GroupName is the group name used in this package

@@ -32,6 +35,12 @@ const GroupName = "descheduler"

// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal}

func init() {
    if err := addKnownTypes(scheme.Scheme); err != nil {
        panic(err)
    }
}

// Kind takes an unqualified kind and returns a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
    return SchemeGroupVersion.WithKind(kind).GroupKind()

File diff suppressed because it is too large.


@@ -17,7 +17,7 @@ limitations under the License.
package api

import (
    "k8s.io/api/core/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

@@ -41,13 +41,16 @@ type DeschedulerStrategy struct {
    Weight int

    // Strategy parameters
    Params StrategyParameters
    Params *StrategyParameters
}

// Only one of its members may be specified
type StrategyParameters struct {
    NodeResourceUtilizationThresholds NodeResourceUtilizationThresholds
    NodeResourceUtilizationThresholds *NodeResourceUtilizationThresholds
    NodeAffinityType                  []string
    PodsHavingTooManyRestarts         *PodsHavingTooManyRestarts
    MaxPodLifeTimeSeconds             *uint
    RemoveDuplicates                  *RemoveDuplicates
}

type Percentage float64

@@ -58,3 +61,12 @@ type NodeResourceUtilizationThresholds struct {
    TargetThresholds ResourceThresholds
    NumberOfNodes    int
}

type PodsHavingTooManyRestarts struct {
    PodRestartThreshold     int32
    IncludingInitContainers bool
}

type RemoveDuplicates struct {
    ExcludeOwnerKinds []string
}
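
With Params and NodeResourceUtilizationThresholds changed from values to pointers, a policy can now omit them entirely, so consumers must nil-check before dereferencing. A minimal sketch of such a guard, assuming illustrative default thresholds (resolveThresholds is a hypothetical helper, not part of the descheduler; field names are taken from the types above and the sample policy):

package example

import (
    "sigs.k8s.io/descheduler/pkg/api"
)

// resolveThresholds is a hypothetical helper showing the nil checks the
// pointer fields above now require: a nil Params or a nil
// NodeResourceUtilizationThresholds means "not configured", which the old
// value types could not express.
func resolveThresholds(strategy api.DeschedulerStrategy) api.ResourceThresholds {
    defaults := api.ResourceThresholds{"cpu": 20, "memory": 20, "pods": 20} // illustrative defaults
    if strategy.Params == nil || strategy.Params.NodeResourceUtilizationThresholds == nil {
        return defaults
    }
    return strategy.Params.NodeResourceUtilizationThresholds.Thresholds
}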


@@ -15,10 +15,10 @@ limitations under the License.
*/
// +k8s:deepcopy-gen=package,register
// +k8s:conversion-gen=github.com/kubernetes-incubator/descheduler/pkg/api
// +k8s:conversion-gen=sigs.k8s.io/descheduler/pkg/api
// +k8s:defaulter-gen=TypeMeta
// Package v1alpha1 is the v1alpha1 version of the descheduler API
// +groupName=descheduler
package v1alpha1 // import "github.com/kubernetes-incubator/descheduler/pkg/api/v1alpha1"
package v1alpha1 // import "sigs.k8s.io/descheduler/pkg/api/v1alpha1"

File diff suppressed because it is too large.


@@ -17,7 +17,7 @@ limitations under the License.
package v1alpha1

import (
    "k8s.io/api/core/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

@@ -41,13 +41,16 @@ type DeschedulerStrategy struct {
    Weight int `json:"weight,omitempty"`

    // Strategy parameters
    Params StrategyParameters `json:"params,omitempty"`
    Params *StrategyParameters `json:"params,omitempty"`
}

// Only one of its members may be specified
type StrategyParameters struct {
    NodeResourceUtilizationThresholds NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
    NodeAffinityType []string `json:"nodeAffinityType,omitempty"`
    NodeResourceUtilizationThresholds *NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
    NodeAffinityType []string `json:"nodeAffinityType,omitempty"`
    PodsHavingTooManyRestarts *PodsHavingTooManyRestarts `json:"podsHavingTooManyRestarts,omitempty"`
    MaxPodLifeTimeSeconds *uint `json:"maxPodLifeTimeSeconds,omitempty"`
    RemoveDuplicates *RemoveDuplicates `json:"removeDuplicates,omitempty"`
}

type Percentage float64

@@ -58,3 +61,12 @@ type NodeResourceUtilizationThresholds struct {
    TargetThresholds ResourceThresholds `json:"targetThresholds,omitempty"`
    NumberOfNodes int `json:"numberOfNodes,omitempty"`
}

type PodsHavingTooManyRestarts struct {
    PodRestartThreshold int32 `json:"podRestartThreshold,omitempty"`
    IncludingInitContainers bool `json:"includingInitContainers,omitempty"`
}

type RemoveDuplicates struct {
    ExcludeOwnerKinds []string `json:"excludeOwnerKinds,omitempty"`
}
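
The json tags above, combined with the pointer change, mean a strategy declared without a params block decodes with Params == nil. A small round-trip illustrating this, assuming sigs.k8s.io/yaml for decoding (the descheduler's own policy loader may differ); the policy text is an illustrative fragment:

package main

import (
    "fmt"

    "sigs.k8s.io/descheduler/pkg/api/v1alpha1"
    "sigs.k8s.io/yaml"
)

func main() {
    // RemoveDuplicates is enabled but carries no params block, so the
    // optional pointer fields should come back as nil after decoding.
    policy := []byte(`
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
`)
    var p v1alpha1.DeschedulerPolicy
    if err := yaml.Unmarshal(policy, &p); err != nil {
        panic(err)
    }
    s := p.Strategies["RemoveDuplicates"]
    fmt.Println(s.Enabled, s.Params == nil) // true true
}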


@@ -1,7 +1,7 @@
// +build !ignore_autogenerated

/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

@@ -16,16 +16,16 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

// This file was autogenerated by conversion-gen. Do not edit it manually!
// Code generated by conversion-gen. DO NOT EDIT.

package v1alpha1

import (
    unsafe "unsafe"

    api "github.com/kubernetes-incubator/descheduler/pkg/api"
    conversion "k8s.io/apimachinery/pkg/conversion"
    runtime "k8s.io/apimachinery/pkg/runtime"
    api "sigs.k8s.io/descheduler/pkg/api"
)

func init() {

@@ -34,17 +34,68 @@ func init() {

// RegisterConversions adds conversion functions to the given scheme.
// Public to allow building arbitrary schemes.
func RegisterConversions(scheme *runtime.Scheme) error {
    return scheme.AddGeneratedConversionFuncs(
        Convert_v1alpha1_DeschedulerPolicy_To_api_DeschedulerPolicy,
        Convert_api_DeschedulerPolicy_To_v1alpha1_DeschedulerPolicy,
        Convert_v1alpha1_DeschedulerStrategy_To_api_DeschedulerStrategy,
        Convert_api_DeschedulerStrategy_To_v1alpha1_DeschedulerStrategy,
        Convert_v1alpha1_NodeResourceUtilizationThresholds_To_api_NodeResourceUtilizationThresholds,
        Convert_api_NodeResourceUtilizationThresholds_To_v1alpha1_NodeResourceUtilizationThresholds,
        Convert_v1alpha1_StrategyParameters_To_api_StrategyParameters,
        Convert_api_StrategyParameters_To_v1alpha1_StrategyParameters,
    )
func RegisterConversions(s *runtime.Scheme) error {
    if err := s.AddGeneratedConversionFunc((*DeschedulerPolicy)(nil), (*api.DeschedulerPolicy)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_DeschedulerPolicy_To_api_DeschedulerPolicy(a.(*DeschedulerPolicy), b.(*api.DeschedulerPolicy), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*api.DeschedulerPolicy)(nil), (*DeschedulerPolicy)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_api_DeschedulerPolicy_To_v1alpha1_DeschedulerPolicy(a.(*api.DeschedulerPolicy), b.(*DeschedulerPolicy), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*DeschedulerStrategy)(nil), (*api.DeschedulerStrategy)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_DeschedulerStrategy_To_api_DeschedulerStrategy(a.(*DeschedulerStrategy), b.(*api.DeschedulerStrategy), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*api.DeschedulerStrategy)(nil), (*DeschedulerStrategy)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_api_DeschedulerStrategy_To_v1alpha1_DeschedulerStrategy(a.(*api.DeschedulerStrategy), b.(*DeschedulerStrategy), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*NodeResourceUtilizationThresholds)(nil), (*api.NodeResourceUtilizationThresholds)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_NodeResourceUtilizationThresholds_To_api_NodeResourceUtilizationThresholds(a.(*NodeResourceUtilizationThresholds), b.(*api.NodeResourceUtilizationThresholds), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*api.NodeResourceUtilizationThresholds)(nil), (*NodeResourceUtilizationThresholds)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_api_NodeResourceUtilizationThresholds_To_v1alpha1_NodeResourceUtilizationThresholds(a.(*api.NodeResourceUtilizationThresholds), b.(*NodeResourceUtilizationThresholds), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*PodsHavingTooManyRestarts)(nil), (*api.PodsHavingTooManyRestarts)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_PodsHavingTooManyRestarts_To_api_PodsHavingTooManyRestarts(a.(*PodsHavingTooManyRestarts), b.(*api.PodsHavingTooManyRestarts), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*api.PodsHavingTooManyRestarts)(nil), (*PodsHavingTooManyRestarts)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_api_PodsHavingTooManyRestarts_To_v1alpha1_PodsHavingTooManyRestarts(a.(*api.PodsHavingTooManyRestarts), b.(*PodsHavingTooManyRestarts), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*RemoveDuplicates)(nil), (*api.RemoveDuplicates)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_RemoveDuplicates_To_api_RemoveDuplicates(a.(*RemoveDuplicates), b.(*api.RemoveDuplicates), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*api.RemoveDuplicates)(nil), (*RemoveDuplicates)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_api_RemoveDuplicates_To_v1alpha1_RemoveDuplicates(a.(*api.RemoveDuplicates), b.(*RemoveDuplicates), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*StrategyParameters)(nil), (*api.StrategyParameters)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_StrategyParameters_To_api_StrategyParameters(a.(*StrategyParameters), b.(*api.StrategyParameters), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*api.StrategyParameters)(nil), (*StrategyParameters)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_api_StrategyParameters_To_v1alpha1_StrategyParameters(a.(*api.StrategyParameters), b.(*StrategyParameters), scope)
    }); err != nil {
        return err
    }
    return nil
}

func autoConvert_v1alpha1_DeschedulerPolicy_To_api_DeschedulerPolicy(in *DeschedulerPolicy, out *api.DeschedulerPolicy, s conversion.Scope) error {

@@ -70,9 +121,7 @@ func Convert_api_DeschedulerPolicy_To_v1alpha1_DeschedulerPolicy(in *api.Desched

func autoConvert_v1alpha1_DeschedulerStrategy_To_api_DeschedulerStrategy(in *DeschedulerStrategy, out *api.DeschedulerStrategy, s conversion.Scope) error {
    out.Enabled = in.Enabled
    out.Weight = in.Weight
    if err := Convert_v1alpha1_StrategyParameters_To_api_StrategyParameters(&in.Params, &out.Params, s); err != nil {
        return err
    }
    out.Params = (*api.StrategyParameters)(unsafe.Pointer(in.Params))
    return nil
}

@@ -84,9 +133,7 @@ func Convert_v1alpha1_DeschedulerStrategy_To_api_DeschedulerStrategy(in *Desched

func autoConvert_api_DeschedulerStrategy_To_v1alpha1_DeschedulerStrategy(in *api.DeschedulerStrategy, out *DeschedulerStrategy, s conversion.Scope) error {
    out.Enabled = in.Enabled
    out.Weight = in.Weight
    if err := Convert_api_StrategyParameters_To_v1alpha1_StrategyParameters(&in.Params, &out.Params, s); err != nil {
        return err
    }
    out.Params = (*StrategyParameters)(unsafe.Pointer(in.Params))
    return nil
}

@@ -119,11 +166,54 @@ func Convert_api_NodeResourceUtilizationThresholds_To_v1alpha1_NodeResourceUtili
    return autoConvert_api_NodeResourceUtilizationThresholds_To_v1alpha1_NodeResourceUtilizationThresholds(in, out, s)
}

func autoConvert_v1alpha1_PodsHavingTooManyRestarts_To_api_PodsHavingTooManyRestarts(in *PodsHavingTooManyRestarts, out *api.PodsHavingTooManyRestarts, s conversion.Scope) error {
    out.PodRestartThreshold = in.PodRestartThreshold
    out.IncludingInitContainers = in.IncludingInitContainers
    return nil
}

// Convert_v1alpha1_PodsHavingTooManyRestarts_To_api_PodsHavingTooManyRestarts is an autogenerated conversion function.
func Convert_v1alpha1_PodsHavingTooManyRestarts_To_api_PodsHavingTooManyRestarts(in *PodsHavingTooManyRestarts, out *api.PodsHavingTooManyRestarts, s conversion.Scope) error {
    return autoConvert_v1alpha1_PodsHavingTooManyRestarts_To_api_PodsHavingTooManyRestarts(in, out, s)
}

func autoConvert_api_PodsHavingTooManyRestarts_To_v1alpha1_PodsHavingTooManyRestarts(in *api.PodsHavingTooManyRestarts, out *PodsHavingTooManyRestarts, s conversion.Scope) error {
    out.PodRestartThreshold = in.PodRestartThreshold
    out.IncludingInitContainers = in.IncludingInitContainers
    return nil
}

// Convert_api_PodsHavingTooManyRestarts_To_v1alpha1_PodsHavingTooManyRestarts is an autogenerated conversion function.
func Convert_api_PodsHavingTooManyRestarts_To_v1alpha1_PodsHavingTooManyRestarts(in *api.PodsHavingTooManyRestarts, out *PodsHavingTooManyRestarts, s conversion.Scope) error {
    return autoConvert_api_PodsHavingTooManyRestarts_To_v1alpha1_PodsHavingTooManyRestarts(in, out, s)
}

func autoConvert_v1alpha1_RemoveDuplicates_To_api_RemoveDuplicates(in *RemoveDuplicates, out *api.RemoveDuplicates, s conversion.Scope) error {
    out.ExcludeOwnerKinds = *(*[]string)(unsafe.Pointer(&in.ExcludeOwnerKinds))
    return nil
}

// Convert_v1alpha1_RemoveDuplicates_To_api_RemoveDuplicates is an autogenerated conversion function.
func Convert_v1alpha1_RemoveDuplicates_To_api_RemoveDuplicates(in *RemoveDuplicates, out *api.RemoveDuplicates, s conversion.Scope) error {
    return autoConvert_v1alpha1_RemoveDuplicates_To_api_RemoveDuplicates(in, out, s)
}

func autoConvert_api_RemoveDuplicates_To_v1alpha1_RemoveDuplicates(in *api.RemoveDuplicates, out *RemoveDuplicates, s conversion.Scope) error {
    out.ExcludeOwnerKinds = *(*[]string)(unsafe.Pointer(&in.ExcludeOwnerKinds))
    return nil
}

// Convert_api_RemoveDuplicates_To_v1alpha1_RemoveDuplicates is an autogenerated conversion function.
func Convert_api_RemoveDuplicates_To_v1alpha1_RemoveDuplicates(in *api.RemoveDuplicates, out *RemoveDuplicates, s conversion.Scope) error {
    return autoConvert_api_RemoveDuplicates_To_v1alpha1_RemoveDuplicates(in, out, s)
}

func autoConvert_v1alpha1_StrategyParameters_To_api_StrategyParameters(in *StrategyParameters, out *api.StrategyParameters, s conversion.Scope) error {
    if err := Convert_v1alpha1_NodeResourceUtilizationThresholds_To_api_NodeResourceUtilizationThresholds(&in.NodeResourceUtilizationThresholds, &out.NodeResourceUtilizationThresholds, s); err != nil {
        return err
    }
    out.NodeResourceUtilizationThresholds = (*api.NodeResourceUtilizationThresholds)(unsafe.Pointer(in.NodeResourceUtilizationThresholds))
    out.NodeAffinityType = *(*[]string)(unsafe.Pointer(&in.NodeAffinityType))
    out.PodsHavingTooManyRestarts = (*api.PodsHavingTooManyRestarts)(unsafe.Pointer(in.PodsHavingTooManyRestarts))
    out.MaxPodLifeTimeSeconds = (*uint)(unsafe.Pointer(in.MaxPodLifeTimeSeconds))
    out.RemoveDuplicates = (*api.RemoveDuplicates)(unsafe.Pointer(in.RemoveDuplicates))
    return nil
}

@@ -133,10 +223,11 @@ func Convert_v1alpha1_StrategyParameters_To_api_StrategyParameters(in *StrategyP
}

func autoConvert_api_StrategyParameters_To_v1alpha1_StrategyParameters(in *api.StrategyParameters, out *StrategyParameters, s conversion.Scope) error {
    if err := Convert_api_NodeResourceUtilizationThresholds_To_v1alpha1_NodeResourceUtilizationThresholds(&in.NodeResourceUtilizationThresholds, &out.NodeResourceUtilizationThresholds, s); err != nil {
        return err
    }
    out.NodeResourceUtilizationThresholds = (*NodeResourceUtilizationThresholds)(unsafe.Pointer(in.NodeResourceUtilizationThresholds))
    out.NodeAffinityType = *(*[]string)(unsafe.Pointer(&in.NodeAffinityType))
    out.PodsHavingTooManyRestarts = (*PodsHavingTooManyRestarts)(unsafe.Pointer(in.PodsHavingTooManyRestarts))
    out.MaxPodLifeTimeSeconds = (*uint)(unsafe.Pointer(in.MaxPodLifeTimeSeconds))
    out.RemoveDuplicates = (*RemoveDuplicates)(unsafe.Pointer(in.RemoveDuplicates))
    return nil
}
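
The regenerated converters replace field-by-field conversion with direct unsafe.Pointer casts, which is valid only because the internal and v1alpha1 structs share an identical memory layout. A toy demonstration of the pattern (the two param types here are stand-ins, not the real API types):

package main

import (
    "fmt"
    "unsafe"
)

// Stand-ins for the internal and v1alpha1 StrategyParameters: same fields,
// same order, and therefore the same memory layout.
type internalParams struct{ MaxPodLifeTimeSeconds *uint }
type v1alpha1Params struct{ MaxPodLifeTimeSeconds *uint }

func main() {
    ttl := uint(3600)
    in := &v1alpha1Params{MaxPodLifeTimeSeconds: &ttl}

    // The conversion-gen trick: reinterpret the pointer instead of copying
    // each field. Safe only while the layouts stay identical.
    out := (*internalParams)(unsafe.Pointer(in))
    fmt.Println(*out.MaxPodLifeTimeSeconds) // 3600
}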


@@ -1,7 +1,7 @@
// +build !ignore_autogenerated

/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

@@ -16,7 +16,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

// This file was autogenerated by deepcopy-gen. Do not edit it manually!
// Code generated by deepcopy-gen. DO NOT EDIT.

package v1alpha1

@@ -32,9 +32,7 @@ func (in *DeschedulerPolicy) DeepCopyInto(out *DeschedulerPolicy) {
        in, out := &in.Strategies, &out.Strategies
        *out = make(StrategyList, len(*in))
        for key, val := range *in {
            newVal := new(DeschedulerStrategy)
            val.DeepCopyInto(newVal)
            (*out)[key] = *newVal
            (*out)[key] = *val.DeepCopy()
        }
    }
    return

@@ -54,15 +52,18 @@ func (in *DeschedulerPolicy) DeepCopy() *DeschedulerPolicy {

func (in *DeschedulerPolicy) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    } else {
        return nil
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeschedulerStrategy) DeepCopyInto(out *DeschedulerStrategy) {
    *out = *in
    in.Params.DeepCopyInto(&out.Params)
    if in.Params != nil {
        in, out := &in.Params, &out.Params
        *out = new(StrategyParameters)
        (*in).DeepCopyInto(*out)
    }
    return
}

@@ -106,15 +107,115 @@ func (in *NodeResourceUtilizationThresholds) DeepCopy() *NodeResourceUtilization
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodsHavingTooManyRestarts) DeepCopyInto(out *PodsHavingTooManyRestarts) {
    *out = *in
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodsHavingTooManyRestarts.
func (in *PodsHavingTooManyRestarts) DeepCopy() *PodsHavingTooManyRestarts {
    if in == nil {
        return nil
    }
    out := new(PodsHavingTooManyRestarts)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RemoveDuplicates) DeepCopyInto(out *RemoveDuplicates) {
    *out = *in
    if in.ExcludeOwnerKinds != nil {
        in, out := &in.ExcludeOwnerKinds, &out.ExcludeOwnerKinds
        *out = make([]string, len(*in))
        copy(*out, *in)
    }
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RemoveDuplicates.
func (in *RemoveDuplicates) DeepCopy() *RemoveDuplicates {
    if in == nil {
        return nil
    }
    out := new(RemoveDuplicates)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in ResourceThresholds) DeepCopyInto(out *ResourceThresholds) {
    {
        in := &in
        *out = make(ResourceThresholds, len(*in))
        for key, val := range *in {
            (*out)[key] = val
        }
        return
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceThresholds.
func (in ResourceThresholds) DeepCopy() ResourceThresholds {
    if in == nil {
        return nil
    }
    out := new(ResourceThresholds)
    in.DeepCopyInto(out)
    return *out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in StrategyList) DeepCopyInto(out *StrategyList) {
    {
        in := &in
        *out = make(StrategyList, len(*in))
        for key, val := range *in {
            (*out)[key] = *val.DeepCopy()
        }
        return
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StrategyList.
func (in StrategyList) DeepCopy() StrategyList {
    if in == nil {
        return nil
    }
    out := new(StrategyList)
    in.DeepCopyInto(out)
    return *out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *StrategyParameters) DeepCopyInto(out *StrategyParameters) {
    *out = *in
    in.NodeResourceUtilizationThresholds.DeepCopyInto(&out.NodeResourceUtilizationThresholds)
    if in.NodeResourceUtilizationThresholds != nil {
        in, out := &in.NodeResourceUtilizationThresholds, &out.NodeResourceUtilizationThresholds
        *out = new(NodeResourceUtilizationThresholds)
        (*in).DeepCopyInto(*out)
    }
    if in.NodeAffinityType != nil {
        in, out := &in.NodeAffinityType, &out.NodeAffinityType
        *out = make([]string, len(*in))
        copy(*out, *in)
    }
    if in.PodsHavingTooManyRestarts != nil {
        in, out := &in.PodsHavingTooManyRestarts, &out.PodsHavingTooManyRestarts
        *out = new(PodsHavingTooManyRestarts)
        **out = **in
    }
    if in.MaxPodLifeTimeSeconds != nil {
        in, out := &in.MaxPodLifeTimeSeconds, &out.MaxPodLifeTimeSeconds
        *out = new(uint)
        **out = **in
    }
    if in.RemoveDuplicates != nil {
        in, out := &in.RemoveDuplicates, &out.RemoveDuplicates
        *out = new(RemoveDuplicates)
        (*in).DeepCopyInto(*out)
    }
    return
}
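
Because StrategyParameters now holds pointers and slices, a plain struct assignment aliases the referenced data; the generated DeepCopy allocates fresh copies. A short sketch of the difference (values are illustrative; DeepCopy on StrategyParameters is assumed to be generated alongside the DeepCopyInto shown above):

package main

import (
    "fmt"

    "sigs.k8s.io/descheduler/pkg/api/v1alpha1"
)

func main() {
    orig := v1alpha1.StrategyParameters{
        RemoveDuplicates: &v1alpha1.RemoveDuplicates{ExcludeOwnerKinds: []string{"ReplicaSet"}},
    }

    shallow := orig         // copies the pointer, so the slice is shared
    deep := orig.DeepCopy() // allocates a new RemoveDuplicates and slice

    orig.RemoveDuplicates.ExcludeOwnerKinds[0] = "Job"
    fmt.Println(shallow.RemoveDuplicates.ExcludeOwnerKinds[0]) // "Job" (aliased)
    fmt.Println(deep.RemoveDuplicates.ExcludeOwnerKinds[0])    // "ReplicaSet"
}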


@@ -1,7 +1,7 @@
// +build !ignore_autogenerated

/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

@@ -16,7 +16,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

// This file was autogenerated by deepcopy-gen. Do not edit it manually!
// Code generated by deepcopy-gen. DO NOT EDIT.

package api

@@ -32,9 +32,7 @@ func (in *DeschedulerPolicy) DeepCopyInto(out *DeschedulerPolicy) {
        in, out := &in.Strategies, &out.Strategies
        *out = make(StrategyList, len(*in))
        for key, val := range *in {
            newVal := new(DeschedulerStrategy)
            val.DeepCopyInto(newVal)
            (*out)[key] = *newVal
            (*out)[key] = *val.DeepCopy()
        }
    }
    return

@@ -54,15 +52,18 @@ func (in *DeschedulerPolicy) DeepCopy() *DeschedulerPolicy {

func (in *DeschedulerPolicy) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    } else {
        return nil
    }
    return nil
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeschedulerStrategy) DeepCopyInto(out *DeschedulerStrategy) {
    *out = *in
    in.Params.DeepCopyInto(&out.Params)
    if in.Params != nil {
        in, out := &in.Params, &out.Params
        *out = new(StrategyParameters)
        (*in).DeepCopyInto(*out)
    }
    return
}

@@ -106,15 +107,115 @@ func (in *NodeResourceUtilizationThresholds) DeepCopy() *NodeResourceUtilization
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodsHavingTooManyRestarts) DeepCopyInto(out *PodsHavingTooManyRestarts) {
    *out = *in
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodsHavingTooManyRestarts.
func (in *PodsHavingTooManyRestarts) DeepCopy() *PodsHavingTooManyRestarts {
    if in == nil {
        return nil
    }
    out := new(PodsHavingTooManyRestarts)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RemoveDuplicates) DeepCopyInto(out *RemoveDuplicates) {
    *out = *in
    if in.ExcludeOwnerKinds != nil {
        in, out := &in.ExcludeOwnerKinds, &out.ExcludeOwnerKinds
        *out = make([]string, len(*in))
        copy(*out, *in)
    }
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RemoveDuplicates.
func (in *RemoveDuplicates) DeepCopy() *RemoveDuplicates {
    if in == nil {
        return nil
    }
    out := new(RemoveDuplicates)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in ResourceThresholds) DeepCopyInto(out *ResourceThresholds) {
    {
        in := &in
        *out = make(ResourceThresholds, len(*in))
        for key, val := range *in {
            (*out)[key] = val
        }
        return
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceThresholds.
func (in ResourceThresholds) DeepCopy() ResourceThresholds {
    if in == nil {
        return nil
    }
    out := new(ResourceThresholds)
    in.DeepCopyInto(out)
    return *out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in StrategyList) DeepCopyInto(out *StrategyList) {
    {
        in := &in
        *out = make(StrategyList, len(*in))
        for key, val := range *in {
            (*out)[key] = *val.DeepCopy()
        }
        return
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StrategyList.
func (in StrategyList) DeepCopy() StrategyList {
    if in == nil {
        return nil
    }
    out := new(StrategyList)
    in.DeepCopyInto(out)
    return *out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *StrategyParameters) DeepCopyInto(out *StrategyParameters) {
    *out = *in
    in.NodeResourceUtilizationThresholds.DeepCopyInto(&out.NodeResourceUtilizationThresholds)
    if in.NodeResourceUtilizationThresholds != nil {
        in, out := &in.NodeResourceUtilizationThresholds, &out.NodeResourceUtilizationThresholds
        *out = new(NodeResourceUtilizationThresholds)
        (*in).DeepCopyInto(*out)
    }
    if in.NodeAffinityType != nil {
        in, out := &in.NodeAffinityType, &out.NodeAffinityType
        *out = make([]string, len(*in))
        copy(*out, *in)
    }
    if in.PodsHavingTooManyRestarts != nil {
        in, out := &in.PodsHavingTooManyRestarts, &out.PodsHavingTooManyRestarts
        *out = new(PodsHavingTooManyRestarts)
        **out = **in
    }
    if in.MaxPodLifeTimeSeconds != nil {
        in, out := &in.MaxPodLifeTimeSeconds, &out.MaxPodLifeTimeSeconds
        *out = new(uint)
        **out = **in
    }
    if in.RemoveDuplicates != nil {
        in, out := &in.RemoveDuplicates, &out.RemoveDuplicates
        *out = new(RemoveDuplicates)
        (*in).DeepCopyInto(*out)
    }
    return
}


@@ -16,4 +16,4 @@ limitations under the License.
// +k8s:deepcopy-gen=package,register
package componentconfig // import "github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig"
package componentconfig // import "sigs.k8s.io/descheduler/pkg/apis/componentconfig"


@@ -1,48 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package install installs the descheduler's componentconfig API group.
package install

import (
    "k8s.io/apimachinery/pkg/apimachinery/announced"
    "k8s.io/apimachinery/pkg/apimachinery/registered"
    "k8s.io/apimachinery/pkg/runtime"

    "github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig"
    "github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig/v1alpha1"
    deschedulerscheme "github.com/kubernetes-incubator/descheduler/pkg/descheduler/scheme"
)

func init() {
    Install(deschedulerscheme.GroupFactoryRegistry, deschedulerscheme.Registry, deschedulerscheme.Scheme)
}

// Install registers the API group and adds types to a scheme
func Install(groupFactoryRegistry announced.APIGroupFactoryRegistry, registry *registered.APIRegistrationManager, scheme *runtime.Scheme) {
    if err := announced.NewGroupMetaFactory(
        &announced.GroupMetaFactoryArgs{
            GroupName:                  componentconfig.GroupName,
            VersionPreferenceOrder:     []string{v1alpha1.SchemeGroupVersion.Version},
            AddInternalObjectsToScheme: componentconfig.AddToScheme,
        },
        announced.VersionToSchemeFunc{
            v1alpha1.SchemeGroupVersion.Version: v1alpha1.AddToScheme,
        },
    ).Announce(groupFactoryRegistry).RegisterAndEnable(registry, scheme); err != nil {
        panic(err)
    }
}


@@ -19,6 +19,8 @@ package componentconfig
import (
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"

    "sigs.k8s.io/descheduler/pkg/descheduler/scheme"
)

var (

@@ -32,6 +34,12 @@ const GroupName = "deschedulercomponentconfig"

// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal}

func init() {
    if err := addKnownTypes(scheme.Scheme); err != nil {
        panic(err)
    }
}

// Kind takes an unqualified kind and returns a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
    return SchemeGroupVersion.WithKind(kind).GroupKind()


@@ -1,629 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by codecgen - DO NOT EDIT.
package componentconfig
import (
"errors"
"fmt"
codec1978 "github.com/ugorji/go/codec"
pkg1_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"reflect"
"runtime"
time "time"
)
const (
// ----- content types ----
codecSelferCcUTF81234 = 1
codecSelferCcRAW1234 = 0
// ----- value types used ----
codecSelferValueTypeArray1234 = 10
codecSelferValueTypeMap1234 = 9
codecSelferValueTypeString1234 = 6
codecSelferValueTypeInt1234 = 2
codecSelferValueTypeUint1234 = 3
codecSelferValueTypeFloat1234 = 4
)
var (
codecSelferBitsize1234 = uint8(reflect.TypeOf(uint(0)).Bits())
errCodecSelferOnlyMapOrArrayEncodeToStruct1234 = errors.New(`only encoded map or array can be decoded into a struct`)
)
type codecSelfer1234 struct{}
func init() {
if codec1978.GenVersion != 8 {
_, file, _, _ := runtime.Caller(0)
err := fmt.Errorf("codecgen version mismatch: current: %v, need %v. Re-generate file: %v",
8, codec1978.GenVersion, file)
panic(err)
}
if false { // reference the types, but skip this branch at build/run time
var v0 pkg1_v1.TypeMeta
var v1 time.Duration
_, _ = v0, v1
}
}
func (x *DeschedulerConfiguration) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
if x == nil {
r.EncodeNil()
} else {
yym1 := z.EncBinary()
_ = yym1
if false {
} else if yyxt1 := z.Extension(z.I2Rtid(x)); yyxt1 != nil {
z.EncExtension(x, yyxt1)
} else {
yysep2 := !z.EncBinary()
yy2arr2 := z.EncBasicHandle().StructToArray
var yyq2 [8]bool
_ = yyq2
_, _ = yysep2, yy2arr2
const yyr2 bool = false
yyq2[0] = x.Kind != ""
yyq2[1] = x.APIVersion != ""
if yyr2 || yy2arr2 {
r.WriteArrayStart(8)
} else {
var yynn2 = 6
for _, b := range yyq2 {
if b {
yynn2++
}
}
r.WriteMapStart(yynn2)
yynn2 = 0
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[0] {
yym4 := z.EncBinary()
_ = yym4
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.Kind))
}
} else {
r.EncodeString(codecSelferCcUTF81234, "")
}
} else {
if yyq2[0] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `kind`)
r.WriteMapElemValue()
yym5 := z.EncBinary()
_ = yym5
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.Kind))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[1] {
yym7 := z.EncBinary()
_ = yym7
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.APIVersion))
}
} else {
r.EncodeString(codecSelferCcUTF81234, "")
}
} else {
if yyq2[1] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `apiVersion`)
r.WriteMapElemValue()
yym8 := z.EncBinary()
_ = yym8
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.APIVersion))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym10 := z.EncBinary()
_ = yym10
if false {
} else if yyxt10 := z.Extension(z.I2Rtid(x.DeschedulingInterval)); yyxt10 != nil {
z.EncExtension(x.DeschedulingInterval, yyxt10)
} else {
r.EncodeInt(int64(x.DeschedulingInterval))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `DeschedulingInterval`)
r.WriteMapElemValue()
yym11 := z.EncBinary()
_ = yym11
if false {
} else if yyxt11 := z.Extension(z.I2Rtid(x.DeschedulingInterval)); yyxt11 != nil {
z.EncExtension(x.DeschedulingInterval, yyxt11)
} else {
r.EncodeInt(int64(x.DeschedulingInterval))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym13 := z.EncBinary()
_ = yym13
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.KubeconfigFile))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `KubeconfigFile`)
r.WriteMapElemValue()
yym14 := z.EncBinary()
_ = yym14
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.KubeconfigFile))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym16 := z.EncBinary()
_ = yym16
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.PolicyConfigFile))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `PolicyConfigFile`)
r.WriteMapElemValue()
yym17 := z.EncBinary()
_ = yym17
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.PolicyConfigFile))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym19 := z.EncBinary()
_ = yym19
if false {
} else {
r.EncodeBool(bool(x.DryRun))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `DryRun`)
r.WriteMapElemValue()
yym20 := z.EncBinary()
_ = yym20
if false {
} else {
r.EncodeBool(bool(x.DryRun))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym22 := z.EncBinary()
_ = yym22
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.NodeSelector))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `NodeSelector`)
r.WriteMapElemValue()
yym23 := z.EncBinary()
_ = yym23
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.NodeSelector))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym25 := z.EncBinary()
_ = yym25
if false {
} else {
r.EncodeInt(int64(x.MaxNoOfPodsToEvictPerNode))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `MaxNoOfPodsToEvictPerNode`)
r.WriteMapElemValue()
yym26 := z.EncBinary()
_ = yym26
if false {
} else {
r.EncodeInt(int64(x.MaxNoOfPodsToEvictPerNode))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayEnd()
} else {
r.WriteMapEnd()
}
}
}
}
func (x *DeschedulerConfiguration) CodecDecodeSelf(d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yym1 := z.DecBinary()
_ = yym1
if false {
} else if yyxt1 := z.Extension(z.I2Rtid(x)); yyxt1 != nil {
z.DecExtension(x, yyxt1)
} else {
yyct2 := r.ContainerType()
if yyct2 == codecSelferValueTypeMap1234 {
yyl2 := r.ReadMapStart()
if yyl2 == 0 {
r.ReadMapEnd()
} else {
x.codecDecodeSelfFromMap(yyl2, d)
}
} else if yyct2 == codecSelferValueTypeArray1234 {
yyl2 := r.ReadArrayStart()
if yyl2 == 0 {
r.ReadArrayEnd()
} else {
x.codecDecodeSelfFromArray(yyl2, d)
}
} else {
panic(errCodecSelferOnlyMapOrArrayEncodeToStruct1234)
}
}
}
func (x *DeschedulerConfiguration) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyhl3 bool = l >= 0
for yyj3 := 0; ; yyj3++ {
if yyhl3 {
if yyj3 >= l {
break
}
} else {
if r.CheckBreak() {
break
}
}
r.ReadMapElemKey()
yys3 := z.StringView(r.DecStructFieldKey(codecSelferValueTypeString1234, z.DecScratchArrayBuffer()))
r.ReadMapElemValue()
switch yys3 {
case "kind":
if r.TryDecodeAsNil() {
x.Kind = ""
} else {
yyv4 := &x.Kind
yym5 := z.DecBinary()
_ = yym5
if false {
} else {
*((*string)(yyv4)) = r.DecodeString()
}
}
case "apiVersion":
if r.TryDecodeAsNil() {
x.APIVersion = ""
} else {
yyv6 := &x.APIVersion
yym7 := z.DecBinary()
_ = yym7
if false {
} else {
*((*string)(yyv6)) = r.DecodeString()
}
}
case "DeschedulingInterval":
if r.TryDecodeAsNil() {
x.DeschedulingInterval = 0
} else {
yyv8 := &x.DeschedulingInterval
yym9 := z.DecBinary()
_ = yym9
if false {
} else if yyxt9 := z.Extension(z.I2Rtid(yyv8)); yyxt9 != nil {
z.DecExtension(yyv8, yyxt9)
} else {
*((*int64)(yyv8)) = int64(r.DecodeInt(64))
}
}
case "KubeconfigFile":
if r.TryDecodeAsNil() {
x.KubeconfigFile = ""
} else {
yyv10 := &x.KubeconfigFile
yym11 := z.DecBinary()
_ = yym11
if false {
} else {
*((*string)(yyv10)) = r.DecodeString()
}
}
case "PolicyConfigFile":
if r.TryDecodeAsNil() {
x.PolicyConfigFile = ""
} else {
yyv12 := &x.PolicyConfigFile
yym13 := z.DecBinary()
_ = yym13
if false {
} else {
*((*string)(yyv12)) = r.DecodeString()
}
}
case "DryRun":
if r.TryDecodeAsNil() {
x.DryRun = false
} else {
yyv14 := &x.DryRun
yym15 := z.DecBinary()
_ = yym15
if false {
} else {
*((*bool)(yyv14)) = r.DecodeBool()
}
}
case "NodeSelector":
if r.TryDecodeAsNil() {
x.NodeSelector = ""
} else {
yyv16 := &x.NodeSelector
yym17 := z.DecBinary()
_ = yym17
if false {
} else {
*((*string)(yyv16)) = r.DecodeString()
}
}
case "MaxNoOfPodsToEvictPerNode":
if r.TryDecodeAsNil() {
x.MaxNoOfPodsToEvictPerNode = 0
} else {
yyv18 := &x.MaxNoOfPodsToEvictPerNode
yym19 := z.DecBinary()
_ = yym19
if false {
} else {
*((*int)(yyv18)) = int(r.DecodeInt(codecSelferBitsize1234))
}
}
default:
z.DecStructFieldNotFound(-1, yys3)
} // end switch yys3
} // end for yyj3
r.ReadMapEnd()
}
func (x *DeschedulerConfiguration) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj20 int
var yyb20 bool
var yyhl20 bool = l >= 0
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.Kind = ""
} else {
yyv21 := &x.Kind
yym22 := z.DecBinary()
_ = yym22
if false {
} else {
*((*string)(yyv21)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.APIVersion = ""
} else {
yyv23 := &x.APIVersion
yym24 := z.DecBinary()
_ = yym24
if false {
} else {
*((*string)(yyv23)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.DeschedulingInterval = 0
} else {
yyv25 := &x.DeschedulingInterval
yym26 := z.DecBinary()
_ = yym26
if false {
} else if yyxt26 := z.Extension(z.I2Rtid(yyv25)); yyxt26 != nil {
z.DecExtension(yyv25, yyxt26)
} else {
*((*int64)(yyv25)) = int64(r.DecodeInt(64))
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.KubeconfigFile = ""
} else {
yyv27 := &x.KubeconfigFile
yym28 := z.DecBinary()
_ = yym28
if false {
} else {
*((*string)(yyv27)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.PolicyConfigFile = ""
} else {
yyv29 := &x.PolicyConfigFile
yym30 := z.DecBinary()
_ = yym30
if false {
} else {
*((*string)(yyv29)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.DryRun = false
} else {
yyv31 := &x.DryRun
yym32 := z.DecBinary()
_ = yym32
if false {
} else {
*((*bool)(yyv31)) = r.DecodeBool()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.NodeSelector = ""
} else {
yyv33 := &x.NodeSelector
yym34 := z.DecBinary()
_ = yym34
if false {
} else {
*((*string)(yyv33)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.MaxNoOfPodsToEvictPerNode = 0
} else {
yyv35 := &x.MaxNoOfPodsToEvictPerNode
yym36 := z.DecBinary()
_ = yym36
if false {
} else {
*((*int)(yyv35)) = int(r.DecodeInt(codecSelferBitsize1234))
}
}
for {
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
break
}
r.ReadArrayElem()
z.DecStructFieldNotFound(yyj20-1, "")
}
r.ReadArrayEnd()
}


@@ -45,4 +45,7 @@ type DeschedulerConfiguration struct {
    // MaxNoOfPodsToEvictPerNode restricts the maximum number of pods to be evicted per node.
    MaxNoOfPodsToEvictPerNode int

    // EvictLocalStoragePods allows pods using local storage to be evicted.
    EvictLocalStoragePods bool
}
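
EvictLocalStoragePods presumably gates whether pods mounting local storage may be evicted. A hypothetical filter sketching the kind of check such a flag drives (the function and its placement are assumptions, not the descheduler's actual implementation):

package example

import (
    v1 "k8s.io/api/core/v1"
)

// isEvictableWithLocalStorage is a hypothetical filter: when the flag is
// off, any pod mounting local storage (an emptyDir or hostPath volume) is
// treated as non-evictable.
func isEvictableWithLocalStorage(pod *v1.Pod, evictLocalStoragePods bool) bool {
    if evictLocalStoragePods {
        return true
    }
    for _, volume := range pod.Spec.Volumes {
        if volume.EmptyDir != nil || volume.HostPath != nil {
            return false
        }
    }
    return true
}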


@@ -15,10 +15,10 @@ limitations under the License.
*/
// +k8s:deepcopy-gen=package,register
// +k8s:conversion-gen=github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig
// +k8s:conversion-gen=sigs.k8s.io/descheduler/pkg/apis/componentconfig
// +k8s:defaulter-gen=TypeMeta
// Package v1alpha1 is the v1alpha1 version of the descheduler's componentconfig API
// +groupName=deschedulercomponentconfig
package v1alpha1 // import "github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig/v1alpha1"
package v1alpha1 // import "sigs.k8s.io/descheduler/pkg/apis/componentconfig/v1alpha1"


@@ -1,664 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by codecgen - DO NOT EDIT.
package v1alpha1
import (
"errors"
"fmt"
codec1978 "github.com/ugorji/go/codec"
pkg1_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"reflect"
"runtime"
time "time"
)
const (
// ----- content types ----
codecSelferCcUTF81234 = 1
codecSelferCcRAW1234 = 0
// ----- value types used ----
codecSelferValueTypeArray1234 = 10
codecSelferValueTypeMap1234 = 9
codecSelferValueTypeString1234 = 6
codecSelferValueTypeInt1234 = 2
codecSelferValueTypeUint1234 = 3
codecSelferValueTypeFloat1234 = 4
)
var (
codecSelferBitsize1234 = uint8(reflect.TypeOf(uint(0)).Bits())
errCodecSelferOnlyMapOrArrayEncodeToStruct1234 = errors.New(`only encoded map or array can be decoded into a struct`)
)
type codecSelfer1234 struct{}
func init() {
if codec1978.GenVersion != 8 {
_, file, _, _ := runtime.Caller(0)
err := fmt.Errorf("codecgen version mismatch: current: %v, need %v. Re-generate file: %v",
8, codec1978.GenVersion, file)
panic(err)
}
if false { // reference the types, but skip this branch at build/run time
var v0 pkg1_v1.TypeMeta
var v1 time.Duration
_, _ = v0, v1
}
}
func (x *DeschedulerConfiguration) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
if x == nil {
r.EncodeNil()
} else {
yym1 := z.EncBinary()
_ = yym1
if false {
} else if yyxt1 := z.Extension(z.I2Rtid(x)); yyxt1 != nil {
z.EncExtension(x, yyxt1)
} else {
yysep2 := !z.EncBinary()
yy2arr2 := z.EncBasicHandle().StructToArray
var yyq2 [8]bool
_ = yyq2
_, _ = yysep2, yy2arr2
const yyr2 bool = false
yyq2[0] = x.Kind != ""
yyq2[1] = x.APIVersion != ""
yyq2[2] = x.DeschedulingInterval != 0
yyq2[4] = x.PolicyConfigFile != ""
yyq2[5] = x.DryRun != false
yyq2[6] = x.NodeSelector != ""
yyq2[7] = x.MaxNoOfPodsToEvictPerNode != 0
if yyr2 || yy2arr2 {
r.WriteArrayStart(8)
} else {
var yynn2 = 1
for _, b := range yyq2 {
if b {
yynn2++
}
}
r.WriteMapStart(yynn2)
yynn2 = 0
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[0] {
yym4 := z.EncBinary()
_ = yym4
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.Kind))
}
} else {
r.EncodeString(codecSelferCcUTF81234, "")
}
} else {
if yyq2[0] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `kind`)
r.WriteMapElemValue()
yym5 := z.EncBinary()
_ = yym5
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.Kind))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[1] {
yym7 := z.EncBinary()
_ = yym7
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.APIVersion))
}
} else {
r.EncodeString(codecSelferCcUTF81234, "")
}
} else {
if yyq2[1] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `apiVersion`)
r.WriteMapElemValue()
yym8 := z.EncBinary()
_ = yym8
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.APIVersion))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[2] {
yym10 := z.EncBinary()
_ = yym10
if false {
} else if yyxt10 := z.Extension(z.I2Rtid(x.DeschedulingInterval)); yyxt10 != nil {
z.EncExtension(x.DeschedulingInterval, yyxt10)
} else {
r.EncodeInt(int64(x.DeschedulingInterval))
}
} else {
r.EncodeInt(0)
}
} else {
if yyq2[2] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `deschedulingInterval`)
r.WriteMapElemValue()
yym11 := z.EncBinary()
_ = yym11
if false {
} else if yyxt11 := z.Extension(z.I2Rtid(x.DeschedulingInterval)); yyxt11 != nil {
z.EncExtension(x.DeschedulingInterval, yyxt11)
} else {
r.EncodeInt(int64(x.DeschedulingInterval))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
yym13 := z.EncBinary()
_ = yym13
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.KubeconfigFile))
}
} else {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `kubeconfigFile`)
r.WriteMapElemValue()
yym14 := z.EncBinary()
_ = yym14
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.KubeconfigFile))
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[4] {
yym16 := z.EncBinary()
_ = yym16
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.PolicyConfigFile))
}
} else {
r.EncodeString(codecSelferCcUTF81234, "")
}
} else {
if yyq2[4] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `policyConfigFile`)
r.WriteMapElemValue()
yym17 := z.EncBinary()
_ = yym17
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.PolicyConfigFile))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[5] {
yym19 := z.EncBinary()
_ = yym19
if false {
} else {
r.EncodeBool(bool(x.DryRun))
}
} else {
r.EncodeBool(false)
}
} else {
if yyq2[5] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `dryRun`)
r.WriteMapElemValue()
yym20 := z.EncBinary()
_ = yym20
if false {
} else {
r.EncodeBool(bool(x.DryRun))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[6] {
yym22 := z.EncBinary()
_ = yym22
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.NodeSelector))
}
} else {
r.EncodeString(codecSelferCcUTF81234, "")
}
} else {
if yyq2[6] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `nodeSelector`)
r.WriteMapElemValue()
yym23 := z.EncBinary()
_ = yym23
if false {
} else {
r.EncodeString(codecSelferCcUTF81234, string(x.NodeSelector))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayElem()
if yyq2[7] {
yym25 := z.EncBinary()
_ = yym25
if false {
} else {
r.EncodeInt(int64(x.MaxNoOfPodsToEvictPerNode))
}
} else {
r.EncodeInt(0)
}
} else {
if yyq2[7] {
r.WriteMapElemKey()
r.EncStructFieldKey(codecSelferValueTypeString1234, `maxNoOfPodsToEvictPerNode`)
r.WriteMapElemValue()
yym26 := z.EncBinary()
_ = yym26
if false {
} else {
r.EncodeInt(int64(x.MaxNoOfPodsToEvictPerNode))
}
}
}
if yyr2 || yy2arr2 {
r.WriteArrayEnd()
} else {
r.WriteMapEnd()
}
}
}
}
func (x *DeschedulerConfiguration) CodecDecodeSelf(d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yym1 := z.DecBinary()
_ = yym1
if false {
} else if yyxt1 := z.Extension(z.I2Rtid(x)); yyxt1 != nil {
z.DecExtension(x, yyxt1)
} else {
yyct2 := r.ContainerType()
if yyct2 == codecSelferValueTypeMap1234 {
yyl2 := r.ReadMapStart()
if yyl2 == 0 {
r.ReadMapEnd()
} else {
x.codecDecodeSelfFromMap(yyl2, d)
}
} else if yyct2 == codecSelferValueTypeArray1234 {
yyl2 := r.ReadArrayStart()
if yyl2 == 0 {
r.ReadArrayEnd()
} else {
x.codecDecodeSelfFromArray(yyl2, d)
}
} else {
panic(errCodecSelferOnlyMapOrArrayEncodeToStruct1234)
}
}
}
func (x *DeschedulerConfiguration) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyhl3 bool = l >= 0
for yyj3 := 0; ; yyj3++ {
if yyhl3 {
if yyj3 >= l {
break
}
} else {
if r.CheckBreak() {
break
}
}
r.ReadMapElemKey()
yys3 := z.StringView(r.DecStructFieldKey(codecSelferValueTypeString1234, z.DecScratchArrayBuffer()))
r.ReadMapElemValue()
switch yys3 {
case "kind":
if r.TryDecodeAsNil() {
x.Kind = ""
} else {
yyv4 := &x.Kind
yym5 := z.DecBinary()
_ = yym5
if false {
} else {
*((*string)(yyv4)) = r.DecodeString()
}
}
case "apiVersion":
if r.TryDecodeAsNil() {
x.APIVersion = ""
} else {
yyv6 := &x.APIVersion
yym7 := z.DecBinary()
_ = yym7
if false {
} else {
*((*string)(yyv6)) = r.DecodeString()
}
}
case "deschedulingInterval":
if r.TryDecodeAsNil() {
x.DeschedulingInterval = 0
} else {
yyv8 := &x.DeschedulingInterval
yym9 := z.DecBinary()
_ = yym9
if false {
} else if yyxt9 := z.Extension(z.I2Rtid(yyv8)); yyxt9 != nil {
z.DecExtension(yyv8, yyxt9)
} else {
*((*int64)(yyv8)) = int64(r.DecodeInt(64))
}
}
case "kubeconfigFile":
if r.TryDecodeAsNil() {
x.KubeconfigFile = ""
} else {
yyv10 := &x.KubeconfigFile
yym11 := z.DecBinary()
_ = yym11
if false {
} else {
*((*string)(yyv10)) = r.DecodeString()
}
}
case "policyConfigFile":
if r.TryDecodeAsNil() {
x.PolicyConfigFile = ""
} else {
yyv12 := &x.PolicyConfigFile
yym13 := z.DecBinary()
_ = yym13
if false {
} else {
*((*string)(yyv12)) = r.DecodeString()
}
}
case "dryRun":
if r.TryDecodeAsNil() {
x.DryRun = false
} else {
yyv14 := &x.DryRun
yym15 := z.DecBinary()
_ = yym15
if false {
} else {
*((*bool)(yyv14)) = r.DecodeBool()
}
}
case "nodeSelector":
if r.TryDecodeAsNil() {
x.NodeSelector = ""
} else {
yyv16 := &x.NodeSelector
yym17 := z.DecBinary()
_ = yym17
if false {
} else {
*((*string)(yyv16)) = r.DecodeString()
}
}
case "maxNoOfPodsToEvictPerNode":
if r.TryDecodeAsNil() {
x.MaxNoOfPodsToEvictPerNode = 0
} else {
yyv18 := &x.MaxNoOfPodsToEvictPerNode
yym19 := z.DecBinary()
_ = yym19
if false {
} else {
*((*int)(yyv18)) = int(r.DecodeInt(codecSelferBitsize1234))
}
}
default:
z.DecStructFieldNotFound(-1, yys3)
} // end switch yys3
} // end for yyj3
r.ReadMapEnd()
}
func (x *DeschedulerConfiguration) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj20 int
var yyb20 bool
var yyhl20 bool = l >= 0
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.Kind = ""
} else {
yyv21 := &x.Kind
yym22 := z.DecBinary()
_ = yym22
if false {
} else {
*((*string)(yyv21)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.APIVersion = ""
} else {
yyv23 := &x.APIVersion
yym24 := z.DecBinary()
_ = yym24
if false {
} else {
*((*string)(yyv23)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.DeschedulingInterval = 0
} else {
yyv25 := &x.DeschedulingInterval
yym26 := z.DecBinary()
_ = yym26
if false {
} else if yyxt26 := z.Extension(z.I2Rtid(yyv25)); yyxt26 != nil {
z.DecExtension(yyv25, yyxt26)
} else {
*((*int64)(yyv25)) = int64(r.DecodeInt(64))
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.KubeconfigFile = ""
} else {
yyv27 := &x.KubeconfigFile
yym28 := z.DecBinary()
_ = yym28
if false {
} else {
*((*string)(yyv27)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.PolicyConfigFile = ""
} else {
yyv29 := &x.PolicyConfigFile
yym30 := z.DecBinary()
_ = yym30
if false {
} else {
*((*string)(yyv29)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.DryRun = false
} else {
yyv31 := &x.DryRun
yym32 := z.DecBinary()
_ = yym32
if false {
} else {
*((*bool)(yyv31)) = r.DecodeBool()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.NodeSelector = ""
} else {
yyv33 := &x.NodeSelector
yym34 := z.DecBinary()
_ = yym34
if false {
} else {
*((*string)(yyv33)) = r.DecodeString()
}
}
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
r.ReadArrayEnd()
return
}
r.ReadArrayElem()
if r.TryDecodeAsNil() {
x.MaxNoOfPodsToEvictPerNode = 0
} else {
yyv35 := &x.MaxNoOfPodsToEvictPerNode
yym36 := z.DecBinary()
_ = yym36
if false {
} else {
*((*int)(yyv35)) = int(r.DecodeInt(codecSelferBitsize1234))
}
}
for {
yyj20++
if yyhl20 {
yyb20 = yyj20 > l
} else {
yyb20 = r.CheckBreak()
}
if yyb20 {
break
}
r.ReadArrayElem()
z.DecStructFieldNotFound(yyj20-1, "")
}
r.ReadArrayEnd()
}


@@ -45,4 +45,7 @@ type DeschedulerConfiguration struct {
    // MaxNoOfPodsToEvictPerNode restricts the maximum number of pods to be evicted per node.
    MaxNoOfPodsToEvictPerNode int `json:"maxNoOfPodsToEvictPerNode,omitempty"`

    // EvictLocalStoragePods allows pods using local storage to be evicted.
    EvictLocalStoragePods bool `json:"evictLocalStoragePods,omitempty"`
}


@@ -1,7 +1,7 @@
// +build !ignore_autogenerated

/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

@@ -16,16 +16,16 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

// This file was autogenerated by conversion-gen. Do not edit it manually!
// Code generated by conversion-gen. DO NOT EDIT.

package v1alpha1

import (
    time "time"

    componentconfig "github.com/kubernetes-incubator/descheduler/pkg/apis/componentconfig"
    conversion "k8s.io/apimachinery/pkg/conversion"
    runtime "k8s.io/apimachinery/pkg/runtime"
    componentconfig "sigs.k8s.io/descheduler/pkg/apis/componentconfig"
)

func init() {

@@ -34,11 +34,18 @@ func init() {

// RegisterConversions adds conversion functions to the given scheme.
// Public to allow building arbitrary schemes.
func RegisterConversions(scheme *runtime.Scheme) error {
    return scheme.AddGeneratedConversionFuncs(
        Convert_v1alpha1_DeschedulerConfiguration_To_componentconfig_DeschedulerConfiguration,
        Convert_componentconfig_DeschedulerConfiguration_To_v1alpha1_DeschedulerConfiguration,
    )
func RegisterConversions(s *runtime.Scheme) error {
    if err := s.AddGeneratedConversionFunc((*DeschedulerConfiguration)(nil), (*componentconfig.DeschedulerConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_v1alpha1_DeschedulerConfiguration_To_componentconfig_DeschedulerConfiguration(a.(*DeschedulerConfiguration), b.(*componentconfig.DeschedulerConfiguration), scope)
    }); err != nil {
        return err
    }
    if err := s.AddGeneratedConversionFunc((*componentconfig.DeschedulerConfiguration)(nil), (*DeschedulerConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error {
        return Convert_componentconfig_DeschedulerConfiguration_To_v1alpha1_DeschedulerConfiguration(a.(*componentconfig.DeschedulerConfiguration), b.(*DeschedulerConfiguration), scope)
    }); err != nil {
        return err
    }
    return nil
}

func autoConvert_v1alpha1_DeschedulerConfiguration_To_componentconfig_DeschedulerConfiguration(in *DeschedulerConfiguration, out *componentconfig.DeschedulerConfiguration, s conversion.Scope) error {

@@ -48,6 +55,7 @@ func autoConvert_v1alpha1_DeschedulerConfiguration_To_componentconfig_Deschedule
    out.DryRun = in.DryRun
    out.NodeSelector = in.NodeSelector
    out.MaxNoOfPodsToEvictPerNode = in.MaxNoOfPodsToEvictPerNode
    out.EvictLocalStoragePods = in.EvictLocalStoragePods
    return nil
}

@@ -63,6 +71,7 @@ func autoConvert_componentconfig_DeschedulerConfiguration_To_v1alpha1_Deschedule
    out.DryRun = in.DryRun
    out.NodeSelector = in.NodeSelector
    out.MaxNoOfPodsToEvictPerNode = in.MaxNoOfPodsToEvictPerNode
    out.EvictLocalStoragePods = in.EvictLocalStoragePods
    return nil
}

View File

@@ -1,7 +1,7 @@
// +build !ignore_autogenerated
/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -16,7 +16,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// This file was autogenerated by deepcopy-gen. Do not edit it manually!
// Code generated by deepcopy-gen. DO NOT EDIT.
package v1alpha1
@@ -45,7 +45,6 @@ func (in *DeschedulerConfiguration) DeepCopy() *DeschedulerConfiguration {
func (in *DeschedulerConfiguration) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
} else {
return nil
}
return nil
}

View File

@@ -1,7 +1,7 @@
// +build !ignore_autogenerated
/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -16,7 +16,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// This file was autogenerated by defaulter-gen. Do not edit it manually!
// Code generated by defaulter-gen. DO NOT EDIT.
package v1alpha1

View File

@@ -1,7 +1,7 @@
// +build !ignore_autogenerated
/*
Copyright 2018 The Kubernetes Authors.
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -16,7 +16,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// This file was autogenerated by deepcopy-gen. Do not edit it manually!
// Code generated by deepcopy-gen. DO NOT EDIT.
package componentconfig
@@ -45,7 +45,6 @@ func (in *DeschedulerConfiguration) DeepCopy() *DeschedulerConfiguration {
func (in *DeschedulerConfiguration) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
} else {
return nil
}
return nil
}

View File

@@ -20,6 +20,7 @@ import (
"fmt"
clientset "k8s.io/client-go/kubernetes"
// Ensure to load all auth plugins.
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"

View File

@@ -17,19 +17,26 @@ limitations under the License.
package descheduler
import (
"context"
"fmt"
"github.com/golang/glog"
"k8s.io/api/core/v1"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/klog/v2"
"github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
"github.com/kubernetes-incubator/descheduler/pkg/descheduler/client"
eutils "github.com/kubernetes-incubator/descheduler/pkg/descheduler/evictions/utils"
nodeutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/node"
"github.com/kubernetes-incubator/descheduler/pkg/descheduler/strategies"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"sigs.k8s.io/descheduler/cmd/descheduler/app/options"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/client"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
eutils "sigs.k8s.io/descheduler/pkg/descheduler/evictions/utils"
nodeutil "sigs.k8s.io/descheduler/pkg/descheduler/node"
"sigs.k8s.io/descheduler/pkg/descheduler/strategies"
)
func Run(rs *options.DeschedulerServer) error {
ctx := context.Background()
rsclient, err := client.CreateClient(rs.KubeconfigFile)
if err != nil {
return err
@@ -50,21 +57,62 @@ func Run(rs *options.DeschedulerServer) error {
}
stopChannel := make(chan struct{})
nodes, err := nodeutil.ReadyNodes(rs.Client, rs.NodeSelector, stopChannel)
if err != nil {
return err
return RunDeschedulerStrategies(ctx, rs, deschedulerPolicy, evictionPolicyGroupVersion, stopChannel)
}
type strategyFunction func(ctx context.Context, client clientset.Interface, strategy api.DeschedulerStrategy, nodes []*v1.Node, podEvictor *evictions.PodEvictor)
func RunDeschedulerStrategies(ctx context.Context, rs *options.DeschedulerServer, deschedulerPolicy *api.DeschedulerPolicy, evictionPolicyGroupVersion string, stopChannel chan struct{}) error {
sharedInformerFactory := informers.NewSharedInformerFactory(rs.Client, 0)
nodeInformer := sharedInformerFactory.Core().V1().Nodes()
sharedInformerFactory.Start(stopChannel)
sharedInformerFactory.WaitForCacheSync(stopChannel)
strategyFuncs := map[string]strategyFunction{
"RemoveDuplicates": strategies.RemoveDuplicatePods,
"LowNodeUtilization": strategies.LowNodeUtilization,
"RemovePodsViolatingInterPodAntiAffinity": strategies.RemovePodsViolatingInterPodAntiAffinity,
"RemovePodsViolatingNodeAffinity": strategies.RemovePodsViolatingNodeAffinity,
"RemovePodsViolatingNodeTaints": strategies.RemovePodsViolatingNodeTaints,
"RemovePodsHavingTooManyRestarts": strategies.RemovePodsHavingTooManyRestarts,
"PodLifeTime": strategies.PodLifeTime,
}
if len(nodes) <= 1 {
glog.V(1).Infof("The cluster size is 0 or 1 meaning eviction causes service disruption or degradation. So aborting..")
return nil
}
wait.Until(func() {
nodes, err := nodeutil.ReadyNodes(ctx, rs.Client, nodeInformer, rs.NodeSelector, stopChannel)
if err != nil {
klog.V(1).Infof("Unable to get ready nodes: %v", err)
close(stopChannel)
return
}
nodePodCount := strategies.InitializeNodePodCount(nodes)
strategies.RemoveDuplicatePods(rs, deschedulerPolicy.Strategies["RemoveDuplicates"], evictionPolicyGroupVersion, nodes, nodePodCount)
strategies.LowNodeUtilization(rs, deschedulerPolicy.Strategies["LowNodeUtilization"], evictionPolicyGroupVersion, nodes, nodePodCount)
strategies.RemovePodsViolatingInterPodAntiAffinity(rs, deschedulerPolicy.Strategies["RemovePodsViolatingInterPodAntiAffinity"], evictionPolicyGroupVersion, nodes, nodePodCount)
strategies.RemovePodsViolatingNodeAffinity(rs, deschedulerPolicy.Strategies["RemovePodsViolatingNodeAffinity"], evictionPolicyGroupVersion, nodes, nodePodCount)
if len(nodes) <= 1 {
klog.V(1).Infof("The cluster size is 0 or 1 meaning eviction causes service disruption or degradation. So aborting..")
close(stopChannel)
return
}
podEvictor := evictions.NewPodEvictor(
rs.Client,
evictionPolicyGroupVersion,
rs.DryRun,
rs.MaxNoOfPodsToEvictPerNode,
nodes,
rs.EvictLocalStoragePods,
)
for name, f := range strategyFuncs {
if strategy := deschedulerPolicy.Strategies[api.StrategyName(name)]; strategy.Enabled {
f(ctx, rs.Client, strategy, nodes, podEvictor)
}
}
// If no interval was specified, close the stopChannel to end the wait.Until loop after one iteration
if rs.DeschedulingInterval.Seconds() == 0 {
close(stopChannel)
}
}, rs.DeschedulingInterval, stopChannel)
return nil
}
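To make the control flow above concrete, here is a minimal, hypothetical sketch (names taken from the diff; the client wiring is assumed) of driving the strategies exactly once. With DeschedulingInterval left at zero, the wait.Until body closes stopChannel after its first pass and the call returns:

rs := options.NewDeschedulerServer()
rs.Client = client // a previously constructed clientset.Interface (assumption)
policy := &api.DeschedulerPolicy{
	Strategies: api.StrategyList{
		"RemoveDuplicates": api.DeschedulerStrategy{Enabled: true},
	},
}
stopChannel := make(chan struct{})
// DeschedulingInterval is left at 0, so the loop body closes stopChannel
// after its first iteration and RunDeschedulerStrategies returns.
if err := RunDeschedulerStrategies(context.Background(), rs, policy, "v1beta1", stopChannel); err != nil {
	klog.Fatalf("descheduler run failed: %v", err)
}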

View File

@@ -0,0 +1,94 @@
package descheduler
import (
"context"
"fmt"
"strings"
"testing"
"time"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
fakeclientset "k8s.io/client-go/kubernetes/fake"
"sigs.k8s.io/descheduler/cmd/descheduler/app/options"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/test"
)
func TestTaintsUpdated(t *testing.T) {
ctx := context.Background()
n1 := test.BuildTestNode("n1", 2000, 3000, 10, nil)
n2 := test.BuildTestNode("n2", 2000, 3000, 10, nil)
p1 := test.BuildTestPod(fmt.Sprintf("pod_1_%s", n1.Name), 200, 0, n1.Name, nil)
p1.ObjectMeta.OwnerReferences = []metav1.OwnerReference{
{},
}
client := fakeclientset.NewSimpleClientset(n1, n2, p1)
dp := &api.DeschedulerPolicy{
Strategies: api.StrategyList{
"RemovePodsViolatingNodeTaints": api.DeschedulerStrategy{
Enabled: true,
},
},
}
stopChannel := make(chan struct{})
defer close(stopChannel)
rs := options.NewDeschedulerServer()
rs.Client = client
rs.DeschedulingInterval = 100 * time.Millisecond
go func() {
err := RunDeschedulerStrategies(ctx, rs, dp, "v1beta1", stopChannel)
if err != nil {
t.Fatalf("Unable to run descheduler strategies: %v", err)
}
}()
// Wait for a few cycles and then verify that the only pod still exists
time.Sleep(300 * time.Millisecond)
pods, err := client.CoreV1().Pods(p1.Namespace).List(ctx, metav1.ListOptions{})
if err != nil {
t.Errorf("Unable to list pods: %v", err)
}
if len(pods.Items) < 1 {
t.Errorf("The pod was evicted before a node was tained")
}
n1WithTaint := n1.DeepCopy()
n1WithTaint.Spec.Taints = []v1.Taint{
{
Key: "key",
Value: "value",
Effect: v1.TaintEffectNoSchedule,
},
}
if _, err := client.CoreV1().Nodes().Update(ctx, n1WithTaint, metav1.UpdateOptions{}); err != nil {
t.Fatalf("Unable to update node: %v\n", err)
}
if err := wait.PollImmediate(100*time.Millisecond, time.Second, func() (bool, error) {
// Getting an evicted pod through the fake client results in a panic:
//pods, err := client.CoreV1().Pods(p1.Namespace).Get(p1.Name, metav1.GetOptions{})
// List is better; it does not panic.
// Though once the pod is evicted, List starts to error with "can't assign or convert v1beta1.Eviction into v1.Pod"
pods, err := client.CoreV1().Pods(p1.Namespace).List(ctx, metav1.ListOptions{})
if err == nil {
if len(pods.Items) > 0 {
return false, nil
}
return true, nil
}
if strings.Contains(err.Error(), "can't assign or convert v1beta1.Eviction into v1.Pod") {
return true, nil
}
return false, nil
}); err != nil {
t.Fatalf("Unable to evict pod, node taint did not get propagated to descheduler strategies")
}
}

View File

@@ -17,20 +17,147 @@ limitations under the License.
package evictions
import (
"context"
"fmt"
"strings"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
policy "k8s.io/api/policy/v1beta1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/errors"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
clientcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/tools/record"
"k8s.io/klog/v2"
podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
"sigs.k8s.io/descheduler/pkg/utils"
eutils "github.com/kubernetes-incubator/descheduler/pkg/descheduler/evictions/utils"
eutils "sigs.k8s.io/descheduler/pkg/descheduler/evictions/utils"
)
func EvictPod(client clientset.Interface, pod *v1.Pod, policyGroupVersion string, dryRun bool) (bool, error) {
const (
evictPodAnnotationKey = "descheduler.alpha.kubernetes.io/evict"
)
// nodePodEvictedCount keeps count of the pods evicted on each node
type nodePodEvictedCount map[*v1.Node]int
type PodEvictor struct {
client clientset.Interface
policyGroupVersion string
dryRun bool
maxPodsToEvictPerNode int
nodepodCount nodePodEvictedCount
evictLocalStoragePods bool
}
func NewPodEvictor(
client clientset.Interface,
policyGroupVersion string,
dryRun bool,
maxPodsToEvictPerNode int,
nodes []*v1.Node,
evictLocalStoragePods bool,
) *PodEvictor {
var nodePodCount = make(nodePodEvictedCount)
for _, node := range nodes {
// Initialize the evicted-pod count for each node to 0.
nodePodCount[node] = 0
}
return &PodEvictor{
client: client,
policyGroupVersion: policyGroupVersion,
dryRun: dryRun,
maxPodsToEvictPerNode: maxPodsToEvictPerNode,
nodepodCount: nodePodCount,
evictLocalStoragePods: evictLocalStoragePods,
}
}
// IsEvictable checks whether a pod can be evicted.
func (pe *PodEvictor) IsEvictable(pod *v1.Pod) bool {
checkErrs := []error{}
if IsCriticalPod(pod) {
checkErrs = append(checkErrs, fmt.Errorf("pod is critical"))
}
ownerRefList := podutil.OwnerRef(pod)
if IsDaemonsetPod(ownerRefList) {
checkErrs = append(checkErrs, fmt.Errorf("pod is a DaemonSet pod"))
}
if len(ownerRefList) == 0 {
checkErrs = append(checkErrs, fmt.Errorf("pod does not have any ownerrefs"))
}
if !pe.evictLocalStoragePods && IsPodWithLocalStorage(pod) {
checkErrs = append(checkErrs, fmt.Errorf("pod has local storage and descheduler is not configured with --evict-local-storage-pods"))
}
if IsMirrorPod(pod) {
checkErrs = append(checkErrs, fmt.Errorf("pod is a mirror pod"))
}
if len(checkErrs) > 0 && !HaveEvictAnnotation(pod) {
klog.V(4).Infof("Pod %s in namespace %s is not evictable: Pod lacks an eviction annotation and fails the following checks: %v", pod.Name, pod.Namespace, errors.NewAggregate(checkErrs).Error())
return false
}
return true
}
// NodeEvicted returns the number of pods evicted from the given node
func (pe *PodEvictor) NodeEvicted(node *v1.Node) int {
return pe.nodepodCount[node]
}
// TotalEvicted returns the total number of pods evicted across all nodes
func (pe *PodEvictor) TotalEvicted() int {
var total int
for _, count := range pe.nodepodCount {
total += count
}
return total
}
// EvictPod returns a non-nil error only when evicting a pod on a node is not
// possible (due to the maxPodsToEvictPerNode constraint). The returned bool is
// true when the pod is evicted on the server side.
func (pe *PodEvictor) EvictPod(ctx context.Context, pod *v1.Pod, node *v1.Node, reasons ...string) (bool, error) {
var reason string
if len(reasons) > 0 {
reason = " (" + strings.Join(reasons, ", ") + ")"
}
if pe.maxPodsToEvictPerNode > 0 && pe.nodepodCount[node]+1 > pe.maxPodsToEvictPerNode {
return false, fmt.Errorf("Maximum number %v of evicted pods per %q node reached", pe.maxPodsToEvictPerNode, node.Name)
}
err := evictPod(ctx, pe.client, pod, pe.policyGroupVersion, pe.dryRun)
if err != nil {
// err is used only for logging purposes
klog.Errorf("Error evicting pod: %#v in namespace %#v%s: %#v", pod.Name, pod.Namespace, reason, err)
return false, nil
}
pe.nodepodCount[node]++
if pe.dryRun {
klog.V(1).Infof("Evicted pod in dry run mode: %#v in namespace %#v%s", pod.Name, pod.Namespace, reason)
} else {
klog.V(1).Infof("Evicted pod: %#v in namespace %#v%s", pod.Name, pod.Namespace, reason)
eventBroadcaster := record.NewBroadcaster()
eventBroadcaster.StartLogging(klog.V(3).Infof)
eventBroadcaster.StartRecordingToSink(&clientcorev1.EventSinkImpl{Interface: pe.client.CoreV1().Events(pod.Namespace)})
r := eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "sigs.k8s.io.descheduler"})
r.Event(pod, v1.EventTypeNormal, "Descheduled", fmt.Sprintf("pod evicted by sigs.k8s.io/descheduler%s", reason))
}
return true, nil
}
func evictPod(ctx context.Context, client clientset.Interface, pod *v1.Pod, policyGroupVersion string, dryRun bool) error {
if dryRun {
return true, nil
return nil
}
deleteOptions := &metav1.DeleteOptions{}
// GracePeriodSeconds ?
@@ -45,15 +172,47 @@ func EvictPod(client clientset.Interface, pod *v1.Pod, policyGroupVersion string
},
DeleteOptions: deleteOptions,
}
err := client.Policy().Evictions(eviction.Namespace).Evict(eviction)
err := client.PolicyV1beta1().Evictions(eviction.Namespace).Evict(ctx, eviction)
if err == nil {
return true, nil
} else if apierrors.IsTooManyRequests(err) {
return false, fmt.Errorf("error when evicting pod (ignoring) %q: %v", pod.Name, err)
} else if apierrors.IsNotFound(err) {
return true, fmt.Errorf("pod not found when evicting %q: %v", pod.Name, err)
} else {
return false, err
if apierrors.IsTooManyRequests(err) {
return fmt.Errorf("error when evicting pod (ignoring) %q: %v", pod.Name, err)
}
if apierrors.IsNotFound(err) {
return fmt.Errorf("pod not found when evicting %q: %v", pod.Name, err)
}
return err
}
func IsCriticalPod(pod *v1.Pod) bool {
return utils.IsCriticalPod(pod)
}
func IsDaemonsetPod(ownerRefList []metav1.OwnerReference) bool {
for _, ownerRef := range ownerRefList {
if ownerRef.Kind == "DaemonSet" {
return true
}
}
return false
}
// IsMirrorPod checks whether the pod is a mirror pod.
func IsMirrorPod(pod *v1.Pod) bool {
return utils.IsMirrorPod(pod)
}
// HaveEvictAnnotation checks if the pod has the evict annotation
func HaveEvictAnnotation(pod *v1.Pod) bool {
_, found := pod.ObjectMeta.Annotations[evictPodAnnotationKey]
return found
}
func IsPodWithLocalStorage(pod *v1.Pod) bool {
for _, volume := range pod.Spec.Volumes {
if volume.HostPath != nil || volume.EmptyDir != nil {
return true
}
}
return false
}
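The EvictPod contract above is easy to misread, so here is a minimal, hypothetical usage sketch (names from the diff; client, nodes, pods, node, and ctx are assumed to exist): an error means the per-node cap was hit and the caller should stop evicting on that node, while a false result with a nil error means the eviction API call failed and was only logged.

podEvictor := NewPodEvictor(client, "v1beta1", false, 5, nodes, false)
for _, pod := range pods {
	if !podEvictor.IsEvictable(pod) {
		continue
	}
	evicted, err := podEvictor.EvictPod(ctx, pod, node, "ExampleStrategy")
	if err != nil {
		break // the per-node cap (maxPodsToEvictPerNode) was reached
	}
	if !evicted {
		continue // the eviction API call failed; it was already logged
	}
	klog.V(1).Infof("total pods evicted so far: %d", podEvictor.TotalEvicted())
}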

View File

@@ -17,24 +17,268 @@ limitations under the License.
package evictions
import (
"github.com/kubernetes-incubator/descheduler/test"
"k8s.io/api/core/v1"
"context"
"testing"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
"testing"
podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
"sigs.k8s.io/descheduler/pkg/utils"
"sigs.k8s.io/descheduler/test"
)
func TestEvictPod(t *testing.T) {
n1 := test.BuildTestNode("node1", 1000, 2000, 9)
p1 := test.BuildTestPod("p1", 400, 0, n1.Name)
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
return true, &v1.PodList{Items: []v1.Pod{*p1}}, nil
})
evicted, _ := EvictPod(fakeClient, p1, "v1", false)
if !evicted {
t.Errorf("Expected %v pod to be evicted", p1.Name)
ctx := context.Background()
node1 := test.BuildTestNode("node1", 1000, 2000, 9, nil)
pod1 := test.BuildTestPod("p1", 400, 0, "node1", nil)
tests := []struct {
description string
node *v1.Node
pod *v1.Pod
pods []v1.Pod
want error
}{
{
description: "test pod eviction - pod present",
node: node1,
pod: pod1,
pods: []v1.Pod{*pod1},
want: nil,
},
{
description: "test pod eviction - pod absent",
node: node1,
pod: pod1,
pods: []v1.Pod{*test.BuildTestPod("p2", 400, 0, "node1", nil), *test.BuildTestPod("p3", 450, 0, "node1", nil)},
want: nil,
},
}
for _, test := range tests {
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
return true, &v1.PodList{Items: test.pods}, nil
})
got := evictPod(ctx, fakeClient, test.pod, "v1", false)
if got != test.want {
t.Errorf("Test error for Desc: %s. Expected %v pod eviction to be %v, got %v", test.description, test.pod.Name, test.want, got)
}
}
}
func TestIsEvictable(t *testing.T) {
n1 := test.BuildTestNode("node1", 1000, 2000, 13, nil)
type testCase struct {
pod *v1.Pod
runBefore func(*v1.Pod)
evictLocalStoragePods bool
result bool
}
testCases := []testCase{
{
pod: test.BuildTestPod("p1", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p2", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.Annotations = map[string]string{"descheduler.alpha.kubernetes.io/evict": "true"}
pod.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p3", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p4", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.Annotations = map[string]string{"descheduler.alpha.kubernetes.io/evict": "true"}
pod.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p5", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
},
evictLocalStoragePods: false,
result: false,
}, {
pod: test.BuildTestPod("p6", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
},
evictLocalStoragePods: true,
result: true,
}, {
pod: test.BuildTestPod("p7", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.Annotations = map[string]string{"descheduler.alpha.kubernetes.io/evict": "true"}
pod.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p8", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
},
evictLocalStoragePods: false,
result: false,
}, {
pod: test.BuildTestPod("p9", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.Annotations = map[string]string{"descheduler.alpha.kubernetes.io/evict": "true"}
pod.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p10", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.Annotations = test.GetMirrorPodAnnotation()
},
evictLocalStoragePods: false,
result: false,
}, {
pod: test.BuildTestPod("p11", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
pod.Annotations = test.GetMirrorPodAnnotation()
pod.Annotations["descheduler.alpha.kubernetes.io/evict"] = "true"
},
evictLocalStoragePods: false,
result: true,
}, {
pod: test.BuildTestPod("p12", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
},
evictLocalStoragePods: false,
result: false,
}, {
pod: test.BuildTestPod("p13", 400, 0, n1.Name, nil),
runBefore: func(pod *v1.Pod) {
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
pod.Annotations = map[string]string{
"descheduler.alpha.kubernetes.io/evict": "true",
}
},
evictLocalStoragePods: false,
result: true,
},
}
for _, test := range testCases {
test.runBefore(test.pod)
podEvictor := &PodEvictor{
evictLocalStoragePods: test.evictLocalStoragePods,
}
result := podEvictor.IsEvictable(test.pod)
if result != test.result {
t.Errorf("IsEvictable should return for pod %s %t, but it returns %t", test.pod.Name, test.result, result)
}
}
}
func TestPodTypes(t *testing.T) {
n1 := test.BuildTestNode("node1", 1000, 2000, 9, nil)
p1 := test.BuildTestPod("p1", 400, 0, n1.Name, nil)
// These won't be evicted.
p2 := test.BuildTestPod("p2", 400, 0, n1.Name, nil)
p3 := test.BuildTestPod("p3", 400, 0, n1.Name, nil)
p4 := test.BuildTestPod("p4", 400, 0, n1.Name, nil)
p5 := test.BuildTestPod("p5", 400, 0, n1.Name, nil)
p6 := test.BuildTestPod("p6", 400, 0, n1.Name, nil)
p6.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p1.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
// The following 4 pods won't get evicted.
// A daemonset.
//p2.Annotations = test.GetDaemonSetAnnotation()
p2.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
// A pod with local storage.
p3.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p3.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
p4.Annotations = test.GetMirrorPodAnnotation()
// A Critical Pod.
p5.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
p5.Spec.Priority = &priority
systemCriticalPriority := utils.SystemCriticalPriority
p5.Spec.Priority = &systemCriticalPriority
if !IsMirrorPod(p4) {
t.Errorf("Expected p4 to be a mirror pod.")
}
if !IsCriticalPod(p5) {
t.Errorf("Expected p5 to be a critical pod.")
}
if !IsPodWithLocalStorage(p3) {
t.Errorf("Expected p3 to be a pod with local storage.")
}
ownerRefList := podutil.OwnerRef(p2)
if !IsDaemonsetPod(ownerRefList) {
t.Errorf("Expected p2 to be a daemonset pod.")
}
ownerRefList = podutil.OwnerRef(p1)
if IsDaemonsetPod(ownerRefList) || IsPodWithLocalStorage(p1) || IsCriticalPod(p1) || IsMirrorPod(p1) {
t.Errorf("Expected p1 to be a normal pod.")
}
}

View File

@@ -17,40 +17,34 @@ limitations under the License.
package node
import (
"time"
"github.com/golang/glog"
"github.com/kubernetes-incubator/descheduler/pkg/utils"
"context"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
coreinformers "k8s.io/client-go/informers/core/v1"
clientset "k8s.io/client-go/kubernetes"
corelisters "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/klog/v2"
"sigs.k8s.io/descheduler/pkg/utils"
)
// ReadyNodes returns ready nodes irrespective of whether they are
// schedulable or not.
func ReadyNodes(client clientset.Interface, nodeSelector string, stopChannel <-chan struct{}) ([]*v1.Node, error) {
func ReadyNodes(ctx context.Context, client clientset.Interface, nodeInformer coreinformers.NodeInformer, nodeSelector string, stopChannel <-chan struct{}) ([]*v1.Node, error) {
ns, err := labels.Parse(nodeSelector)
if err != nil {
return []*v1.Node{}, err
}
var nodes []*v1.Node
nl := GetNodeLister(client, stopChannel)
if nl != nil {
// err is defined above
if nodes, err = nl.List(ns); err != nil {
return []*v1.Node{}, err
}
// err is defined above
if nodes, err = nodeInformer.Lister().List(ns); err != nil {
return []*v1.Node{}, err
}
if len(nodes) == 0 {
glog.V(2).Infof("node lister returned empty list, now fetch directly")
klog.V(2).Infof("node lister returned empty list, now fetch directly")
nItems, err := client.Core().Nodes().List(metav1.ListOptions{LabelSelector: nodeSelector})
nItems, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{LabelSelector: nodeSelector})
if err != nil {
return []*v1.Node{}, err
}
@@ -74,22 +68,6 @@ func ReadyNodes(client clientset.Interface, nodeSelector string, stopChannel <-c
return readyNodes, nil
}
func GetNodeLister(client clientset.Interface, stopChannel <-chan struct{}) corelisters.NodeLister {
if stopChannel == nil {
return nil
}
listWatcher := cache.NewListWatchFromClient(client.Core().RESTClient(), "nodes", v1.NamespaceAll, fields.Everything())
store := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
nodeLister := corelisters.NewNodeLister(store)
reflector := cache.NewReflector(listWatcher, &v1.Node{}, store, time.Hour)
go reflector.Run(stopChannel)
// Sleep briefly (duration chosen arbitrarily) so that the initial listing can complete
time.Sleep(100 * time.Millisecond)
return nodeLister
}
// IsReady checks if the descheduler could run against given node.
func IsReady(node *v1.Node) bool {
for i := range node.Status.Conditions {
@@ -99,31 +77,28 @@ func IsReady(node *v1.Node) bool {
// - NodeOutOfDisk condition status is ConditionFalse,
// - NodeNetworkUnavailable condition status is ConditionFalse.
if cond.Type == v1.NodeReady && cond.Status != v1.ConditionTrue {
glog.V(1).Infof("Ignoring node %v with %v condition status %v", node.Name, cond.Type, cond.Status)
klog.V(1).Infof("Ignoring node %v with %v condition status %v", node.Name, cond.Type, cond.Status)
return false
} /*else if cond.Type == v1.NodeOutOfDisk && cond.Status != v1.ConditionFalse {
glog.V(4).Infof("Ignoring node %v with %v condition status %v", node.Name, cond.Type, cond.Status)
klog.V(4).Infof("Ignoring node %v with %v condition status %v", node.Name, cond.Type, cond.Status)
return false
} else if cond.Type == v1.NodeNetworkUnavailable && cond.Status != v1.ConditionFalse {
glog.V(4).Infof("Ignoring node %v with %v condition status %v", node.Name, cond.Type, cond.Status)
klog.V(4).Infof("Ignoring node %v with %v condition status %v", node.Name, cond.Type, cond.Status)
return false
}*/
}
// Ignore nodes that are marked unschedulable
/*if node.Spec.Unschedulable {
glog.V(4).Infof("Ignoring node %v since it is unschedulable", node.Name)
klog.V(4).Infof("Ignoring node %v since it is unschedulable", node.Name)
return false
}*/
return true
}
// IsNodeUschedulable checks if the node is unschedulable. This is a helper function to check only in case of
// IsNodeUnschedulable checks if the node is unschedulable. This is a helper function to check only in case of
// underutilized nodes so that they won't be accounted for.
func IsNodeUschedulable(node *v1.Node) bool {
if node.Spec.Unschedulable {
return true
}
return false
func IsNodeUnschedulable(node *v1.Node) bool {
return node.Spec.Unschedulable
}
// PodFitsAnyNode checks if the given pod fits any of the given nodes, based on
@@ -136,12 +111,9 @@ func PodFitsAnyNode(pod *v1.Pod, nodes []*v1.Node) bool {
if err != nil || !ok {
continue
}
if ok {
if !IsNodeUschedulable(node) {
glog.V(2).Infof("Pod %v can possibly be scheduled on %v", pod.Name, node.Name)
return true
}
return false
if !IsNodeUnschedulable(node) {
klog.V(2).Infof("Pod %v can possibly be scheduled on %v", pod.Name, node.Name)
return true
}
}
return false
@@ -153,15 +125,15 @@ func PodFitsCurrentNode(pod *v1.Pod, node *v1.Node) bool {
ok, err := utils.PodMatchNodeSelector(pod, node)
if err != nil {
glog.Error(err)
klog.Error(err)
return false
}
if !ok {
glog.V(1).Infof("Pod %v does not fit on node %v", pod.Name, node.Name)
klog.V(2).Infof("Pod %v does not fit on node %v", pod.Name, node.Name)
return false
}
glog.V(3).Infof("Pod %v fits on node %v", pod.Name, node.Name)
klog.V(2).Infof("Pod %v fits on node %v", pod.Name, node.Name)
return true
}
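Taken together, the two fit checks above give a strategy a cheap pre-eviction guard. A hypothetical sketch (the pod, currentNode, and readyNodes values are assumed):

// Evicting only makes sense when the pod violates its current node but
// could still be scheduled on some other ready, schedulable node.
if !PodFitsCurrentNode(pod, currentNode) && PodFitsAnyNode(pod, readyNodes) {
	// A strategy such as RemovePodsViolatingNodeAffinity may evict the pod here.
}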

View File

@@ -17,64 +17,71 @@ limitations under the License.
package node
import (
"context"
"testing"
"github.com/kubernetes-incubator/descheduler/test"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes/fake"
"sigs.k8s.io/descheduler/test"
)
func TestReadyNodes(t *testing.T) {
node1 := test.BuildTestNode("node1", 1000, 2000, 9)
node1.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeOutOfDisk, Status: v1.ConditionTrue}}
node2 := test.BuildTestNode("node2", 1000, 2000, 9)
node3 := test.BuildTestNode("node3", 1000, 2000, 9)
node3.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeMemoryPressure, Status: v1.ConditionTrue}}
node4 := test.BuildTestNode("node4", 1000, 2000, 9)
node4.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeNetworkUnavailable, Status: v1.ConditionTrue}}
node5 := test.BuildTestNode("node5", 1000, 2000, 9)
node5.Spec.Unschedulable = true
node6 := test.BuildTestNode("node6", 1000, 2000, 9)
node6.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeReady, Status: v1.ConditionFalse}}
node1 := test.BuildTestNode("node2", 1000, 2000, 9, nil)
node2 := test.BuildTestNode("node3", 1000, 2000, 9, nil)
node2.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeMemoryPressure, Status: v1.ConditionTrue}}
node3 := test.BuildTestNode("node4", 1000, 2000, 9, nil)
node3.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeNetworkUnavailable, Status: v1.ConditionTrue}}
node4 := test.BuildTestNode("node5", 1000, 2000, 9, nil)
node4.Spec.Unschedulable = true
node5 := test.BuildTestNode("node6", 1000, 2000, 9, nil)
node5.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeReady, Status: v1.ConditionFalse}}
if !IsReady(node1) {
t.Errorf("Expected %v to be ready", node1.Name)
}
if !IsReady(node2) {
t.Errorf("Expected %v to be ready", node2.Name)
}
if !IsReady(node3) {
if !IsReady(node2) {
t.Errorf("Expected %v to be ready", node3.Name)
}
if !IsReady(node4) {
if !IsReady(node3) {
t.Errorf("Expected %v to be ready", node4.Name)
}
if !IsReady(node5) {
if !IsReady(node4) {
t.Errorf("Expected %v to be ready", node5.Name)
}
if IsReady(node6) {
t.Errorf("Expected %v to be not ready", node6.Name)
if IsReady(node5) {
t.Errorf("Expected %v to be not ready", node5.Name)
}
}
func TestReadyNodesWithNodeSelector(t *testing.T) {
node1 := test.BuildTestNode("node1", 1000, 2000, 9)
ctx := context.Background()
node1 := test.BuildTestNode("node1", 1000, 2000, 9, nil)
node1.Labels = map[string]string{"type": "compute"}
node2 := test.BuildTestNode("node2", 1000, 2000, 9)
node2 := test.BuildTestNode("node2", 1000, 2000, 9, nil)
node2.Labels = map[string]string{"type": "infra"}
fakeClient := fake.NewSimpleClientset(node1, node2)
nodeSelector := "type=compute"
nodes, _ := ReadyNodes(fakeClient, nodeSelector, nil)
sharedInformerFactory := informers.NewSharedInformerFactory(fakeClient, 0)
nodeInformer := sharedInformerFactory.Core().V1().Nodes()
stopChannel := make(chan struct{}, 0)
sharedInformerFactory.Start(stopChannel)
sharedInformerFactory.WaitForCacheSync(stopChannel)
defer close(stopChannel)
nodes, _ := ReadyNodes(ctx, fakeClient, nodeInformer, nodeSelector, nil)
if nodes[0].Name != "node1" {
t.Errorf("Expected node1, got %s", nodes[0].Name)
}
}
func TestIsNodeUschedulable(t *testing.T) {
func TestIsNodeUnschedulable(t *testing.T) {
tests := []struct {
description string
node *v1.Node
@@ -96,7 +103,7 @@ func TestIsNodeUschedulable(t *testing.T) {
},
}
for _, test := range tests {
actualUnSchedulable := IsNodeUschedulable(test.node)
actualUnSchedulable := IsNodeUnschedulable(test.node)
if actualUnSchedulable != test.IsUnSchedulable {
t.Errorf("Test %#v failed", test.description)
}
@@ -247,6 +254,49 @@ func TestPodFitsAnyNode(t *testing.T) {
},
success: true,
},
{
description: "Pod expected to fit one of the nodes (error node first)",
pod: &v1.Pod{
Spec: v1.PodSpec{
Affinity: &v1.Affinity{
NodeAffinity: &v1.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
NodeSelectorTerms: []v1.NodeSelectorTerm{
{
MatchExpressions: []v1.NodeSelectorRequirement{
{
Key: nodeLabelKey,
Operator: "In",
Values: []string{
nodeLabelValue,
},
},
},
},
},
},
},
},
},
},
nodes: []*v1.Node{
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
nodeLabelKey: "no",
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
nodeLabelKey: nodeLabelValue,
},
},
},
},
success: true,
},
{
description: "Pod expected to fit none of the nodes",
pod: &v1.Pod{

View File

@@ -17,73 +17,27 @@ limitations under the License.
package pod
import (
"k8s.io/api/core/v1"
"context"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
clientset "k8s.io/client-go/kubernetes"
api "k8s.io/kubernetes/pkg/apis/core"
"k8s.io/kubernetes/pkg/apis/core/v1/helper/qos"
"k8s.io/kubernetes/pkg/kubelet/types"
"sigs.k8s.io/descheduler/pkg/utils"
"sort"
)
// checkLatencySensitiveResourcesForAContainer checks if there are any latency sensitive resources like GPUs.
func checkLatencySensitiveResourcesForAContainer(rl v1.ResourceList) bool {
if rl == nil {
return false
}
for rName := range rl {
if rName == v1.ResourceNvidiaGPU {
return true
}
// TODO: Add support for other high value resources like hugepages etc. once kube is rebased to 1.8.
}
return false
}
// IsLatencySensitivePod checks if a pod consumes high value devices like GPUs, hugepages or when cpu pinning enabled.
func IsLatencySensitivePod(pod *v1.Pod) bool {
for _, container := range pod.Spec.Containers {
resourceList := container.Resources.Requests
if checkLatencySensitiveResourcesForAContainer(resourceList) {
return true
}
}
return false
}
// IsEvictable checks if a pod is evictable or not.
func IsEvictable(pod *v1.Pod) bool {
ownerRefList := OwnerRef(pod)
if IsMirrorPod(pod) || IsPodWithLocalStorage(pod) || len(ownerRefList) == 0 || IsDaemonsetPod(ownerRefList) || IsCriticalPod(pod) {
return false
}
return true
}
// ListEvictablePodsOnNode returns the list of evictable pods on node.
func ListEvictablePodsOnNode(client clientset.Interface, node *v1.Node) ([]*v1.Pod, error) {
pods, err := ListPodsOnANode(client, node)
if err != nil {
return []*v1.Pod{}, err
}
evictablePods := make([]*v1.Pod, 0)
for _, pod := range pods {
if !IsEvictable(pod) {
continue
} else {
evictablePods = append(evictablePods, pod)
}
}
return evictablePods, nil
}
func ListPodsOnANode(client clientset.Interface, node *v1.Node) ([]*v1.Pod, error) {
fieldSelector, err := fields.ParseSelector("spec.nodeName=" + node.Name + ",status.phase!=" + string(api.PodSucceeded) + ",status.phase!=" + string(api.PodFailed))
// ListPodsOnANode lists all of the pods on a node.
// It also accepts an optional "filter" function which can be used to further limit the pods that are returned.
// (Usually this is podEvictor.IsEvictable, in order to only list the evictable pods on a node, but can
// be used by strategies to extend IsEvictable if there are further restrictions, such as with NodeAffinity).
// The filter function should return true if the pod should be returned from ListPodsOnANode
func ListPodsOnANode(ctx context.Context, client clientset.Interface, node *v1.Node, filter func(pod *v1.Pod) bool) ([]*v1.Pod, error) {
fieldSelector, err := fields.ParseSelector("spec.nodeName=" + node.Name + ",status.phase!=" + string(v1.PodSucceeded) + ",status.phase!=" + string(v1.PodFailed))
if err != nil {
return []*v1.Pod{}, err
}
podList, err := client.CoreV1().Pods(v1.NamespaceAll).List(
podList, err := client.CoreV1().Pods(v1.NamespaceAll).List(ctx,
metav1.ListOptions{FieldSelector: fieldSelector.String()})
if err != nil {
return []*v1.Pod{}, err
@@ -91,53 +45,51 @@ func ListPodsOnANode(client clientset.Interface, node *v1.Node) ([]*v1.Pod, erro
pods := make([]*v1.Pod, 0)
for i := range podList.Items {
if filter != nil && !filter(&podList.Items[i]) {
continue
}
pods = append(pods, &podList.Items[i])
}
return pods, nil
}
func IsCriticalPod(pod *v1.Pod) bool {
return types.IsCriticalPod(pod)
}
func IsBestEffortPod(pod *v1.Pod) bool {
return qos.GetPodQOS(pod) == v1.PodQOSBestEffort
}
func IsBurstablePod(pod *v1.Pod) bool {
return qos.GetPodQOS(pod) == v1.PodQOSBurstable
}
func IsGuaranteedPod(pod *v1.Pod) bool {
return qos.GetPodQOS(pod) == v1.PodQOSGuaranteed
}
func IsDaemonsetPod(ownerRefList []metav1.OwnerReference) bool {
for _, ownerRef := range ownerRefList {
if ownerRef.Kind == "DaemonSet" {
return true
}
}
return false
}
// IsMirrorPod checks whether the pod is a mirror pod.
func IsMirrorPod(pod *v1.Pod) bool {
_, found := pod.ObjectMeta.Annotations[types.ConfigMirrorAnnotationKey]
return found
}
func IsPodWithLocalStorage(pod *v1.Pod) bool {
for _, volume := range pod.Spec.Volumes {
if volume.HostPath != nil || volume.EmptyDir != nil {
return true
}
}
return false
}
// OwnerRef returns the ownerRefList for the pod.
func OwnerRef(pod *v1.Pod) []metav1.OwnerReference {
return pod.ObjectMeta.GetOwnerReferences()
}
func IsBestEffortPod(pod *v1.Pod) bool {
return utils.GetPodQOS(pod) == v1.PodQOSBestEffort
}
func IsBurstablePod(pod *v1.Pod) bool {
return utils.GetPodQOS(pod) == v1.PodQOSBurstable
}
func IsGuaranteedPod(pod *v1.Pod) bool {
return utils.GetPodQOS(pod) == v1.PodQOSGuaranteed
}
// SortPodsBasedOnPriorityLowToHigh sorts pods based on their priorities from low to high.
// If pods have same priorities, they will be sorted by QoS in the following order:
// BestEffort, Burstable, Guaranteed
func SortPodsBasedOnPriorityLowToHigh(pods []*v1.Pod) {
sort.Slice(pods, func(i, j int) bool {
if pods[i].Spec.Priority == nil && pods[j].Spec.Priority != nil {
return true
}
if pods[j].Spec.Priority == nil && pods[i].Spec.Priority != nil {
return false
}
if (pods[j].Spec.Priority == nil && pods[i].Spec.Priority == nil) || (*pods[i].Spec.Priority == *pods[j].Spec.Priority) {
if IsBestEffortPod(pods[i]) {
return true
}
if IsBurstablePod(pods[i]) && IsGuaranteedPod(pods[j]) {
return true
}
return false
}
return *pods[i].Spec.Priority < *pods[j].Spec.Priority
})
}
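Putting the two helpers above together, a strategy typically lists the evictable pods on a node and then evicts the lowest-value ones first. A minimal, hypothetical sketch (the podutil import alias, podEvictor, client, node, and ctx are assumed):

pods, err := podutil.ListPodsOnANode(ctx, client, node, podEvictor.IsEvictable)
if err != nil {
	return err
}
// Lowest priority first; equal priorities break ties by QoS class in the
// order BestEffort, Burstable, Guaranteed.
podutil.SortPodsBasedOnPriorityLowToHigh(pods)
for _, pod := range pods {
	if _, err := podEvictor.EvictPod(ctx, pod, node, "ExampleStrategy"); err != nil {
		break // per-node eviction cap reached
	}
}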

View File

@@ -17,68 +17,99 @@ limitations under the License.
package pod
import (
"context"
"fmt"
"reflect"
"strings"
"testing"
"github.com/kubernetes-incubator/descheduler/test"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
"sigs.k8s.io/descheduler/test"
)
func TestPodTypes(t *testing.T) {
n1 := test.BuildTestNode("node1", 1000, 2000, 9)
p1 := test.BuildTestPod("p1", 400, 0, n1.Name)
var (
lowPriority = int32(0)
highPriority = int32(10000)
)
// These won't be evicted.
p2 := test.BuildTestPod("p2", 400, 0, n1.Name)
p3 := test.BuildTestPod("p3", 400, 0, n1.Name)
p4 := test.BuildTestPod("p4", 400, 0, n1.Name)
p5 := test.BuildTestPod("p5", 400, 0, n1.Name)
p6 := test.BuildTestPod("p6", 400, 0, n1.Name)
p6.Spec.Containers[0].Resources.Requests[v1.ResourceNvidiaGPU] = *resource.NewMilliQuantity(3, resource.DecimalSI)
p6.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p1.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
// The following 4 pods won't get evicted.
// A daemonset.
//p2.Annotations = test.GetDaemonSetAnnotation()
p2.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
// A pod with local storage.
p3.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p3.Spec.Volumes = []v1.Volume{
func TestListPodsOnANode(t *testing.T) {
testCases := []struct {
name string
pods map[string][]v1.Pod
node *v1.Node
expectedPodCount int
}{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
name: "test listing pods on a node",
pods: map[string][]v1.Pod{
"n1": {
*test.BuildTestPod("pod1", 100, 0, "n1", nil),
*test.BuildTestPod("pod2", 100, 0, "n1", nil),
},
"n2": {*test.BuildTestPod("pod3", 100, 0, "n2", nil)},
},
node: test.BuildTestNode("n1", 2000, 3000, 10, nil),
expectedPodCount: 2,
},
}
// A Mirror Pod.
p4.Annotations = test.GetMirrorPodAnnotation()
// A Critical Pod.
p5.Namespace = "kube-system"
p5.Annotations = test.GetCriticalPodAnnotation()
if !IsMirrorPod(p4) {
t.Errorf("Expected p4 to be a mirror pod.")
for _, testCase := range testCases {
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
list := action.(core.ListAction)
fieldString := list.GetListRestrictions().Fields.String()
if strings.Contains(fieldString, "n1") {
return true, &v1.PodList{Items: testCase.pods["n1"]}, nil
} else if strings.Contains(fieldString, "n2") {
return true, &v1.PodList{Items: testCase.pods["n2"]}, nil
}
return true, nil, fmt.Errorf("Failed to list: %v", list)
})
pods, _ := ListPodsOnANode(context.TODO(), fakeClient, testCase.node, nil)
if len(pods) != testCase.expectedPodCount {
t.Errorf("expected %v pods on node %v, got %+v", testCase.expectedPodCount, testCase.node.Name, len(pods))
}
}
}
func TestSortPodsBasedOnPriorityLowToHigh(t *testing.T) {
n1 := test.BuildTestNode("n1", 4000, 3000, 9, nil)
p1 := test.BuildTestPod("p1", 400, 0, n1.Name, func(pod *v1.Pod) {
test.SetPodPriority(pod, lowPriority)
})
// BestEffort
p2 := test.BuildTestPod("p2", 400, 0, n1.Name, func(pod *v1.Pod) {
test.SetPodPriority(pod, highPriority)
test.MakeBestEffortPod(pod)
})
// Burstable
p3 := test.BuildTestPod("p3", 400, 0, n1.Name, func(pod *v1.Pod) {
test.SetPodPriority(pod, highPriority)
test.MakeBurstablePod(pod)
})
// Guaranteed
p4 := test.BuildTestPod("p4", 400, 100, n1.Name, func(pod *v1.Pod) {
test.SetPodPriority(pod, highPriority)
test.MakeGuaranteedPod(pod)
})
// Best effort with nil priorities.
p5 := test.BuildTestPod("p5", 400, 100, n1.Name, test.MakeBestEffortPod)
p5.Spec.Priority = nil
p6 := test.BuildTestPod("p6", 400, 100, n1.Name, test.MakeGuaranteedPod)
p6.Spec.Priority = nil
podList := []*v1.Pod{p4, p3, p2, p1, p6, p5}
SortPodsBasedOnPriorityLowToHigh(podList)
if !reflect.DeepEqual(podList[len(podList)-1], p4) {
t.Errorf("Expected last pod in sorted list to be %v which of highest priority and guaranteed but got %v", p4, podList[len(podList)-1])
}
if !IsCriticalPod(p5) {
t.Errorf("Expected p5 to be a critical pod.")
}
if !IsPodWithLocalStorage(p3) {
t.Errorf("Expected p3 to be a pod with local storage.")
}
ownerRefList := OwnerRef(p2)
if !IsDaemonsetPod(ownerRefList) {
t.Errorf("Expected p2 to be a daemonset pod.")
}
ownerRefList = OwnerRef(p1)
if IsDaemonsetPod(ownerRefList) || IsPodWithLocalStorage(p1) || IsCriticalPod(p1) || IsMirrorPod(p1) {
t.Errorf("Expected p1 to be a normal pod.")
}
if !IsLatencySensitivePod(p6) {
t.Errorf("Expected p6 to be latency sensitive pod")
}
}

View File

@@ -21,17 +21,16 @@ import (
"io/ioutil"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/klog/v2"
"github.com/golang/glog"
"github.com/kubernetes-incubator/descheduler/pkg/api"
_ "github.com/kubernetes-incubator/descheduler/pkg/api/install"
"github.com/kubernetes-incubator/descheduler/pkg/api/v1alpha1"
"github.com/kubernetes-incubator/descheduler/pkg/descheduler/scheme"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/api/v1alpha1"
"sigs.k8s.io/descheduler/pkg/descheduler/scheme"
)
func LoadPolicyConfig(policyConfigFile string) (*api.DeschedulerPolicy, error) {
if policyConfigFile == "" {
glog.V(1).Infof("policy config file not specified")
klog.V(1).Infof("policy config file not specified")
return nil, nil
}

View File

@@ -17,17 +17,11 @@ limitations under the License.
package scheme
import (
"os"
"k8s.io/apimachinery/pkg/apimachinery/announced"
"k8s.io/apimachinery/pkg/apimachinery/registered"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"
)
var (
GroupFactoryRegistry = make(announced.APIGroupFactoryRegistry)
Registry = registered.NewOrDie(os.Getenv("DESCHEDULER_API_VERSIONS"))
Scheme = runtime.NewScheme()
Codecs = serializer.NewCodecFactory(Scheme)
Scheme = runtime.NewScheme()
Codecs = serializer.NewCodecFactory(Scheme)
)

View File

@@ -17,82 +17,111 @@ limitations under the License.
package strategies
import (
"context"
"reflect"
"sort"
"strings"
"github.com/golang/glog"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/sets"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/klog/v2"
"github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
"github.com/kubernetes-incubator/descheduler/pkg/api"
"github.com/kubernetes-incubator/descheduler/pkg/descheduler/evictions"
podutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/pod"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
)
//type creator string
type DuplicatePodsMap map[string][]*v1.Pod
// RemoveDuplicatePods removes duplicate pods on a node. This strategy evicts all duplicate pods on a node.
// A pod is said to be a duplicate of another if both of them have the same creator and kind and are within the same
// namespace. As of now, this strategy won't evict daemonsets, mirror pods, critical pods, and pods with local storage.
func RemoveDuplicatePods(ds *options.DeschedulerServer, strategy api.DeschedulerStrategy, policyGroupVersion string, nodes []*v1.Node, nodepodCount nodePodEvictedCount) {
if !strategy.Enabled {
return
}
deleteDuplicatePods(ds.Client, policyGroupVersion, nodes, ds.DryRun, nodepodCount, ds.MaxNoOfPodsToEvictPerNode)
}
// deleteDuplicatePods evicts the pod from node and returns the count of evicted pods.
func deleteDuplicatePods(client clientset.Interface, policyGroupVersion string, nodes []*v1.Node, dryRun bool, nodepodCount nodePodEvictedCount, maxPodsToEvict int) int {
podsEvicted := 0
// namespace, and have at least one container with the same image.
// As of now, this strategy won't evict daemonsets, mirror pods, critical pods, and pods with local storage.
func RemoveDuplicatePods(
ctx context.Context,
client clientset.Interface,
strategy api.DeschedulerStrategy,
nodes []*v1.Node,
podEvictor *evictions.PodEvictor,
) {
for _, node := range nodes {
glog.V(1).Infof("Processing node: %#v", node.Name)
dpm := ListDuplicatePodsOnANode(client, node)
for creator, pods := range dpm {
if len(pods) > 1 {
glog.V(1).Infof("%#v", creator)
// Start at i = 1 so that the first pod is not evicted
for i := 1; i < len(pods); i++ {
if maxPodsToEvict > 0 && nodepodCount[node]+1 > maxPodsToEvict {
break
}
success, err := evictions.EvictPod(client, pods[i], policyGroupVersion, dryRun)
if !success {
glog.Infof("Error when evicting pod: %#v (%#v)", pods[i].Name, err)
} else {
nodepodCount[node]++
glog.V(1).Infof("Evicted pod: %#v (%#v)", pods[i].Name, err)
}
klog.V(1).Infof("Processing node: %#v", node.Name)
pods, err := podutil.ListPodsOnANode(ctx, client, node, podEvictor.IsEvictable)
if err != nil {
klog.Errorf("error listing evictable pods on node %s: %+v", node.Name, err)
continue
}
duplicatePods := make([]*v1.Pod, 0, len(pods))
// Each pod has a list of owners and a list of containers, and each container has 1 image spec.
// For each pod, we go through all the OwnerRef/Image mappings and represent them as a "key" string.
// All of those mappings together make a list of "key" strings that essentially represents that pod's uniqueness.
// This list of keys representing a single pod is then sorted alphabetically.
// If any other pod has a list that matches that pod's list, those pods are undeniably duplicates for the following reasons:
// - The 2 pods have the exact same ownerrefs
// - The 2 pods have the exact same container images
//
// duplicateKeysMap maps the first Namespace/Kind/Name/Image in a pod's list to a 2D-slice of all the other lists where that is the first key
// (Since we sort each pod's list, we only need to key the map on the first entry in each list. Any pod that doesn't have
// the same first entry is clearly not a duplicate. This makes lookup quick and minimizes storage needed).
// If any of the existing lists for that first key matches the current pod's list, the current pod is a duplicate.
// If not, then we add this pod's list to the list of lists for that key.
duplicateKeysMap := map[string][][]string{}
for _, pod := range pods {
ownerRefList := podutil.OwnerRef(pod)
if hasExcludedOwnerRefKind(ownerRefList, strategy) {
continue
}
podContainerKeys := make([]string, 0, len(ownerRefList)*len(pod.Spec.Containers))
for _, ownerRef := range ownerRefList {
for _, container := range pod.Spec.Containers {
// Namespace/Kind/Name should be unique for the cluster.
// We also consider the image, as 2 pods could have the same owner but serve different purposes.
// So any non-unique Namespace/Kind/Name/Image pattern is a duplicate pod.
s := strings.Join([]string{pod.ObjectMeta.Namespace, ownerRef.Kind, ownerRef.Name, container.Image}, "/")
podContainerKeys = append(podContainerKeys, s)
}
}
sort.Strings(podContainerKeys)
// If there have been any other pods with the same first "key", look through all the lists to see if any match
if existing, ok := duplicateKeysMap[podContainerKeys[0]]; ok {
matched := false
for _, keys := range existing {
if reflect.DeepEqual(keys, podContainerKeys) {
matched = true
duplicatePods = append(duplicatePods, pod)
break
}
}
if !matched {
// Found no matches, add this list of keys to the list of lists that have the same first key
duplicateKeysMap[podContainerKeys[0]] = append(duplicateKeysMap[podContainerKeys[0]], podContainerKeys)
}
} else {
// This is the first pod we've seen that has this first "key" entry
duplicateKeysMap[podContainerKeys[0]] = [][]string{podContainerKeys}
}
}
podsEvicted += nodepodCount[node]
}
return podsEvicted
}
// ListDuplicatePodsOnANode lists duplicate pods on a given node.
func ListDuplicatePodsOnANode(client clientset.Interface, node *v1.Node) DuplicatePodsMap {
pods, err := podutil.ListEvictablePodsOnNode(client, node)
if err != nil {
return nil
}
return FindDuplicatePods(pods)
}
// FindDuplicatePods takes a list of pods and returns a duplicatePodsMap.
func FindDuplicatePods(pods []*v1.Pod) DuplicatePodsMap {
dpm := DuplicatePodsMap{}
for _, pod := range pods {
// Ignoring the error here because ListDuplicatePodsOnANode calls ListEvictablePodsOnNode,
// which checks for the error.
ownerRefList := podutil.OwnerRef(pod)
for _, ownerRef := range ownerRefList {
// ownerRef doesn't need namespace since the assumption is owner needs to be in the same namespace.
s := strings.Join([]string{ownerRef.Kind, ownerRef.Name}, "/")
dpm[s] = append(dpm[s], pod)
for _, pod := range duplicatePods {
if _, err := podEvictor.EvictPod(ctx, pod, node, "RemoveDuplicatePods"); err != nil {
klog.Errorf("Error evicting pod: (%#v)", err)
break
}
}
}
return dpm
}
func hasExcludedOwnerRefKind(ownerRefs []metav1.OwnerReference, strategy api.DeschedulerStrategy) bool {
if strategy.Params == nil || strategy.Params.RemoveDuplicates == nil {
return false
}
exclude := sets.NewString(strategy.Params.RemoveDuplicates.ExcludeOwnerKinds...)
for _, owner := range ownerRefs {
if exclude.Has(owner.Kind) {
return true
}
}
return false
}
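To make the duplicate-key scheme above concrete: a hypothetical pod in namespace "dev", owned by ReplicaSet "web-5d4" and running containers with images "nginx:1.19" and "envoy:1.14", would produce this sorted key list:

podContainerKeys := []string{
	"dev/ReplicaSet/web-5d4/envoy:1.14",
	"dev/ReplicaSet/web-5d4/nginx:1.19",
}
// Any other pod whose sorted key list is reflect.DeepEqual to this one has
// identical owners and identical container images, so it is treated as a
// duplicate and becomes a candidate for eviction.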

View File

@@ -17,40 +17,70 @@ limitations under the License.
package strategies
import (
"context"
"testing"
"github.com/kubernetes-incubator/descheduler/test"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
"sigs.k8s.io/descheduler/pkg/utils"
"sigs.k8s.io/descheduler/test"
)
//TODO:@ravisantoshgudimetla This could be made table driven.
func TestFindDuplicatePods(t *testing.T) {
node := test.BuildTestNode("n1", 2000, 3000, 10)
p1 := test.BuildTestPod("p1", 100, 0, node.Name)
p2 := test.BuildTestPod("p2", 100, 0, node.Name)
p3 := test.BuildTestPod("p3", 100, 0, node.Name)
p4 := test.BuildTestPod("p4", 100, 0, node.Name)
p5 := test.BuildTestPod("p5", 100, 0, node.Name)
p6 := test.BuildTestPod("p6", 100, 0, node.Name)
p7 := test.BuildTestPod("p7", 100, 0, node.Name)
p8 := test.BuildTestPod("p8", 100, 0, node.Name)
p9 := test.BuildTestPod("p9", 100, 0, node.Name)
ctx := context.Background()
// first setup pods
node := test.BuildTestNode("n1", 2000, 3000, 10, nil)
p1 := test.BuildTestPod("p1", 100, 0, node.Name, nil)
p1.Namespace = "dev"
p2 := test.BuildTestPod("p2", 100, 0, node.Name, nil)
p2.Namespace = "dev"
p3 := test.BuildTestPod("p3", 100, 0, node.Name, nil)
p3.Namespace = "dev"
p4 := test.BuildTestPod("p4", 100, 0, node.Name, nil)
p5 := test.BuildTestPod("p5", 100, 0, node.Name, nil)
p6 := test.BuildTestPod("p6", 100, 0, node.Name, nil)
p7 := test.BuildTestPod("p7", 100, 0, node.Name, nil)
p7.Namespace = "kube-system"
p8 := test.BuildTestPod("p8", 100, 0, node.Name, nil)
p8.Namespace = "test"
p9 := test.BuildTestPod("p9", 100, 0, node.Name, nil)
p9.Namespace = "test"
p10 := test.BuildTestPod("p10", 100, 0, node.Name, nil)
p10.Namespace = "test"
p11 := test.BuildTestPod("p11", 100, 0, node.Name, nil)
p11.Namespace = "different-images"
p12 := test.BuildTestPod("p12", 100, 0, node.Name, nil)
p12.Namespace = "different-images"
p13 := test.BuildTestPod("p13", 100, 0, node.Name, nil)
p13.Namespace = "different-images"
p14 := test.BuildTestPod("p14", 100, 0, node.Name, nil)
p14.Namespace = "different-images"
// All the following pods except for one will be evicted.
p1.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p2.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p3.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p8.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p9.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
// ### Evictable Pods ###
// The following 4 pods won't get evicted.
// A daemonset.
// Three Pods in the "default" Namespace, bound to same ReplicaSet. 2 should be evicted.
ownerRef1 := test.GetReplicaSetOwnerRefList()
p1.ObjectMeta.OwnerReferences = ownerRef1
p2.ObjectMeta.OwnerReferences = ownerRef1
p3.ObjectMeta.OwnerReferences = ownerRef1
// Three Pods in the "test" Namespace, bound to same ReplicaSet. 2 should be evicted.
ownerRef2 := test.GetReplicaSetOwnerRefList()
p8.ObjectMeta.OwnerReferences = ownerRef2
p9.ObjectMeta.OwnerReferences = ownerRef2
p10.ObjectMeta.OwnerReferences = ownerRef2
// ### Non-evictable Pods ###
// A DaemonSet.
p4.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
// A pod with local storage.
// A Pod with local storage.
p5.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p5.Spec.Volumes = []v1.Volume{
{
@@ -62,24 +92,121 @@ func TestFindDuplicatePods(t *testing.T) {
},
},
}
// A Mirror Pod.
p6.Annotations = test.GetMirrorPodAnnotation()
// A Critical Pod.
p7.Namespace = "kube-system"
p7.Annotations = test.GetCriticalPodAnnotation()
expectedEvictedPodCount := 2
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
return true, &v1.PodList{Items: []v1.Pod{*p1, *p2, *p3, *p4, *p5, *p6, *p7, *p8, *p9}}, nil
priority := utils.SystemCriticalPriority
p7.Spec.Priority = &priority
// Same owners, but different images
p11.Spec.Containers[0].Image = "foo"
p11.ObjectMeta.OwnerReferences = ownerRef1
p12.Spec.Containers[0].Image = "bar"
p12.ObjectMeta.OwnerReferences = ownerRef1
// Multiple containers
p13.ObjectMeta.OwnerReferences = ownerRef1
p13.Spec.Containers = append(p13.Spec.Containers, v1.Container{
Name: "foo",
Image: "foo",
})
fakeClient.Fake.AddReactor("get", "nodes", func(action core.Action) (bool, runtime.Object, error) {
return true, node, nil
})
npe := nodePodEvictedCount{}
npe[node] = 0
podsEvicted := deleteDuplicatePods(fakeClient, "v1", []*v1.Node{node}, false, npe, 2)
if podsEvicted != expectedEvictedPodCount {
t.Errorf("Unexpected no of pods evicted")
testCases := []struct {
description string
maxPodsToEvictPerNode int
pods []v1.Pod
expectedEvictedPodCount int
strategy api.DeschedulerStrategy
}{
{
description: "Three pods in the `dev` Namespace, bound to same ReplicaSet. 2 should be evicted.",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p1, *p2, *p3},
expectedEvictedPodCount: 2,
strategy: api.DeschedulerStrategy{},
},
{
description: "Three pods in the `dev` Namespace, bound to same ReplicaSet, but ReplicaSet kind is excluded. 0 should be evicted.",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p1, *p2, *p3},
expectedEvictedPodCount: 0,
strategy: api.DeschedulerStrategy{Params: &api.StrategyParameters{RemoveDuplicates: &api.RemoveDuplicates{ExcludeOwnerKinds: []string{"ReplicaSet"}}}},
},
{
description: "Three Pods in the `test` Namespace, bound to same ReplicaSet. 2 should be evicted.",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p8, *p9, *p10},
expectedEvictedPodCount: 2,
strategy: api.DeschedulerStrategy{},
},
{
description: "Three Pods in the `dev` Namespace, three Pods in the `test` Namespace. Bound to ReplicaSet with same name. 4 should be evicted.",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p1, *p2, *p3, *p8, *p9, *p10},
expectedEvictedPodCount: 4,
strategy: api.DeschedulerStrategy{},
},
{
description: "Pods are: part of DaemonSet, with local storage, mirror pod annotation, critical pod annotation - none should be evicted.",
maxPodsToEvictPerNode: 2,
pods: []v1.Pod{*p4, *p5, *p6, *p7},
expectedEvictedPodCount: 0,
strategy: api.DeschedulerStrategy{},
},
{
description: "Test all Pods: 4 should be evicted.",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p1, *p2, *p3, *p4, *p5, *p6, *p7, *p8, *p9, *p10},
expectedEvictedPodCount: 4,
strategy: api.DeschedulerStrategy{},
},
{
description: "Pods with the same owner but different images should not be evicted",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p11, *p12},
expectedEvictedPodCount: 0,
strategy: api.DeschedulerStrategy{},
},
{
description: "Pods with multiple containers should not match themselves",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p13},
expectedEvictedPodCount: 0,
strategy: api.DeschedulerStrategy{},
},
{
description: "Pods with matching ownerrefs and at not all matching image should not trigger an eviction",
maxPodsToEvictPerNode: 5,
pods: []v1.Pod{*p11, *p13},
expectedEvictedPodCount: 0,
strategy: api.DeschedulerStrategy{},
},
}
for _, testCase := range testCases {
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
return true, &v1.PodList{Items: testCase.pods}, nil
})
fakeClient.Fake.AddReactor("get", "nodes", func(action core.Action) (bool, runtime.Object, error) {
return true, node, nil
})
podEvictor := evictions.NewPodEvictor(
fakeClient,
"v1",
false,
testCase.maxPodsToEvictPerNode,
[]*v1.Node{node},
false,
)
RemoveDuplicatePods(ctx, fakeClient, testCase.strategy, []*v1.Node{node}, podEvictor)
podsEvicted := podEvictor.TotalEvicted()
if podsEvicted != testCase.expectedEvictedPodCount {
t.Errorf("Test error for description: %s. Expected evicted pods count %v, got %v", testCase.description, testCase.expectedEvictedPodCount, podsEvicted)
}
}
}
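The positional booleans in the evictions.NewPodEvictor call above are easy to misread; the annotated copy below reflects how each value is used in this diff, and the parameter names are inferred, not confirmed signatures:

podEvictor := evictions.NewPodEvictor(
    fakeClient,                     // clientset used to create eviction requests
    "v1",                           // eviction policy group/version
    false,                          // presumably dryRun: false means really evict
    testCase.maxPodsToEvictPerNode, // per-node eviction cap enforced by EvictPod
    []*v1.Node{node},               // nodes whose eviction counts are tracked
    false,                          // presumably evictLocalStoragePods
)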

View File

@@ -17,115 +17,150 @@ limitations under the License.
package strategies
import (
"context"
"fmt"
"sort"
"github.com/golang/glog"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
clientset "k8s.io/client-go/kubernetes"
helper "k8s.io/kubernetes/pkg/api/v1/resource"
"k8s.io/klog/v2"
"github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
"github.com/kubernetes-incubator/descheduler/pkg/api"
"github.com/kubernetes-incubator/descheduler/pkg/descheduler/evictions"
nodeutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/node"
podutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/pod"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
nodeutil "sigs.k8s.io/descheduler/pkg/descheduler/node"
podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
"sigs.k8s.io/descheduler/pkg/utils"
)
// NodeUsageMap stores a node's info, pods on it and its resource usage
type NodeUsageMap struct {
node *v1.Node
usage api.ResourceThresholds
allPods []*v1.Pod
nonRemovablePods []*v1.Pod
bePods []*v1.Pod
bPods []*v1.Pod
gPods []*v1.Pod
node *v1.Node
usage api.ResourceThresholds
allPods []*v1.Pod
}
// NodePodsMap is a set of (node, pods) pairs
type NodePodsMap map[*v1.Node][]*v1.Pod
func LowNodeUtilization(ds *options.DeschedulerServer, strategy api.DeschedulerStrategy, evictionPolicyGroupVersion string, nodes []*v1.Node, nodepodCount nodePodEvictedCount) {
if !strategy.Enabled {
return
}
const (
// MinResourcePercentage is the minimum value of a resource's percentage
MinResourcePercentage = 0
// MaxResourcePercentage is the maximum value of a resource's percentage
MaxResourcePercentage = 100
)
// LowNodeUtilization evicts pods from overutilized nodes to underutilized nodes. Note that CPU/Memory requests are used
// to calculate nodes' utilization and not the actual resource usage.
func LowNodeUtilization(ctx context.Context, client clientset.Interface, strategy api.DeschedulerStrategy, nodes []*v1.Node, podEvictor *evictions.PodEvictor) {
// todo: move to config validation?
// TODO: Maybe create a struct for the strategy as well, so that we don't have to pass along all the params?
if strategy.Params == nil || strategy.Params.NodeResourceUtilizationThresholds == nil {
klog.V(1).Infof("NodeResourceUtilizationThresholds not set")
return
}
thresholds := strategy.Params.NodeResourceUtilizationThresholds.Thresholds
if !validateThresholds(thresholds) {
targetThresholds := strategy.Params.NodeResourceUtilizationThresholds.TargetThresholds
if err := validateStrategyConfig(thresholds, targetThresholds); err != nil {
klog.Errorf("LowNodeUtilization config is not valid: %v", err)
return
}
targetThresholds := strategy.Params.NodeResourceUtilizationThresholds.TargetThresholds
if !validateTargetThresholds(targetThresholds) {
return
// check if Pods/CPU/Mem are set, if not, set them to 100
if _, ok := thresholds[v1.ResourcePods]; !ok {
thresholds[v1.ResourcePods] = MaxResourcePercentage
targetThresholds[v1.ResourcePods] = MaxResourcePercentage
}
if _, ok := thresholds[v1.ResourceCPU]; !ok {
thresholds[v1.ResourceCPU] = MaxResourcePercentage
targetThresholds[v1.ResourceCPU] = MaxResourcePercentage
}
if _, ok := thresholds[v1.ResourceMemory]; !ok {
thresholds[v1.ResourceMemory] = MaxResourcePercentage
targetThresholds[v1.ResourceMemory] = MaxResourcePercentage
}
npm := CreateNodePodsMap(ds.Client, nodes)
npm := createNodePodsMap(ctx, client, nodes)
lowNodes, targetNodes := classifyNodes(npm, thresholds, targetThresholds)
glog.V(1).Infof("Criteria for a node under utilization: CPU: %v, Mem: %v, Pods: %v",
klog.V(1).Infof("Criteria for a node under utilization: CPU: %v, Mem: %v, Pods: %v",
thresholds[v1.ResourceCPU], thresholds[v1.ResourceMemory], thresholds[v1.ResourcePods])
if len(lowNodes) == 0 {
glog.V(1).Infof("No node is underutilized, nothing to do here, you might tune your thersholds further")
klog.V(1).Infof("No node is underutilized, nothing to do here, you might tune your thresholds further")
return
}
glog.V(1).Infof("Total number of underutilized nodes: %v", len(lowNodes))
klog.V(1).Infof("Total number of underutilized nodes: %v", len(lowNodes))
if len(lowNodes) < strategy.Params.NodeResourceUtilizationThresholds.NumberOfNodes {
glog.V(1).Infof("number of nodes underutilized (%v) is less than NumberOfNodes (%v), nothing to do here", len(lowNodes), strategy.Params.NodeResourceUtilizationThresholds.NumberOfNodes)
klog.V(1).Infof("number of nodes underutilized (%v) is less than NumberOfNodes (%v), nothing to do here", len(lowNodes), strategy.Params.NodeResourceUtilizationThresholds.NumberOfNodes)
return
}
if len(lowNodes) == len(nodes) {
glog.V(1).Infof("all nodes are underutilized, nothing to do here")
klog.V(1).Infof("all nodes are underutilized, nothing to do here")
return
}
if len(targetNodes) == 0 {
glog.V(1).Infof("all nodes are under target utilization, nothing to do here")
klog.V(1).Infof("all nodes are under target utilization, nothing to do here")
return
}
glog.V(1).Infof("Criteria for a node above target utilization: CPU: %v, Mem: %v, Pods: %v",
klog.V(1).Infof("Criteria for a node above target utilization: CPU: %v, Mem: %v, Pods: %v",
targetThresholds[v1.ResourceCPU], targetThresholds[v1.ResourceMemory], targetThresholds[v1.ResourcePods])
glog.V(1).Infof("Total number of nodes above target utilization: %v", len(targetNodes))
klog.V(1).Infof("Total number of nodes above target utilization: %v", len(targetNodes))
totalPodsEvicted := evictPodsFromTargetNodes(ds.Client, evictionPolicyGroupVersion, targetNodes, lowNodes, targetThresholds, ds.DryRun, ds.MaxNoOfPodsToEvictPerNode, nodepodCount)
glog.V(1).Infof("Total number of pods evicted: %v", totalPodsEvicted)
evictPodsFromTargetNodes(
ctx,
targetNodes,
lowNodes,
targetThresholds,
podEvictor)
klog.V(1).Infof("Total number of pods evicted: %v", podEvictor.TotalEvicted())
}
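A worked example of the defaulting block above, assuming a policy that only configures cpu: pods and memory are filled in at MaxResourcePercentage, so neither can mark a node as underutilized on its own nor push it above the target:

thresholds := api.ResourceThresholds{v1.ResourceCPU: 30}
targetThresholds := api.ResourceThresholds{v1.ResourceCPU: 50}
// After the three if-blocks run:
//   thresholds       == {cpu: 30, memory: 100, pods: 100}
//   targetThresholds == {cpu: 50, memory: 100, pods: 100}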
func validateThresholds(thresholds api.ResourceThresholds) bool {
if thresholds == nil || len(thresholds) == 0 {
glog.V(1).Infof("no resource threshold is configured")
return false
// validateStrategyConfig checks if the strategy's config is valid
func validateStrategyConfig(thresholds, targetThresholds api.ResourceThresholds) error {
// validate thresholds and targetThresholds config
if err := validateThresholds(thresholds); err != nil {
return fmt.Errorf("thresholds config is not valid: %v", err)
}
for name := range thresholds {
switch name {
case v1.ResourceCPU:
continue
case v1.ResourceMemory:
continue
case v1.ResourcePods:
continue
default:
glog.Errorf("only cpu, memory, or pods thresholds can be specified")
return false
if err := validateThresholds(targetThresholds); err != nil {
return fmt.Errorf("targetThresholds config is not valid: %v", err)
}
// validate if thresholds and targetThresholds have same resources configured
if len(thresholds) != len(targetThresholds) {
return fmt.Errorf("thresholds and targetThresholds configured different resources")
}
for resourceName, value := range thresholds {
if targetValue, ok := targetThresholds[resourceName]; !ok {
return fmt.Errorf("thresholds and targetThresholds configured different resources")
} else if value > targetValue {
return fmt.Errorf("thresholds' %v percentage is greater than targetThresholds'", resourceName)
}
}
return true
return nil
}
// This function could be merged into the one above once we are clear.
func validateTargetThresholds(targetThresholds api.ResourceThresholds) bool {
if targetThresholds == nil {
glog.V(1).Infof("no target resource threshold is configured")
return false
} else if _, ok := targetThresholds[v1.ResourcePods]; !ok {
glog.V(1).Infof("no target resource threshold for pods is configured")
return false
// validateThresholds checks if thresholds have valid resource name and resource percentage configured
func validateThresholds(thresholds api.ResourceThresholds) error {
if thresholds == nil || len(thresholds) == 0 {
return fmt.Errorf("no resource threshold is configured")
}
return true
for name, percent := range thresholds {
switch name {
case v1.ResourceCPU, v1.ResourceMemory, v1.ResourcePods:
if percent < MinResourcePercentage || percent > MaxResourcePercentage {
return fmt.Errorf("%v threshold not in [%v, %v] range", name, MinResourcePercentage, MaxResourcePercentage)
}
default:
return fmt.Errorf("only cpu, memory, or pods thresholds can be specified")
}
}
return nil
}
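A short sketch of how the two validators compose, with illustrative inputs:

err := validateStrategyConfig(
    api.ResourceThresholds{v1.ResourceCPU: 30, v1.ResourceMemory: 120},
    api.ResourceThresholds{v1.ResourceCPU: 50, v1.ResourceMemory: 80},
)
// err: thresholds config is not valid: memory threshold not in [0, 100] range

err = validateStrategyConfig(
    api.ResourceThresholds{v1.ResourceCPU: 60},
    api.ResourceThresholds{v1.ResourceCPU: 50},
)
// err: thresholds' cpu percentage is greater than targetThresholds'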
// classifyNodes classifies the nodes into low-utilization or high-utilization nodes. If a node lies between
@@ -133,32 +168,43 @@ func validateTargetThresholds(targetThresholds api.ResourceThresholds) bool {
func classifyNodes(npm NodePodsMap, thresholds api.ResourceThresholds, targetThresholds api.ResourceThresholds) ([]NodeUsageMap, []NodeUsageMap) {
lowNodes, targetNodes := []NodeUsageMap{}, []NodeUsageMap{}
for node, pods := range npm {
usage, allPods, nonRemovablePods, bePods, bPods, gPods := NodeUtilization(node, pods)
nuMap := NodeUsageMap{node, usage, allPods, nonRemovablePods, bePods, bPods, gPods}
usage := nodeUtilization(node, pods)
nuMap := NodeUsageMap{
node: node,
usage: usage,
allPods: pods,
}
// Check if node is underutilized and if we can schedule pods on it.
if !nodeutil.IsNodeUschedulable(node) && IsNodeWithLowUtilization(usage, thresholds) {
glog.V(2).Infof("Node %#v is under utilized with usage: %#v", node.Name, usage)
if !nodeutil.IsNodeUnschedulable(node) && isNodeWithLowUtilization(usage, thresholds) {
klog.V(2).Infof("Node %#v is under utilized with usage: %#v", node.Name, usage)
lowNodes = append(lowNodes, nuMap)
} else if IsNodeAboveTargetUtilization(usage, targetThresholds) {
glog.V(2).Infof("Node %#v is over utilized with usage: %#v", node.Name, usage)
} else if isNodeAboveTargetUtilization(usage, targetThresholds) {
klog.V(2).Infof("Node %#v is over utilized with usage: %#v", node.Name, usage)
targetNodes = append(targetNodes, nuMap)
} else {
glog.V(2).Infof("Node %#v is appropriately utilized with usage: %#v", node.Name, usage)
klog.V(2).Infof("Node %#v is appropriately utilized with usage: %#v", node.Name, usage)
}
glog.V(2).Infof("allPods:%v, nonRemovablePods:%v, bePods:%v, bPods:%v, gPods:%v", len(allPods), len(nonRemovablePods), len(bePods), len(bPods), len(gPods))
}
return lowNodes, targetNodes
}
func evictPodsFromTargetNodes(client clientset.Interface, evictionPolicyGroupVersion string, targetNodes, lowNodes []NodeUsageMap, targetThresholds api.ResourceThresholds, dryRun bool, maxPodsToEvict int, nodepodCount nodePodEvictedCount) int {
podsEvicted := 0
// evictPodsFromTargetNodes evicts pods based on priority, if all the pods on the node have priority, if not
// evicts them based on QoS as fallback option.
// TODO: @ravig Break this function into smaller functions.
func evictPodsFromTargetNodes(
ctx context.Context,
targetNodes, lowNodes []NodeUsageMap,
targetThresholds api.ResourceThresholds,
podEvictor *evictions.PodEvictor,
) {
SortNodesByUsage(targetNodes)
sortNodesByUsage(targetNodes)
// upper bound on total number of pods/cpu/memory to be moved
var totalPods, totalCpu, totalMem float64
var totalPods, totalCPU, totalMem float64
var taintsOfLowNodes = make(map[string][]v1.Taint, len(lowNodes))
for _, node := range lowNodes {
taintsOfLowNodes[node.node.Name] = node.node.Spec.Taints
nodeCapacity := node.node.Status.Capacity
if len(node.node.Status.Allocatable) > 0 {
nodeCapacity = node.node.Status.Allocatable
@@ -168,82 +214,88 @@ func evictPodsFromTargetNodes(client clientset.Interface, evictionPolicyGroupVer
totalPods += ((float64(podsPercentage) * float64(nodeCapacity.Pods().Value())) / 100)
// totalCPU capacity to be moved
if _, ok := targetThresholds[v1.ResourceCPU]; ok {
cpuPercentage := targetThresholds[v1.ResourceCPU] - node.usage[v1.ResourceCPU]
totalCpu += ((float64(cpuPercentage) * float64(nodeCapacity.Cpu().MilliValue())) / 100)
}
cpuPercentage := targetThresholds[v1.ResourceCPU] - node.usage[v1.ResourceCPU]
totalCPU += ((float64(cpuPercentage) * float64(nodeCapacity.Cpu().MilliValue())) / 100)
// totalMem capacity to be moved
if _, ok := targetThresholds[v1.ResourceMemory]; ok {
memPercentage := targetThresholds[v1.ResourceMemory] - node.usage[v1.ResourceMemory]
totalMem += ((float64(memPercentage) * float64(nodeCapacity.Memory().Value())) / 100)
}
memPercentage := targetThresholds[v1.ResourceMemory] - node.usage[v1.ResourceMemory]
totalMem += ((float64(memPercentage) * float64(nodeCapacity.Memory().Value())) / 100)
}
glog.V(1).Infof("Total capacity to be moved: CPU:%v, Mem:%v, Pods:%v", totalCpu, totalMem, totalPods)
glog.V(1).Infof("********Number of pods evicted from each node:***********")
klog.V(1).Infof("Total capacity to be moved: CPU:%v, Mem:%v, Pods:%v", totalCPU, totalMem, totalPods)
klog.V(1).Infof("********Number of pods evicted from each node:***********")
for _, node := range targetNodes {
nodeCapacity := node.node.Status.Capacity
if len(node.node.Status.Allocatable) > 0 {
nodeCapacity = node.node.Status.Allocatable
}
glog.V(3).Infof("evicting pods from node %#v with usage: %#v", node.node.Name, node.usage)
currentPodsEvicted := nodepodCount[node.node]
klog.V(3).Infof("evicting pods from node %#v with usage: %#v", node.node.Name, node.usage)
// evict best effort pods
evictPods(node.bePods, client, evictionPolicyGroupVersion, targetThresholds, nodeCapacity, node.usage, &totalPods, &totalCpu, &totalMem, &currentPodsEvicted, dryRun, maxPodsToEvict)
// evict burstable pods
evictPods(node.bPods, client, evictionPolicyGroupVersion, targetThresholds, nodeCapacity, node.usage, &totalPods, &totalCpu, &totalMem, &currentPodsEvicted, dryRun, maxPodsToEvict)
// evict guaranteed pods
evictPods(node.gPods, client, evictionPolicyGroupVersion, targetThresholds, nodeCapacity, node.usage, &totalPods, &totalCpu, &totalMem, &currentPodsEvicted, dryRun, maxPodsToEvict)
nodepodCount[node.node] = currentPodsEvicted
podsEvicted = podsEvicted + nodepodCount[node.node]
glog.V(1).Infof("%v pods evicted from node %#v with usage %v", nodepodCount[node.node], node.node.Name, node.usage)
nonRemovablePods, removablePods := classifyPods(node.allPods, podEvictor)
klog.V(2).Infof("allPods:%v, nonRemovablePods:%v, removablePods:%v", len(node.allPods), len(nonRemovablePods), len(removablePods))
if len(removablePods) == 0 {
klog.V(1).Infof("no removable pods on node %#v, try next node", node.node.Name)
continue
}
klog.V(1).Infof("evicting pods based on priority, if they have same priority, they'll be evicted based on QoS tiers")
// sort the evictable Pods based on priority. This also sorts them based on QoS. If there are multiple pods with same priority, they are sorted based on QoS tiers.
podutil.SortPodsBasedOnPriorityLowToHigh(removablePods)
evictPods(ctx, removablePods, targetThresholds, nodeCapacity, node.usage, &totalPods, &totalCPU, &totalMem, taintsOfLowNodes, podEvictor, node.node)
klog.V(1).Infof("%v pods evicted from node %#v with usage %v", podEvictor.NodeEvicted(node.node), node.node.Name, node.usage)
}
return podsEvicted
}
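The "capacity to be moved" totals above are plain percentage arithmetic over each underutilized node; a worked example under assumed numbers:

// One low node: 10 pod slots and 4000m CPU allocatable,
// usage {pods: 20%, cpu: 25%}, targetThresholds {pods: 50%, cpu: 50%}.
totalPods := (50.0 - 20.0) * 10 / 100  // 3 more pods can land on this node
totalCPU := (50.0 - 25.0) * 4000 / 100 // 1000m more CPU can land on this node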
func evictPods(inputPods []*v1.Pod,
client clientset.Interface,
evictionPolicyGroupVersion string,
func evictPods(
ctx context.Context,
inputPods []*v1.Pod,
targetThresholds api.ResourceThresholds,
nodeCapacity v1.ResourceList,
nodeUsage api.ResourceThresholds,
totalPods *float64,
totalCpu *float64,
totalCPU *float64,
totalMem *float64,
podsEvicted *int,
dryRun bool, maxPodsToEvict int) {
if IsNodeAboveTargetUtilization(nodeUsage, targetThresholds) && (*totalPods > 0 || *totalCpu > 0 || *totalMem > 0) {
taintsOfLowNodes map[string][]v1.Taint,
podEvictor *evictions.PodEvictor,
node *v1.Node) {
// stop if node utilization drops below target threshold or any of required capacity (cpu, memory, pods) is moved
if isNodeAboveTargetUtilization(nodeUsage, targetThresholds) && *totalPods > 0 && *totalCPU > 0 && *totalMem > 0 {
onePodPercentage := api.Percentage((float64(1) * 100) / float64(nodeCapacity.Pods().Value()))
for _, pod := range inputPods {
if maxPodsToEvict > 0 && *podsEvicted+1 > maxPodsToEvict {
if !utils.PodToleratesTaints(pod, taintsOfLowNodes) {
klog.V(3).Infof("Skipping eviction for Pod: %#v, doesn't tolerate node taint", pod.Name)
continue
}
cUsage := utils.GetResourceRequest(pod, v1.ResourceCPU)
mUsage := utils.GetResourceRequest(pod, v1.ResourceMemory)
success, err := podEvictor.EvictPod(ctx, pod, node, "LowNodeUtilization")
if err != nil {
klog.Errorf("Error evicting pod: (%#v)", err)
break
}
cUsage := helper.GetResourceRequest(pod, v1.ResourceCPU)
mUsage := helper.GetResourceRequest(pod, v1.ResourceMemory)
success, err := evictions.EvictPod(client, pod, evictionPolicyGroupVersion, dryRun)
if !success {
glog.Warningf("Error when evicting pod: %#v (%#v)", pod.Name, err)
} else {
glog.V(3).Infof("Evicted pod: %#v (%#v)", pod.Name, err)
if success {
klog.V(3).Infof("Evicted pod: %#v", pod.Name)
// update remaining pods
*podsEvicted++
nodeUsage[v1.ResourcePods] -= onePodPercentage
*totalPods--
// update remaining cpu
*totalCpu -= float64(cUsage)
*totalCPU -= float64(cUsage)
nodeUsage[v1.ResourceCPU] -= api.Percentage((float64(cUsage) * 100) / float64(nodeCapacity.Cpu().MilliValue()))
// update remaining memory
*totalMem -= float64(mUsage)
nodeUsage[v1.ResourceMemory] -= api.Percentage(float64(mUsage) / float64(nodeCapacity.Memory().Value()) * 100)
glog.V(3).Infof("updated node usage: %#v", nodeUsage)
// check if node utilization drops below target threshold or required capacity (cpu, memory, pods) is moved
if !IsNodeAboveTargetUtilization(nodeUsage, targetThresholds) || (*totalPods <= 0 && *totalCpu <= 0 && *totalMem <= 0) {
klog.V(3).Infof("updated node usage: %#v", nodeUsage)
// check if node utilization drops below target threshold or any required capacity (cpu, memory, pods) is moved
if !isNodeAboveTargetUtilization(nodeUsage, targetThresholds) || *totalPods <= 0 || *totalCPU <= 0 || *totalMem <= 0 {
break
}
}
@@ -251,7 +303,8 @@ func evictPods(inputPods []*v1.Pod,
}
}
func SortNodesByUsage(nodes []NodeUsageMap) {
// sortNodesByUsage sorts nodes based on usage in descending order
func sortNodesByUsage(nodes []NodeUsageMap) {
sort.Slice(nodes, func(i, j int) bool {
var ti, tj api.Percentage
for name, value := range nodes[i].usage {
@@ -269,12 +322,13 @@ func SortNodesByUsage(nodes []NodeUsageMap) {
})
}
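sortNodesByUsage compares nodes on the sum of their usage percentages, so eviction starts from the most loaded node; for example:

// n1 usage {cpu: 80, memory: 70, pods: 60} -> 210
// n2 usage {cpu: 50, memory: 40, pods: 30} -> 120
// descending order: n1 before n2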
func CreateNodePodsMap(client clientset.Interface, nodes []*v1.Node) NodePodsMap {
// createNodePodsMap returns nodepodsmap with evictable pods on node.
func createNodePodsMap(ctx context.Context, client clientset.Interface, nodes []*v1.Node) NodePodsMap {
npm := NodePodsMap{}
for _, node := range nodes {
pods, err := podutil.ListPodsOnANode(client, node)
pods, err := podutil.ListPodsOnANode(ctx, client, node, nil)
if err != nil {
glog.Warningf("node %s will not be processed, error in accessing its pods (%#v)", node.Name, err)
klog.Warningf("node %s will not be processed, error in accessing its pods (%#v)", node.Name, err)
} else {
npm[node] = pods
}
@@ -282,7 +336,8 @@ func CreateNodePodsMap(client clientset.Interface, nodes []*v1.Node) NodePodsMap
return npm
}
func IsNodeAboveTargetUtilization(nodeThresholds api.ResourceThresholds, thresholds api.ResourceThresholds) bool {
// isNodeAboveTargetUtilization checks if a node is overutilized
func isNodeAboveTargetUtilization(nodeThresholds api.ResourceThresholds, thresholds api.ResourceThresholds) bool {
for name, nodeValue := range nodeThresholds {
if name == v1.ResourceCPU || name == v1.ResourceMemory || name == v1.ResourcePods {
if value, ok := thresholds[name]; !ok {
@@ -295,7 +350,8 @@ func IsNodeAboveTargetUtilization(nodeThresholds api.ResourceThresholds, thresho
return false
}
func IsNodeWithLowUtilization(nodeThresholds api.ResourceThresholds, thresholds api.ResourceThresholds) bool {
// isNodeWithLowUtilization checks if a node is underutilized
func isNodeWithLowUtilization(nodeThresholds api.ResourceThresholds, thresholds api.ResourceThresholds) bool {
for name, nodeValue := range nodeThresholds {
if name == v1.ResourceCPU || name == v1.ResourceMemory || name == v1.ResourcePods {
if value, ok := thresholds[name]; !ok {
@@ -308,37 +364,18 @@ func IsNodeWithLowUtilization(nodeThresholds api.ResourceThresholds, thresholds
return true
}
func NodeUtilization(node *v1.Node, pods []*v1.Pod) (api.ResourceThresholds, []*v1.Pod, []*v1.Pod, []*v1.Pod, []*v1.Pod, []*v1.Pod) {
bePods := []*v1.Pod{}
nonRemovablePods := []*v1.Pod{}
bPods := []*v1.Pod{}
gPods := []*v1.Pod{}
totalReqs := map[v1.ResourceName]resource.Quantity{}
func nodeUtilization(node *v1.Node, pods []*v1.Pod) api.ResourceThresholds {
totalReqs := map[v1.ResourceName]*resource.Quantity{
v1.ResourceCPU: {},
v1.ResourceMemory: {},
}
for _, pod := range pods {
// We need to compute the usage of nonRemovablePods unless it is a best effort pod. So, cannot use podutil.ListEvictablePodsOnNode
if !podutil.IsEvictable(pod) {
nonRemovablePods = append(nonRemovablePods, pod)
if podutil.IsBestEffortPod(pod) {
continue
}
} else if podutil.IsBestEffortPod(pod) {
bePods = append(bePods, pod)
continue
} else if podutil.IsBurstablePod(pod) {
bPods = append(bPods, pod)
} else {
gPods = append(gPods, pod)
}
req, _ := helper.PodRequestsAndLimits(pod)
req, _ := utils.PodRequestsAndLimits(pod)
for name, quantity := range req {
if name == v1.ResourceCPU || name == v1.ResourceMemory {
if value, ok := totalReqs[name]; !ok {
totalReqs[name] = *quantity.Copy()
} else {
value.Add(quantity)
totalReqs[name] = value
}
// As Quantity.Add says: Add adds the provided y quantity to the current value. If the current value is zero,
// the format of the quantity will be updated to the format of y.
totalReqs[name].Add(quantity)
}
}
}
@@ -348,12 +385,24 @@ func NodeUtilization(node *v1.Node, pods []*v1.Pod) (api.ResourceThresholds, []*
nodeCapacity = node.Status.Allocatable
}
usage := api.ResourceThresholds{}
totalCPUReq := totalReqs[v1.ResourceCPU]
totalMemReq := totalReqs[v1.ResourceMemory]
totalPods := len(pods)
usage[v1.ResourceCPU] = api.Percentage((float64(totalCPUReq.MilliValue()) * 100) / float64(nodeCapacity.Cpu().MilliValue()))
usage[v1.ResourceMemory] = api.Percentage(float64(totalMemReq.Value()) / float64(nodeCapacity.Memory().Value()) * 100)
usage[v1.ResourcePods] = api.Percentage((float64(totalPods) * 100) / float64(nodeCapacity.Pods().Value()))
return usage, pods, nonRemovablePods, bePods, bPods, gPods
return api.ResourceThresholds{
v1.ResourceCPU: api.Percentage((float64(totalReqs[v1.ResourceCPU].MilliValue()) * 100) / float64(nodeCapacity.Cpu().MilliValue())),
v1.ResourceMemory: api.Percentage(float64(totalReqs[v1.ResourceMemory].Value()) / float64(nodeCapacity.Memory().Value()) * 100),
v1.ResourcePods: api.Percentage((float64(totalPods) * 100) / float64(nodeCapacity.Pods().Value())),
}
}
func classifyPods(pods []*v1.Pod, evictor *evictions.PodEvictor) ([]*v1.Pod, []*v1.Pod) {
var nonRemovablePods, removablePods []*v1.Pod
for _, pod := range pods {
if !evictor.IsEvictable(pod) {
nonRemovablePods = append(nonRemovablePods, pod)
} else {
removablePods = append(removablePods, pod)
}
}
return nonRemovablePods, removablePods
}
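nodeUtilization reduces a node to request-based percentages; a worked example with assumed numbers (five pods, each requesting 400m CPU and 300 units of memory, on a node with 4000m CPU, 3000 memory, and 10 pod slots):

cpuPct := float64(5*400) * 100 / 4000 // 50.0
memPct := float64(5*300) * 100 / 3000 // 50.0
podsPct := float64(5) * 100 / 10      // 50.0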

View File

@@ -17,130 +17,575 @@ limitations under the License.
package strategies
import (
"context"
"fmt"
"strings"
"testing"
"github.com/kubernetes-incubator/descheduler/pkg/api"
"github.com/kubernetes-incubator/descheduler/test"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/api/policy/v1beta1"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
"sigs.k8s.io/descheduler/pkg/utils"
"sigs.k8s.io/descheduler/test"
)
var (
lowPriority = int32(0)
highPriority = int32(10000)
)
// TODO: Make this table driven.
func TestLowNodeUtilization(t *testing.T) {
var thresholds = make(api.ResourceThresholds)
var targetThresholds = make(api.ResourceThresholds)
thresholds[v1.ResourceCPU] = 30
thresholds[v1.ResourcePods] = 30
targetThresholds[v1.ResourceCPU] = 50
targetThresholds[v1.ResourcePods] = 50
ctx := context.Background()
n1NodeName := "n1"
n2NodeName := "n2"
n3NodeName := "n3"
n1 := test.BuildTestNode("n1", 4000, 3000, 9)
n2 := test.BuildTestNode("n2", 4000, 3000, 10)
n3 := test.BuildTestNode("n3", 4000, 3000, 10)
// Making node n3 unschedulable so that it won't be counted in the lowUtilized nodes list.
n3.Spec.Unschedulable = true
p1 := test.BuildTestPod("p1", 400, 0, n1.Name)
p2 := test.BuildTestPod("p2", 400, 0, n1.Name)
p3 := test.BuildTestPod("p3", 400, 0, n1.Name)
p4 := test.BuildTestPod("p4", 400, 0, n1.Name)
p5 := test.BuildTestPod("p5", 400, 0, n1.Name)
// These won't be evicted.
p6 := test.BuildTestPod("p6", 400, 0, n1.Name)
p7 := test.BuildTestPod("p7", 400, 0, n1.Name)
p8 := test.BuildTestPod("p8", 400, 0, n1.Name)
p1.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p2.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p3.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p4.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
p5.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
// The following 4 pods won't get evicted.
// A daemonset.
p6.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
// A pod with local storage.
p7.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p7.Spec.Volumes = []v1.Volume{
testCases := []struct {
name string
thresholds, targetThresholds api.ResourceThresholds
nodes map[string]*v1.Node
pods map[string]*v1.PodList
maxPodsToEvictPerNode int
expectedPodsEvicted int
evictedPods []string
}{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
name: "no evictable pods",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 30,
v1.ResourcePods: 30,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 50,
v1.ResourcePods: 50,
},
nodes: map[string]*v1.Node{
n1NodeName: test.BuildTestNode(n1NodeName, 4000, 3000, 9, nil),
n2NodeName: test.BuildTestNode(n2NodeName, 4000, 3000, 10, nil),
n3NodeName: test.BuildTestNode(n3NodeName, 4000, 3000, 10, test.SetNodeUnschedulable),
},
pods: map[string]*v1.PodList{
n1NodeName: {
Items: []v1.Pod{
// These won't be evicted.
*test.BuildTestPod("p1", 400, 0, n1NodeName, test.SetDSOwnerRef),
*test.BuildTestPod("p2", 400, 0, n1NodeName, test.SetDSOwnerRef),
*test.BuildTestPod("p3", 400, 0, n1NodeName, test.SetDSOwnerRef),
*test.BuildTestPod("p4", 400, 0, n1NodeName, test.SetDSOwnerRef),
*test.BuildTestPod("p5", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A pod with local storage.
test.SetNormalOwnerRef(pod)
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
pod.Annotations = test.GetMirrorPodAnnotation()
}),
*test.BuildTestPod("p6", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A Critical Pod.
pod.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
}),
},
},
n2NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p9", 400, 0, n1NodeName, test.SetRSOwnerRef),
},
},
n3NodeName: {},
},
maxPodsToEvictPerNode: 0,
expectedPodsEvicted: 0,
},
{
name: "without priorities",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 30,
v1.ResourcePods: 30,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 50,
v1.ResourcePods: 50,
},
nodes: map[string]*v1.Node{
n1NodeName: test.BuildTestNode(n1NodeName, 4000, 3000, 9, nil),
n2NodeName: test.BuildTestNode(n2NodeName, 4000, 3000, 10, nil),
n3NodeName: test.BuildTestNode(n3NodeName, 4000, 3000, 10, test.SetNodeUnschedulable),
},
pods: map[string]*v1.PodList{
n1NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p1", 400, 0, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p2", 400, 0, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p3", 400, 0, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p4", 400, 0, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p5", 400, 0, n1NodeName, test.SetRSOwnerRef),
// These won't be evicted.
*test.BuildTestPod("p6", 400, 0, n1NodeName, test.SetDSOwnerRef),
*test.BuildTestPod("p7", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A pod with local storage.
test.SetNormalOwnerRef(pod)
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
pod.Annotations = test.GetMirrorPodAnnotation()
}),
*test.BuildTestPod("p8", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A Critical Pod.
pod.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
}),
},
},
n2NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p9", 400, 0, n1NodeName, test.SetRSOwnerRef),
},
},
n3NodeName: {},
},
maxPodsToEvictPerNode: 0,
expectedPodsEvicted: 4,
},
{
name: "without priorities stop when cpu capacity is depleted",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 30,
v1.ResourcePods: 30,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 50,
v1.ResourcePods: 50,
},
nodes: map[string]*v1.Node{
n1NodeName: test.BuildTestNode(n1NodeName, 4000, 3000, 9, nil),
n2NodeName: test.BuildTestNode(n2NodeName, 4000, 3000, 10, nil),
n3NodeName: test.BuildTestNode(n3NodeName, 4000, 3000, 10, test.SetNodeUnschedulable),
},
pods: map[string]*v1.PodList{
n1NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p1", 400, 300, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p2", 400, 300, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p3", 400, 300, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p4", 400, 300, n1NodeName, test.SetRSOwnerRef),
*test.BuildTestPod("p5", 400, 300, n1NodeName, test.SetRSOwnerRef),
// These won't be evicted.
*test.BuildTestPod("p6", 400, 300, n1NodeName, test.SetDSOwnerRef),
*test.BuildTestPod("p7", 400, 300, n1NodeName, func(pod *v1.Pod) {
// A pod with local storage.
test.SetNormalOwnerRef(pod)
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
pod.Annotations = test.GetMirrorPodAnnotation()
}),
*test.BuildTestPod("p8", 400, 300, n1NodeName, func(pod *v1.Pod) {
// A Critical Pod.
pod.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
}),
},
},
n2NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p9", 400, 2100, n1NodeName, test.SetRSOwnerRef),
},
},
n3NodeName: {},
},
maxPodsToEvictPerNode: 0,
// 4 pods available for eviction based on v1.ResourcePods, only 3 pods can be evicted before cpu is depleted
expectedPodsEvicted: 3,
},
{
name: "with priorities",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 30,
v1.ResourcePods: 30,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 50,
v1.ResourcePods: 50,
},
nodes: map[string]*v1.Node{
n1NodeName: test.BuildTestNode(n1NodeName, 4000, 3000, 9, nil),
n2NodeName: test.BuildTestNode(n2NodeName, 4000, 3000, 10, nil),
n3NodeName: test.BuildTestNode(n3NodeName, 4000, 3000, 10, test.SetNodeUnschedulable),
},
pods: map[string]*v1.PodList{
n1NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p1", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.SetPodPriority(pod, highPriority)
}),
*test.BuildTestPod("p2", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.SetPodPriority(pod, highPriority)
}),
*test.BuildTestPod("p3", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.SetPodPriority(pod, highPriority)
}),
*test.BuildTestPod("p4", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.SetPodPriority(pod, highPriority)
}),
*test.BuildTestPod("p5", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.SetPodPriority(pod, lowPriority)
}),
// These won't be evicted.
*test.BuildTestPod("p6", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetDSOwnerRef(pod)
test.SetPodPriority(pod, highPriority)
}),
*test.BuildTestPod("p7", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A pod with local storage.
test.SetNormalOwnerRef(pod)
test.SetPodPriority(pod, lowPriority)
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
pod.Annotations = test.GetMirrorPodAnnotation()
}),
*test.BuildTestPod("p8", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A Critical Pod.
pod.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
}),
},
},
n2NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p9", 400, 0, n1NodeName, test.SetRSOwnerRef),
},
},
n3NodeName: {},
},
maxPodsToEvictPerNode: 0,
expectedPodsEvicted: 4,
},
{
name: "without priorities evicting best-effort pods only",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 30,
v1.ResourcePods: 30,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 50,
v1.ResourcePods: 50,
},
nodes: map[string]*v1.Node{
n1NodeName: test.BuildTestNode(n1NodeName, 4000, 3000, 9, nil),
n2NodeName: test.BuildTestNode(n2NodeName, 4000, 3000, 10, nil),
n3NodeName: test.BuildTestNode(n3NodeName, 4000, 3000, 10, test.SetNodeUnschedulable),
},
// All pods are assumed to be burstable (test.BuildTestPod always sets both cpu/memory resource requests to some value)
pods: map[string]*v1.PodList{
n1NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p1", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.MakeBestEffortPod(pod)
}),
*test.BuildTestPod("p2", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.MakeBestEffortPod(pod)
}),
*test.BuildTestPod("p3", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
}),
*test.BuildTestPod("p4", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.MakeBestEffortPod(pod)
}),
*test.BuildTestPod("p5", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetRSOwnerRef(pod)
test.MakeBestEffortPod(pod)
}),
// These won't be evicted.
*test.BuildTestPod("p6", 400, 0, n1NodeName, func(pod *v1.Pod) {
test.SetDSOwnerRef(pod)
}),
*test.BuildTestPod("p7", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A pod with local storage.
test.SetNormalOwnerRef(pod)
pod.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
pod.Annotations = test.GetMirrorPodAnnotation()
}),
*test.BuildTestPod("p8", 400, 0, n1NodeName, func(pod *v1.Pod) {
// A Critical Pod.
pod.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
pod.Spec.Priority = &priority
}),
},
},
n2NodeName: {
Items: []v1.Pod{
*test.BuildTestPod("p9", 400, 0, n1NodeName, test.SetRSOwnerRef),
},
},
n3NodeName: {},
},
maxPodsToEvictPerNode: 0,
expectedPodsEvicted: 4,
evictedPods: []string{"p1", "p2", "p4", "p5"},
},
}
// A Mirror Pod.
p7.Annotations = test.GetMirrorPodAnnotation()
// A Critical Pod.
p8.Namespace = "kube-system"
p8.Annotations = test.GetCriticalPodAnnotation()
p9 := test.BuildTestPod("p9", 400, 0, n1.Name)
p9.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
list := action.(core.ListAction)
fieldString := list.GetListRestrictions().Fields.String()
if strings.Contains(fieldString, "n1") {
return true, &v1.PodList{Items: []v1.Pod{*p1, *p2, *p3, *p4, *p5, *p6, *p7, *p8}}, nil
}
if strings.Contains(fieldString, "n2") {
return true, &v1.PodList{Items: []v1.Pod{*p9}}, nil
}
if strings.Contains(fieldString, "n3") {
return true, &v1.PodList{Items: []v1.Pod{}}, nil
}
return true, nil, fmt.Errorf("Failed to list: %v", list)
})
fakeClient.Fake.AddReactor("get", "nodes", func(action core.Action) (bool, runtime.Object, error) {
getAction := action.(core.GetAction)
switch getAction.GetName() {
case n1.Name:
return true, n1, nil
case n2.Name:
return true, n2, nil
case n3.Name:
return true, n3, nil
}
return true, nil, fmt.Errorf("Wrong node: %v", getAction.GetName())
})
expectedPodsEvicted := 3
npm := CreateNodePodsMap(fakeClient, []*v1.Node{n1, n2, n3})
lowNodes, targetNodes := classifyNodes(npm, thresholds, targetThresholds)
if len(lowNodes) != 1 {
t.Errorf("After ignoring unschedulable nodes, expected only one node to be under utilized.")
for _, test := range testCases {
t.Run(test.name, func(t *testing.T) {
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
list := action.(core.ListAction)
fieldString := list.GetListRestrictions().Fields.String()
if strings.Contains(fieldString, n1NodeName) {
return true, test.pods[n1NodeName], nil
}
if strings.Contains(fieldString, n2NodeName) {
return true, test.pods[n2NodeName], nil
}
if strings.Contains(fieldString, n3NodeName) {
return true, test.pods[n3NodeName], nil
}
return true, nil, fmt.Errorf("Failed to list: %v", list)
})
fakeClient.Fake.AddReactor("get", "nodes", func(action core.Action) (bool, runtime.Object, error) {
getAction := action.(core.GetAction)
if node, exists := test.nodes[getAction.GetName()]; exists {
return true, node, nil
}
return true, nil, fmt.Errorf("Wrong node: %v", getAction.GetName())
})
podsForEviction := make(map[string]struct{})
for _, pod := range test.evictedPods {
podsForEviction[pod] = struct{}{}
}
evictionFailed := false
if len(test.evictedPods) > 0 {
fakeClient.Fake.AddReactor("create", "pods", func(action core.Action) (bool, runtime.Object, error) {
getAction := action.(core.CreateAction)
obj := getAction.GetObject()
if eviction, ok := obj.(*v1beta1.Eviction); ok {
if _, exists := podsForEviction[eviction.Name]; exists {
return true, obj, nil
}
evictionFailed = true
return true, nil, fmt.Errorf("pod %q was unexpectedly evicted", eviction.Name)
}
return true, obj, nil
})
}
var nodes []*v1.Node
for _, node := range test.nodes {
nodes = append(nodes, node)
}
podEvictor := evictions.NewPodEvictor(
fakeClient,
"v1",
false,
test.maxPodsToEvictPerNode,
nodes,
false,
)
strategy := api.DeschedulerStrategy{
Enabled: true,
Params: &api.StrategyParameters{
NodeResourceUtilizationThresholds: &api.NodeResourceUtilizationThresholds{
Thresholds: test.thresholds,
TargetThresholds: test.targetThresholds,
},
},
}
LowNodeUtilization(ctx, fakeClient, strategy, nodes, podEvictor)
podsEvicted := podEvictor.TotalEvicted()
if test.expectedPodsEvicted != podsEvicted {
t.Errorf("Expected %#v pods to be evicted but %#v got evicted", test.expectedPodsEvicted, podsEvicted)
}
if evictionFailed {
t.Errorf("Pod evictions failed unexpectedly")
}
})
}
npe := nodePodEvictedCount{}
npe[n1] = 0
npe[n2] = 0
npe[n3] = 0
podsEvicted := evictPodsFromTargetNodes(fakeClient, "v1", targetNodes, lowNodes, targetThresholds, false, 3, npe)
if expectedPodsEvicted != podsEvicted {
t.Errorf("Expected %#v pods to be evicted but %#v got evicted", expectedPodsEvicted, podsEvicted)
}
func TestValidateStrategyConfig(t *testing.T) {
tests := []struct {
name string
thresholds api.ResourceThresholds
targetThresholds api.ResourceThresholds
errInfo error
}{
{
name: "passing invalid thresholds",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 20,
v1.ResourceMemory: 120,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 80,
v1.ResourceMemory: 80,
},
errInfo: fmt.Errorf("thresholds config is not valid: %v", fmt.Errorf(
"%v threshold not in [%v, %v] range", v1.ResourceMemory, MinResourcePercentage, MaxResourcePercentage)),
},
{
name: "passing invalid targetThresholds",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 20,
v1.ResourceMemory: 20,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 80,
"resourceInvalid": 80,
},
errInfo: fmt.Errorf("targetThresholds config is not valid: %v",
fmt.Errorf("only cpu, memory, or pods thresholds can be specified")),
},
{
name: "thresholds and targetThresholds configured different num of resources",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 20,
v1.ResourceMemory: 20,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 80,
v1.ResourceMemory: 80,
v1.ResourcePods: 80,
},
errInfo: fmt.Errorf("thresholds and targetThresholds configured different resources"),
},
{
name: "thresholds and targetThresholds configured different resources",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 20,
v1.ResourceMemory: 20,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 80,
v1.ResourcePods: 80,
},
errInfo: fmt.Errorf("thresholds and targetThresholds configured different resources"),
},
{
name: "thresholds' CPU config value is greater than targetThresholds'",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 90,
v1.ResourceMemory: 20,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 80,
v1.ResourceMemory: 80,
},
errInfo: fmt.Errorf("thresholds' %v percentage is greater than targetThresholds'", v1.ResourceCPU),
},
{
name: "passing valid strategy config",
thresholds: api.ResourceThresholds{
v1.ResourceCPU: 20,
v1.ResourceMemory: 20,
},
targetThresholds: api.ResourceThresholds{
v1.ResourceCPU: 80,
v1.ResourceMemory: 80,
},
errInfo: nil,
},
}
for _, testCase := range tests {
validateErr := validateStrategyConfig(testCase.thresholds, testCase.targetThresholds)
if validateErr == nil || testCase.errInfo == nil {
if validateErr != testCase.errInfo {
t.Errorf("expected validity of strategy config: thresholds %#v targetThresholds %#v\nto be %v but got %v instead",
testCase.thresholds, testCase.targetThresholds, testCase.errInfo, validateErr)
}
} else if validateErr.Error() != testCase.errInfo.Error() {
t.Errorf("expected validity of strategy config: thresholds %#v targetThresholds %#v\nto be %v but got %v instead",
testCase.thresholds, testCase.targetThresholds, testCase.errInfo, validateErr)
}
}
}
func TestValidateThresholds(t *testing.T) {
tests := []struct {
name string
input api.ResourceThresholds
succeed bool
errInfo error
}{
{
name: "passing nil map for threshold",
input: nil,
succeed: false,
errInfo: fmt.Errorf("no resource threshold is configured"),
},
{
name: "passing no threshold",
input: api.ResourceThresholds{},
succeed: false,
errInfo: fmt.Errorf("no resource threshold is configured"),
},
{
name: "passing unsupported resource name",
@@ -148,7 +593,7 @@ func TestValidateThresholds(t *testing.T) {
v1.ResourceCPU: 40,
v1.ResourceStorage: 25.5,
},
succeed: false,
errInfo: fmt.Errorf("only cpu, memory, or pods thresholds can be specified"),
},
{
name: "passing invalid resource name",
@@ -156,7 +601,30 @@ func TestValidateThresholds(t *testing.T) {
v1.ResourceCPU: 40,
"coolResource": 42.0,
},
succeed: false,
errInfo: fmt.Errorf("only cpu, memory, or pods thresholds can be specified"),
},
{
name: "passing invalid resource value",
input: api.ResourceThresholds{
v1.ResourceCPU: 110,
v1.ResourceMemory: 80,
},
errInfo: fmt.Errorf("%v threshold not in [%v, %v] range", v1.ResourceCPU, MinResourcePercentage, MaxResourcePercentage),
},
{
name: "passing a valid threshold with max and min resource value",
input: api.ResourceThresholds{
v1.ResourceCPU: 100,
v1.ResourceMemory: 0,
},
errInfo: nil,
},
{
name: "passing a valid threshold with only cpu",
input: api.ResourceThresholds{
v1.ResourceCPU: 80,
},
errInfo: nil,
},
{
name: "passing a valid threshold with cpu, memory and pods",
@@ -165,15 +633,200 @@ func TestValidateThresholds(t *testing.T) {
v1.ResourceMemory: 30,
v1.ResourcePods: 40,
},
succeed: true,
errInfo: nil,
},
}
for _, test := range tests {
isValid := validateThresholds(test.input)
validateErr := validateThresholds(test.input)
if isValid != test.succeed {
t.Errorf("expected validity of threshold: %#v\nto be %v but got %v instead", test.input, test.succeed, isValid)
if validateErr == nil || test.errInfo == nil {
if validateErr != test.errInfo {
t.Errorf("expected validity of threshold: %#v\nto be %v but got %v instead", test.input, test.errInfo, validateErr)
}
} else if validateErr.Error() != test.errInfo.Error() {
t.Errorf("expected validity of threshold: %#v\nto be %v but got %v instead", test.input, test.errInfo, validateErr)
}
}
}
func newFake(objects ...runtime.Object) *core.Fake {
scheme := runtime.NewScheme()
codecs := serializer.NewCodecFactory(scheme)
fake.AddToScheme(scheme)
o := core.NewObjectTracker(scheme, codecs.UniversalDecoder())
for _, obj := range objects {
if err := o.Add(obj); err != nil {
panic(err)
}
}
fakePtr := core.Fake{}
fakePtr.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
objs, err := o.List(
schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"},
schema.GroupVersionKind{Group: "", Version: "v1", Kind: "Pod"},
action.GetNamespace(),
)
if err != nil {
return true, nil, err
}
obj := &v1.PodList{
Items: []v1.Pod{},
}
for _, pod := range objs.(*v1.PodList).Items {
podFieldSet := fields.Set(map[string]string{
"spec.nodeName": pod.Spec.NodeName,
"status.phase": string(pod.Status.Phase),
})
match := action.(core.ListAction).GetListRestrictions().Fields.Matches(podFieldSet)
if !match {
continue
}
obj.Items = append(obj.Items, *pod.DeepCopy())
}
return true, obj, nil
})
fakePtr.AddReactor("*", "*", core.ObjectReaction(o))
fakePtr.AddWatchReactor("*", core.DefaultWatchReactor(watch.NewFake(), nil))
return &fakePtr
}
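The list reactor above filters by field selector much as the API server would for ListPodsOnANode; a minimal sketch of the matching, assuming the standard apimachinery fields package:

sel, _ := fields.ParseSelector("spec.nodeName=n1,status.phase=Running")
podFields := fields.Set{"spec.nodeName": "n1", "status.phase": "Running"}
matched := sel.Matches(podFields) // true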
func TestWithTaints(t *testing.T) {
ctx := context.Background()
strategy := api.DeschedulerStrategy{
Enabled: true,
Params: &api.StrategyParameters{
NodeResourceUtilizationThresholds: &api.NodeResourceUtilizationThresholds{
Thresholds: api.ResourceThresholds{
v1.ResourcePods: 20,
},
TargetThresholds: api.ResourceThresholds{
v1.ResourcePods: 70,
},
},
},
}
n1 := test.BuildTestNode("n1", 2000, 3000, 10, nil)
n2 := test.BuildTestNode("n2", 1000, 3000, 10, nil)
n3 := test.BuildTestNode("n3", 1000, 3000, 10, nil)
n3withTaints := n3.DeepCopy()
n3withTaints.Spec.Taints = []v1.Taint{
{
Key: "key",
Value: "value",
Effect: v1.TaintEffectNoSchedule,
},
}
podThatToleratesTaint := test.BuildTestPod("tolerate_pod", 200, 0, n1.Name, test.SetRSOwnerRef)
podThatToleratesTaint.Spec.Tolerations = []v1.Toleration{
{
Key: "key",
Value: "value",
},
}
tests := []struct {
name string
nodes []*v1.Node
pods []*v1.Pod
evictionsExpected int
}{
{
name: "No taints",
nodes: []*v1.Node{n1, n2, n3},
pods: []*v1.Pod{
//Node 1 pods
test.BuildTestPod(fmt.Sprintf("pod_1_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_2_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_3_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_4_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_5_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_6_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_7_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_8_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
// Node 2 pods
test.BuildTestPod(fmt.Sprintf("pod_9_%s", n2.Name), 200, 0, n2.Name, test.SetRSOwnerRef),
},
evictionsExpected: 1,
},
{
name: "No pod tolerates node taint",
nodes: []*v1.Node{n1, n3withTaints},
pods: []*v1.Pod{
//Node 1 pods
test.BuildTestPod(fmt.Sprintf("pod_1_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_2_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_3_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_4_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_5_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_6_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_7_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_8_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
// Node 3 pods
test.BuildTestPod(fmt.Sprintf("pod_9_%s", n3withTaints.Name), 200, 0, n3withTaints.Name, test.SetRSOwnerRef),
},
evictionsExpected: 0,
},
{
name: "Pod which tolerates node taint",
nodes: []*v1.Node{n1, n3withTaints},
pods: []*v1.Pod{
//Node 1 pods
test.BuildTestPod(fmt.Sprintf("pod_1_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_2_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_3_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_4_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_5_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_6_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
test.BuildTestPod(fmt.Sprintf("pod_7_%s", n1.Name), 200, 0, n1.Name, test.SetRSOwnerRef),
podThatToleratesTaint,
// Node 3 pods
test.BuildTestPod(fmt.Sprintf("pod_9_%s", n3withTaints.Name), 200, 0, n3withTaints.Name, test.SetRSOwnerRef),
},
evictionsExpected: 1,
},
}
for _, item := range tests {
t.Run(item.name, func(t *testing.T) {
var objs []runtime.Object
for _, node := range item.nodes {
objs = append(objs, node)
}
for _, pod := range item.pods {
objs = append(objs, pod)
}
fakePtr := newFake(objs...)
var evictionCounter int
fakePtr.PrependReactor("create", "pods", func(action core.Action) (bool, runtime.Object, error) {
if action.GetSubresource() != "eviction" || action.GetResource().Resource != "pods" {
return false, nil, nil
}
evictionCounter++
return true, nil, nil
})
podEvictor := evictions.NewPodEvictor(
&fake.Clientset{Fake: *fakePtr},
"policy/v1",
false,
item.evictionsExpected,
item.nodes,
false,
)
LowNodeUtilization(ctx, &fake.Clientset{Fake: *fakePtr}, strategy, item.nodes, podEvictor)
if item.evictionsExpected != evictionCounter {
t.Errorf("Expected %v evictions, got %v", item.evictionsExpected, evictionCounter)
}
})
}
}
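podThatToleratesTaint matches the NoSchedule taint on n3withTaints because its toleration's key and value agree with the taint's; a simplified sketch of the default Equal-operator rule (the real matching also handles the Exists operator, empty keys, and per-effect semantics):

func tolerates(tol v1.Toleration, taint v1.Taint) bool {
    // An empty operator defaults to Equal; an empty effect matches any effect.
    return tol.Key == taint.Key &&
        tol.Value == taint.Value &&
        (tol.Effect == "" || tol.Effect == taint.Effect)
}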

View File

@@ -17,58 +17,54 @@ limitations under the License.
package strategies
import (
- "github.com/golang/glog"
- "github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
- "github.com/kubernetes-incubator/descheduler/pkg/api"
- "github.com/kubernetes-incubator/descheduler/pkg/descheduler/evictions"
- nodeutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/node"
- podutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/pod"
- "k8s.io/api/core/v1"
+ "context"
+ v1 "k8s.io/api/core/v1"
+ clientset "k8s.io/client-go/kubernetes"
+ "k8s.io/klog/v2"
+ "sigs.k8s.io/descheduler/pkg/api"
+ "sigs.k8s.io/descheduler/pkg/descheduler/evictions"
+ nodeutil "sigs.k8s.io/descheduler/pkg/descheduler/node"
+ podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
)
- func RemovePodsViolatingNodeAffinity(ds *options.DeschedulerServer, strategy api.DeschedulerStrategy, evictionPolicyGroupVersion string, nodes []*v1.Node, nodePodCount nodePodEvictedCount) {
- removePodsViolatingNodeAffinityCount(ds, strategy, evictionPolicyGroupVersion, nodes, nodePodCount, ds.MaxNoOfPodsToEvictPerNode)
- }
- func removePodsViolatingNodeAffinityCount(ds *options.DeschedulerServer, strategy api.DeschedulerStrategy, evictionPolicyGroupVersion string, nodes []*v1.Node, nodepodCount nodePodEvictedCount, maxPodsToEvict int) int {
- evictedPodCount := 0
- if !strategy.Enabled {
- return evictedPodCount
+ // RemovePodsViolatingNodeAffinity evicts pods on nodes which violate node affinity
+ func RemovePodsViolatingNodeAffinity(ctx context.Context, client clientset.Interface, strategy api.DeschedulerStrategy, nodes []*v1.Node, podEvictor *evictions.PodEvictor) {
+ if strategy.Params == nil {
+ klog.V(1).Infof("NodeAffinityType not set")
+ return
}
for _, nodeAffinity := range strategy.Params.NodeAffinityType {
- glog.V(2).Infof("Executing for nodeAffinityType: %v", nodeAffinity)
+ klog.V(2).Infof("Executing for nodeAffinityType: %v", nodeAffinity)
switch nodeAffinity {
case "requiredDuringSchedulingIgnoredDuringExecution":
for _, node := range nodes {
- glog.V(1).Infof("Processing node: %#v\n", node.Name)
+ klog.V(1).Infof("Processing node: %#v\n", node.Name)
- pods, err := podutil.ListEvictablePodsOnNode(ds.Client, node)
+ pods, err := podutil.ListPodsOnANode(ctx, client, node, func(pod *v1.Pod) bool {
+ return podEvictor.IsEvictable(pod) &&
+ !nodeutil.PodFitsCurrentNode(pod, node) &&
+ nodeutil.PodFitsAnyNode(pod, nodes)
+ })
if err != nil {
- glog.Errorf("failed to get pods from %v: %v", node.Name, err)
+ klog.Errorf("failed to get pods from %v: %v", node.Name, err)
}
for _, pod := range pods {
- if maxPodsToEvict > 0 && nodepodCount[node]+1 > maxPodsToEvict {
- break
- }
if pod.Spec.Affinity != nil && pod.Spec.Affinity.NodeAffinity != nil && pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution != nil {
- if !nodeutil.PodFitsCurrentNode(pod, node) && nodeutil.PodFitsAnyNode(pod, nodes) {
- glog.V(1).Infof("Evicting pod: %v", pod.Name)
- evictions.EvictPod(ds.Client, pod, evictionPolicyGroupVersion, ds.DryRun)
- nodepodCount[node]++
+ klog.V(1).Infof("Evicting pod: %v", pod.Name)
+ if _, err := podEvictor.EvictPod(ctx, pod, node, "NodeAffinity"); err != nil {
+ klog.Errorf("Error evicting pod: (%#v)", err)
+ break
+ }
}
}
- evictedPodCount += nodepodCount[node]
}
default:
- glog.Errorf("invalid nodeAffinityType: %v", nodeAffinity)
- return evictedPodCount
+ klog.Errorf("invalid nodeAffinityType: %v", nodeAffinity)
}
}
- glog.V(1).Infof("Evicted %v pods", evictedPodCount)
- return evictedPodCount
+ klog.V(1).Infof("Evicted %v pods", podEvictor.TotalEvicted())
}
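With this refactor, a caller builds one PodEvictor and threads it through each strategy instead of passing a DeschedulerServer and counters around. A sketch of such a caller (a hypothetical wrapper; the argument meanings are inferred from the NewPodEvictor calls in the tests below):

func runNodeAffinityStrategy(ctx context.Context, client clientset.Interface, strategy api.DeschedulerStrategy, nodes []*v1.Node) {
	podEvictor := evictions.NewPodEvictor(
		client,
		"policy/v1", // eviction policy group/version
		false,       // dryRun
		0,           // maxPodsToEvictPerNode; 0 disables the per-node cap
		nodes,
		false,       // evictLocalStoragePods
	)
	RemovePodsViolatingNodeAffinity(ctx, client, strategy, nodes, podEvictor)
	klog.V(1).Infof("evicted %v pods in total", podEvictor.TotalEvicted())
}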

pkg/descheduler/strategies/node_affinity_test.go

@@ -17,22 +17,23 @@ limitations under the License.
package strategies
import (
+ "context"
"testing"
- "github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
- "github.com/kubernetes-incubator/descheduler/pkg/api"
- "github.com/kubernetes-incubator/descheduler/test"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
+ "sigs.k8s.io/descheduler/pkg/api"
+ "sigs.k8s.io/descheduler/pkg/descheduler/evictions"
+ "sigs.k8s.io/descheduler/test"
)
func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
+ ctx := context.Background()
requiredDuringSchedulingIgnoredDuringExecutionStrategy := api.DeschedulerStrategy{
Enabled: true,
- Params: api.StrategyParameters{
+ Params: &api.StrategyParameters{
NodeAffinityType: []string{
"requiredDuringSchedulingIgnoredDuringExecution",
},
@@ -41,17 +42,17 @@ func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
nodeLabelKey := "kubernetes.io/desiredNode"
nodeLabelValue := "yes"
- nodeWithLabels := test.BuildTestNode("nodeWithLabels", 2000, 3000, 10)
+ nodeWithLabels := test.BuildTestNode("nodeWithLabels", 2000, 3000, 10, nil)
nodeWithLabels.Labels[nodeLabelKey] = nodeLabelValue
- nodeWithoutLabels := test.BuildTestNode("nodeWithoutLabels", 2000, 3000, 10)
+ nodeWithoutLabels := test.BuildTestNode("nodeWithoutLabels", 2000, 3000, 10, nil)
- unschedulableNodeWithLabels := test.BuildTestNode("unschedulableNodeWithLabels", 2000, 3000, 10)
+ unschedulableNodeWithLabels := test.BuildTestNode("unschedulableNodeWithLabels", 2000, 3000, 10, nil)
nodeWithLabels.Labels[nodeLabelKey] = nodeLabelValue
unschedulableNodeWithLabels.Spec.Unschedulable = true
addPodsToNode := func(node *v1.Node) []v1.Pod {
- podWithNodeAffinity := test.BuildTestPod("podWithNodeAffinity", 100, 0, node.Name)
+ podWithNodeAffinity := test.BuildTestPod("podWithNodeAffinity", 100, 0, node.Name, nil)
podWithNodeAffinity.Spec.Affinity = &v1.Affinity{
NodeAffinity: &v1.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
@@ -72,8 +73,8 @@ func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
},
}
- pod1 := test.BuildTestPod("pod1", 100, 0, node.Name)
- pod2 := test.BuildTestPod("pod2", 100, 0, node.Name)
+ pod1 := test.BuildTestPod("pod1", 100, 0, node.Name, nil)
+ pod2 := test.BuildTestPod("pod2", 100, 0, node.Name, nil)
podWithNodeAffinity.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
pod1.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
@@ -92,49 +93,30 @@ func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
pods []v1.Pod
strategy api.DeschedulerStrategy
expectedEvictedPodCount int
- npe nodePodEvictedCount
- maxPodsToEvict int
+ maxPodsToEvictPerNode int
}{
- {
- description: "Strategy disabled, should not evict any pods",
- strategy: api.DeschedulerStrategy{
- Enabled: false,
- Params: api.StrategyParameters{
- NodeAffinityType: []string{
- "requiredDuringSchedulingIgnoredDuringExecution",
- },
- },
- },
- expectedEvictedPodCount: 0,
- pods: addPodsToNode(nodeWithoutLabels),
- nodes: []*v1.Node{nodeWithoutLabels, nodeWithLabels},
- npe: nodePodEvictedCount{nodeWithoutLabels: 0, nodeWithLabels: 0},
- maxPodsToEvict: 0,
- },
{
description: "Invalid strategy type, should not evict any pods",
strategy: api.DeschedulerStrategy{
Enabled: true,
- Params: api.StrategyParameters{
+ Params: &api.StrategyParameters{
NodeAffinityType: []string{
"requiredDuringSchedulingRequiredDuringExecution",
},
},
},
expectedEvictedPodCount: 0,
- pods: addPodsToNode(nodeWithoutLabels),
- nodes: []*v1.Node{nodeWithoutLabels, nodeWithLabels},
- npe: nodePodEvictedCount{nodeWithoutLabels: 0, nodeWithLabels: 0},
- maxPodsToEvict: 0,
+ pods: addPodsToNode(nodeWithoutLabels),
+ nodes: []*v1.Node{nodeWithoutLabels, nodeWithLabels},
+ maxPodsToEvictPerNode: 0,
},
{
description: "Pod is correctly scheduled on node, no eviction expected",
strategy: requiredDuringSchedulingIgnoredDuringExecutionStrategy,
expectedEvictedPodCount: 0,
- pods: addPodsToNode(nodeWithLabels),
- nodes: []*v1.Node{nodeWithLabels},
- npe: nodePodEvictedCount{nodeWithLabels: 0},
- maxPodsToEvict: 0,
+ pods: addPodsToNode(nodeWithLabels),
+ nodes: []*v1.Node{nodeWithLabels},
+ maxPodsToEvictPerNode: 0,
},
{
description: "Pod is scheduled on node without matching labels, another schedulable node available, should be evicted",
@@ -142,17 +124,15 @@ func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
strategy: requiredDuringSchedulingIgnoredDuringExecutionStrategy,
pods: addPodsToNode(nodeWithoutLabels),
nodes: []*v1.Node{nodeWithoutLabels, nodeWithLabels},
- npe: nodePodEvictedCount{nodeWithoutLabels: 0, nodeWithLabels: 0},
- maxPodsToEvict: 0,
+ maxPodsToEvictPerNode: 0,
},
{
- description: "Pod is scheduled on node without matching labels, another schedulable node available, maxPodsToEvict set to 1, should not be evicted",
+ description: "Pod is scheduled on node without matching labels, another schedulable node available, maxPodsToEvictPerNode set to 1, should not be evicted",
expectedEvictedPodCount: 1,
strategy: requiredDuringSchedulingIgnoredDuringExecutionStrategy,
pods: addPodsToNode(nodeWithoutLabels),
nodes: []*v1.Node{nodeWithoutLabels, nodeWithLabels},
- npe: nodePodEvictedCount{nodeWithoutLabels: 0, nodeWithLabels: 0},
- maxPodsToEvict: 1,
+ maxPodsToEvictPerNode: 1,
},
{
description: "Pod is scheduled on node without matching labels, but no node where pod fits is available, should not evict",
@@ -160,8 +140,7 @@ func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
strategy: requiredDuringSchedulingIgnoredDuringExecutionStrategy,
pods: addPodsToNode(nodeWithoutLabels),
nodes: []*v1.Node{nodeWithoutLabels, unschedulableNodeWithLabels},
- npe: nodePodEvictedCount{nodeWithoutLabels: 0, unschedulableNodeWithLabels: 0},
- maxPodsToEvict: 0,
+ maxPodsToEvictPerNode: 0,
},
}
@@ -172,11 +151,17 @@ func TestRemovePodsViolatingNodeAffinity(t *testing.T) {
return true, &v1.PodList{Items: tc.pods}, nil
})
- ds := options.DeschedulerServer{
- Client: fakeClient,
- }
+ podEvictor := evictions.NewPodEvictor(
+ fakeClient,
+ "v1",
+ false,
+ tc.maxPodsToEvictPerNode,
+ tc.nodes,
+ false,
+ )
- actualEvictedPodCount := removePodsViolatingNodeAffinityCount(&ds, tc.strategy, "v1", tc.nodes, tc.npe, tc.maxPodsToEvict)
+ RemovePodsViolatingNodeAffinity(ctx, fakeClient, tc.strategy, tc.nodes, podEvictor)
+ actualEvictedPodCount := podEvictor.TotalEvicted()
if actualEvictedPodCount != tc.expectedEvictedPodCount {
t.Errorf("Test %#v failed, expected %v pod evictions, but got %v pod evictions\n", tc.description, tc.expectedEvictedPodCount, actualEvictedPodCount)
}

pkg/descheduler/strategies/node_taint.go

@@ -0,0 +1,56 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package strategies
import (
"context"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
"sigs.k8s.io/descheduler/pkg/utils"
v1 "k8s.io/api/core/v1"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/klog/v2"
)
// RemovePodsViolatingNodeTaints evicts pods that do not tolerate NoSchedule taints on their nodes
func RemovePodsViolatingNodeTaints(ctx context.Context, client clientset.Interface, strategy api.DeschedulerStrategy, nodes []*v1.Node, podEvictor *evictions.PodEvictor) {
for _, node := range nodes {
klog.V(1).Infof("Processing node: %#v\n", node.Name)
pods, err := podutil.ListPodsOnANode(ctx, client, node, podEvictor.IsEvictable)
if err != nil {
// no pods evicted, as an error was encountered retrieving evictable pods
return
}
totalPods := len(pods)
for i := 0; i < totalPods; i++ {
if !utils.TolerationsTolerateTaintsWithFilter(
pods[i].Spec.Tolerations,
node.Spec.Taints,
func(taint *v1.Taint) bool { return taint.Effect == v1.TaintEffectNoSchedule },
) {
klog.V(2).Infof("Not all taints with NoSchedule effect are tolerated after update for pod %v on node %v", pods[i].Name, node.Name)
if _, err := podEvictor.EvictPod(ctx, pods[i], node, "NodeTaint"); err != nil {
klog.Errorf("Error evicting pod: (%#v)", err)
break
}
}
}
}
}
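Only NoSchedule taints matter here because of the filter closure passed above; NoExecute and PreferNoSchedule taints never trigger this strategy. A short illustration of TolerationsTolerateTaintsWithFilter with hypothetical taints and tolerations (the NoExecute taint is screened out by the filter before toleration matching happens):

taints := []v1.Taint{
	{Key: "dedicated", Value: "infra", Effect: v1.TaintEffectNoSchedule},
	{Key: "draining", Effect: v1.TaintEffectNoExecute}, // filtered out, never checked
}
tolerations := []v1.Toleration{
	{Key: "dedicated", Operator: v1.TolerationOpEqual, Value: "infra", Effect: v1.TaintEffectNoSchedule},
}
// true: every taint passing the NoSchedule filter is tolerated, so the pod is kept.
ok := utils.TolerationsTolerateTaintsWithFilter(tolerations, taints,
	func(taint *v1.Taint) bool { return taint.Effect == v1.TaintEffectNoSchedule })
_ = ok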

pkg/descheduler/strategies/node_taint_test.go

@@ -0,0 +1,285 @@
package strategies
import (
"context"
"fmt"
"testing"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
"sigs.k8s.io/descheduler/pkg/api"
"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
"sigs.k8s.io/descheduler/pkg/utils"
"sigs.k8s.io/descheduler/test"
)
// createNoScheduleTaint builds a NoSchedule taint from the given key, value and index.
func createNoScheduleTaint(key, value string, index int) v1.Taint {
return v1.Taint{
Key: key + fmt.Sprintf("%v", index),
Value: value + fmt.Sprintf("%v", index),
Effect: v1.TaintEffectNoSchedule,
}
}
func addTaintsToNode(node *v1.Node, key, value string, indices []int) *v1.Node {
taints := []v1.Taint{}
for _, index := range indices {
taints = append(taints, createNoScheduleTaint(key, value, index))
}
node.Spec.Taints = taints
return node
}
func addTolerationToPod(pod *v1.Pod, key, value string, index int) *v1.Pod {
if pod.Annotations == nil {
pod.Annotations = map[string]string{}
}
pod.Spec.Tolerations = []v1.Toleration{{Key: key + fmt.Sprintf("%v", index), Value: value + fmt.Sprintf("%v", index), Effect: v1.TaintEffectNoSchedule}}
return pod
}
func TestDeletePodsViolatingNodeTaints(t *testing.T) {
ctx := context.Background()
node1 := test.BuildTestNode("n1", 2000, 3000, 10, nil)
node1 = addTaintsToNode(node1, "testTaint", "test", []int{1})
node2 := test.BuildTestNode("n2", 2000, 3000, 10, nil)
node2 = addTaintsToNode(node2, "testingTaint", "testing", []int{1})
p1 := test.BuildTestPod("p1", 100, 0, node1.Name, nil)
p2 := test.BuildTestPod("p2", 100, 0, node1.Name, nil)
p3 := test.BuildTestPod("p3", 100, 0, node1.Name, nil)
p4 := test.BuildTestPod("p4", 100, 0, node1.Name, nil)
p5 := test.BuildTestPod("p5", 100, 0, node1.Name, nil)
p6 := test.BuildTestPod("p6", 100, 0, node1.Name, nil)
p1.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p2.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p3.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p4.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p5.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p6.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p7 := test.BuildTestPod("p7", 100, 0, node2.Name, nil)
p8 := test.BuildTestPod("p8", 100, 0, node2.Name, nil)
p9 := test.BuildTestPod("p9", 100, 0, node2.Name, nil)
p10 := test.BuildTestPod("p10", 100, 0, node2.Name, nil)
p11 := test.BuildTestPod("p11", 100, 0, node2.Name, nil)
p11.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
// The following 4 pods won't get evicted.
// A Critical Pod.
p7.Namespace = "kube-system"
priority := utils.SystemCriticalPriority
p7.Spec.Priority = &priority
// A daemonset.
p8.ObjectMeta.OwnerReferences = test.GetDaemonSetOwnerRefList()
// A pod with local storage.
p9.ObjectMeta.OwnerReferences = test.GetNormalPodOwnerRefList()
p9.Spec.Volumes = []v1.Volume{
{
Name: "sample",
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{Path: "somePath"},
EmptyDir: &v1.EmptyDirVolumeSource{
SizeLimit: resource.NewQuantity(int64(10), resource.BinarySI)},
},
},
}
// A Mirror Pod.
p10.Annotations = test.GetMirrorPodAnnotation()
p1 = addTolerationToPod(p1, "testTaint", "test", 1)
p3 = addTolerationToPod(p3, "testTaint", "test", 1)
p4 = addTolerationToPod(p4, "testTaintX", "testX", 1)
tests := []struct {
description string
nodes []*v1.Node
pods []v1.Pod
evictLocalStoragePods bool
maxPodsToEvictPerNode int
expectedEvictedPodCount int
}{
{
description: "Pods not tolerating node taint should be evicted",
pods: []v1.Pod{*p1, *p2, *p3},
nodes: []*v1.Node{node1},
evictLocalStoragePods: false,
maxPodsToEvictPerNode: 0,
expectedEvictedPodCount: 1, //p2 gets evicted
},
{
description: "Pods with tolerations but not tolerating node taint should be evicted",
pods: []v1.Pod{*p1, *p3, *p4},
nodes: []*v1.Node{node1},
evictLocalStoragePods: false,
maxPodsToEvictPerNode: 0,
expectedEvictedPodCount: 1, //p4 gets evicted
},
{
description: "Only <maxPodsToEvictPerNode> number of Pods not tolerating node taint should be evicted",
pods: []v1.Pod{*p1, *p5, *p6},
nodes: []*v1.Node{node1},
evictLocalStoragePods: false,
maxPodsToEvictPerNode: 1,
expectedEvictedPodCount: 1, //p5 or p6 gets evicted
},
{
description: "Critical pods not tolerating node taint should not be evicted",
pods: []v1.Pod{*p7, *p8, *p9, *p10},
nodes: []*v1.Node{node2},
evictLocalStoragePods: false,
maxPodsToEvictPerNode: 0,
expectedEvictedPodCount: 0,
},
{
description: "Critical pods except storage pods not tolerating node taint should not be evicted",
pods: []v1.Pod{*p7, *p8, *p9, *p10},
nodes: []*v1.Node{node2},
evictLocalStoragePods: true,
maxPodsToEvictPerNode: 0,
expectedEvictedPodCount: 1,
},
{
description: "Critical and non critical pods, only non critical pods not tolerating node taint should be evicted",
pods: []v1.Pod{*p7, *p8, *p10, *p11},
nodes: []*v1.Node{node2},
evictLocalStoragePods: false,
maxPodsToEvictPerNode: 0,
expectedEvictedPodCount: 1,
},
}
for _, tc := range tests {
// create fake client
fakeClient := &fake.Clientset{}
fakeClient.Fake.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
return true, &v1.PodList{Items: tc.pods}, nil
})
podEvictor := evictions.NewPodEvictor(
fakeClient,
"v1",
false,
tc.maxPodsToEvictPerNode,
tc.nodes,
tc.evictLocalStoragePods,
)
RemovePodsViolatingNodeTaints(ctx, fakeClient, api.DeschedulerStrategy{}, tc.nodes, podEvictor)
actualEvictedPodCount := podEvictor.TotalEvicted()
if actualEvictedPodCount != tc.expectedEvictedPodCount {
t.Errorf("Test %#v failed, Unexpected no of pods evicted: pods evicted: %d, expected: %d", tc.description, actualEvictedPodCount, tc.expectedEvictedPodCount)
}
}
}
func TestToleratesTaint(t *testing.T) {
testCases := []struct {
description string
toleration v1.Toleration
taint v1.Taint
expectTolerated bool
}{
{
description: "toleration and taint have the same key and effect, and operator is Exists, and taint has no value, expect tolerated",
toleration: v1.Toleration{
Key: "foo",
Operator: v1.TolerationOpExists,
Effect: v1.TaintEffectNoSchedule,
},
taint: v1.Taint{
Key: "foo",
Effect: v1.TaintEffectNoSchedule,
},
expectTolerated: true,
},
{
description: "toleration and taint have the same key and effect, and operator is Exists, and taint has some value, expect tolerated",
toleration: v1.Toleration{
Key: "foo",
Operator: v1.TolerationOpExists,
Effect: v1.TaintEffectNoSchedule,
},
taint: v1.Taint{
Key: "foo",
Value: "bar",
Effect: v1.TaintEffectNoSchedule,
},
expectTolerated: true,
},
{
description: "toleration and taint have the same effect, toleration has empty key and operator is Exists, means match all taints, expect tolerated",
toleration: v1.Toleration{
Key: "",
Operator: v1.TolerationOpExists,
Effect: v1.TaintEffectNoSchedule,
},
taint: v1.Taint{
Key: "foo",
Value: "bar",
Effect: v1.TaintEffectNoSchedule,
},
expectTolerated: true,
},
{
description: "toleration and taint have the same key, effect and value, and operator is Equal, expect tolerated",
toleration: v1.Toleration{
Key: "foo",
Operator: v1.TolerationOpEqual,
Value: "bar",
Effect: v1.TaintEffectNoSchedule,
},
taint: v1.Taint{
Key: "foo",
Value: "bar",
Effect: v1.TaintEffectNoSchedule,
},
expectTolerated: true,
},
{
description: "toleration and taint have the same key and effect, but different values, and operator is Equal, expect not tolerated",
toleration: v1.Toleration{
Key: "foo",
Operator: v1.TolerationOpEqual,
Value: "value1",
Effect: v1.TaintEffectNoSchedule,
},
taint: v1.Taint{
Key: "foo",
Value: "value2",
Effect: v1.TaintEffectNoSchedule,
},
expectTolerated: false,
},
{
description: "toleration and taint have the same key and value, but different effects, and operator is Equal, expect not tolerated",
toleration: v1.Toleration{
Key: "foo",
Operator: v1.TolerationOpEqual,
Value: "bar",
Effect: v1.TaintEffectNoSchedule,
},
taint: v1.Taint{
Key: "foo",
Value: "bar",
Effect: v1.TaintEffectNoExecute,
},
expectTolerated: false,
},
}
for _, tc := range testCases {
if tolerated := tc.toleration.ToleratesTaint(&tc.taint); tc.expectTolerated != tolerated {
t.Errorf("[%s] expect %v, got %v: toleration %+v, taint %s", tc.description, tc.expectTolerated, tolerated, tc.toleration, tc.taint.ToString())
}
}
}
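As a compact restatement of the rules this table exercises (a hedged paraphrase for orientation, not the upstream source of ToleratesTaint): effects must match unless the toleration's effect is empty, keys must match unless the toleration's key is empty, and the Equal operator additionally compares values while Exists does not.

// toleratesTaint restates the matching semantics covered by the cases above.
func toleratesTaint(t v1.Toleration, taint v1.Taint) bool {
	if t.Effect != "" && t.Effect != taint.Effect {
		return false // effect mismatch
	}
	if t.Key != "" && t.Key != taint.Key {
		return false // key mismatch; an empty toleration key matches any taint key
	}
	switch t.Operator {
	case v1.TolerationOpExists:
		return true
	case v1.TolerationOpEqual, "": // empty operator defaults to Equal
		return t.Value == taint.Value
	}
	return false
}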

pkg/descheduler/strategies/pod_antiaffinity.go

@@ -17,48 +17,38 @@ limitations under the License.
package strategies
import (
- "github.com/kubernetes-incubator/descheduler/cmd/descheduler/app/options"
- "github.com/kubernetes-incubator/descheduler/pkg/api"
- "github.com/kubernetes-incubator/descheduler/pkg/descheduler/evictions"
- podutil "github.com/kubernetes-incubator/descheduler/pkg/descheduler/pod"
- "github.com/golang/glog"
+ "context"
+ "sigs.k8s.io/descheduler/pkg/api"
+ "sigs.k8s.io/descheduler/pkg/descheduler/evictions"
+ podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
+ "sigs.k8s.io/descheduler/pkg/utils"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clientset "k8s.io/client-go/kubernetes"
- priorityutil "k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/priorities/util"
+ "k8s.io/klog/v2"
)
- // RemovePodsViolatingInterPodAntiAffinity with elimination strategy
- func RemovePodsViolatingInterPodAntiAffinity(ds *options.DeschedulerServer, strategy api.DeschedulerStrategy, policyGroupVersion string, nodes []*v1.Node, nodePodCount nodePodEvictedCount) {
- if !strategy.Enabled {
- return
- }
- removePodsWithAffinityRules(ds.Client, policyGroupVersion, nodes, ds.DryRun, nodePodCount, ds.MaxNoOfPodsToEvictPerNode)
- }
- // removePodsWithAffinityRules evicts pods on the node which are having a pod affinity rules.
- func removePodsWithAffinityRules(client clientset.Interface, policyGroupVersion string, nodes []*v1.Node, dryRun bool, nodePodCount nodePodEvictedCount, maxPodsToEvict int) int {
- podsEvicted := 0
+ // RemovePodsViolatingInterPodAntiAffinity evicts pods on the node which are having a pod affinity rules.
+ func RemovePodsViolatingInterPodAntiAffinity(ctx context.Context, client clientset.Interface, strategy api.DeschedulerStrategy, nodes []*v1.Node, podEvictor *evictions.PodEvictor) {
for _, node := range nodes {
- glog.V(1).Infof("Processing node: %#v\n", node.Name)
- pods, err := podutil.ListEvictablePodsOnNode(client, node)
+ klog.V(1).Infof("Processing node: %#v\n", node.Name)
+ pods, err := podutil.ListPodsOnANode(ctx, client, node, podEvictor.IsEvictable)
if err != nil {
- return 0
+ return
}
// sort the evictable Pods based on priority, if there are multiple pods with same priority, they are sorted based on QoS tiers.
podutil.SortPodsBasedOnPriorityLowToHigh(pods)
totalPods := len(pods)
for i := 0; i < totalPods; i++ {
- if maxPodsToEvict > 0 && nodePodCount[node]+1 > maxPodsToEvict {
- break
- }
if checkPodsWithAntiAffinityExist(pods[i], pods) {
- success, err := evictions.EvictPod(client, pods[i], policyGroupVersion, dryRun)
- if !success {
- glog.Infof("Error when evicting pod: %#v (%#v)\n", pods[i].Name, err)
- } else {
- nodePodCount[node]++
- glog.V(1).Infof("Evicted pod: %#v (%#v)\n because of existing anti-affinity", pods[i].Name, err)
+ success, err := podEvictor.EvictPod(ctx, pods[i], node, "InterPodAntiAffinity")
+ if err != nil {
+ klog.Errorf("Error evicting pod: (%#v)", err)
+ break
+ }
+ if success {
// Since the current pod is evicted all other pods which have anti-affinity with this
// pod need not be evicted.
// Update pods.
@@ -68,9 +58,7 @@ func removePodsWithAffinityRules(client clientset.Interface, policyGroupVersion
}
}
}
- podsEvicted += nodePodCount[node]
}
- return podsEvicted
}
// checkPodsWithAntiAffinityExist checks if there are other pods on the node that the current pod cannot tolerate.
@@ -78,14 +66,14 @@ func checkPodsWithAntiAffinityExist(pod *v1.Pod, pods []*v1.Pod) bool {
affinity := pod.Spec.Affinity
if affinity != nil && affinity.PodAntiAffinity != nil {
for _, term := range getPodAntiAffinityTerms(affinity.PodAntiAffinity) {
- namespaces := priorityutil.GetNamespacesFromPodAffinityTerm(pod, &term)
+ namespaces := utils.GetNamespacesFromPodAffinityTerm(pod, &term)
selector, err := metav1.LabelSelectorAsSelector(term.LabelSelector)
if err != nil {
- glog.Infof("%v", err)
+ klog.Infof("%v", err)
return false
}
for _, existingPod := range pods {
- if existingPod.Name != pod.Name && priorityutil.PodMatchesTermsNamespaceAndSelector(existingPod, namespaces, selector) {
+ if existingPod.Name != pod.Name && utils.PodMatchesTermsNamespaceAndSelector(existingPod, namespaces, selector) {
return true
}
}
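To make checkPodsWithAntiAffinityExist concrete: a pod becomes an eviction candidate when another pod on the same node matches one of its required anti-affinity terms. A hypothetical pair that would trip the check (pod names and labels invented for illustration, built with the same test helpers used above):

podA := test.BuildTestPod("podA", 100, 0, "n1", nil)
podA.Spec.Affinity = &v1.Affinity{
	PodAntiAffinity: &v1.PodAntiAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "backend"},
			},
			TopologyKey: "kubernetes.io/hostname",
		}},
	},
}
podB := test.BuildTestPod("podB", 100, 0, "n1", nil)
podB.Labels = map[string]string{"app": "backend"}
// checkPodsWithAntiAffinityExist(podA, []*v1.Pod{podA, podB}) would report true,
// so the strategy would evict one of the pair (lowest priority first, per the sort above).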

Some files were not shown because too many files have changed in this diff.