* Check if a Pod matches the inter-pod anti-affinity of other pods on the node as part of NodeFit()
* Add unit tests for checking inter-pod anti-affinity match in NodeFit()
* Export setPodAntiAffinity() helper func to test utils
* Add docs for inter-pod anti-affinity in README
* Refactor logic for inter-pod anti-affinity to use in multiple pkgs
* Move logic for finding a match between pods with anti-affinity out of the framework so it can be reused in other pkgs
* Move inter-pod anti-affinity funcs to pkg/utils/predicates.go
* Add unit tests for inter-pod anti-affinity check
* Test logic in GroupByNodeName
* Test NodeFit() case where a pod matches inter-pod anti-affinity
* Test that inter-pod anti-affinity pods match terms and have a label selector
* NodeFit() inter-pod anti-affinity check returns early if the affinity spec is not set
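A rough sketch of the anti-affinity check described in the commits above, assuming a hypothetical helper name and simplified matching (the real check in pkg/utils also has to respect namespaces and topology keys):

```go
package utils

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// podMatchesAntiAffinityOfPodsOnNode reports whether the candidate pod's labels
// match a required anti-affinity term of any pod already running on the node.
// Namespace and topology-key handling is omitted for brevity.
func podMatchesAntiAffinityOfPodsOnNode(candidate *v1.Pod, podsOnNode []*v1.Pod) bool {
	for _, p := range podsOnNode {
		if p.Spec.Affinity == nil || p.Spec.Affinity.PodAntiAffinity == nil {
			continue
		}
		for _, term := range p.Spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution {
			selector, err := metav1.LabelSelectorAsSelector(term.LabelSelector)
			if err != nil {
				continue
			}
			if selector.Matches(labels.Set(candidate.Labels)) {
				// Placing the candidate here would violate p's anti-affinity.
				return true
			}
		}
	}
	return false
}
```

In NodeFit(), a true result simply means the pod does not fit on that node.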
* feat: Implement preferredDuringSchedulingIgnoredDuringExecution for RemovePodsViolatingNodeAffinity
Now, the descheduler can detect and evict pods that are not optimally
allocated according to the "preferred..." node affinity. It only evicts
a pod if it can be scheduled on a node that scores higher in terms of
preferred node affinity than the current one.
This can be activated by enabling the RemovePodsViolatingNodeAffinity
plugin and passing "preferredDuringSchedulingIgnoredDuringExecution" in
the args.
For example, imagine a pod that prefers nodes with the label "key1:
value1" with a weight of 10. If this pod is scheduled on a node that
doesn't have the "key1: value1" label, but there is another node that
has this label and where the pod can potentially run, then the
descheduler will evict the pod.
Another effect of this commit is that the
RemovePodsViolatingNodeAffinity plugin no longer removes pods that
don't fit on their current node for reasons other than violating the
node affinity. Previously, enabling this plugin could evict pods that
were running on tainted nodes without the necessary tolerations.
This commit also fixes the wording of some tests in
node_affinity_test.go, as well as some incorrect parameters and
expectations in those tests.
* Optimization on RemovePodsViolatingNodeAffinity
Before checking whether a pod can be evicted or scheduled somewhere
else, we first check that it has the corresponding nodeAffinity field
defined; otherwise, the pod is automatically discarded as a candidate.
In addition, the method that calculates the weight a pod gives to a
node based on its preferred node affinity has been renamed to better
reflect what it does.
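A minimal sketch of the scoring and eviction criterion described in the two commits above; the function names are illustrative, not the plugin's actual API, and only the "In" operator is handled here:

```go
package nodeaffinity

import (
	v1 "k8s.io/api/core/v1"
)

// preferredAffinityScore sums the weights of the pod's preferred node affinity
// terms whose matchExpressions match the node's labels. Only the "In" operator
// is handled in this sketch; the real code supports all operators and matchFields.
func preferredAffinityScore(pod *v1.Pod, node *v1.Node) int32 {
	if pod.Spec.Affinity == nil || pod.Spec.Affinity.NodeAffinity == nil {
		return 0
	}
	var score int32
	for _, term := range pod.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution {
		matched := true
		for _, req := range term.Preference.MatchExpressions {
			if req.Operator != v1.NodeSelectorOpIn {
				matched = false
				break
			}
			val, ok := node.Labels[req.Key]
			if !ok {
				matched = false
				break
			}
			found := false
			for _, allowed := range req.Values {
				if allowed == val {
					found = true
					break
				}
			}
			if !found {
				matched = false
				break
			}
		}
		if matched {
			score += term.Weight
		}
	}
	return score
}

// shouldEvict applies the eviction criterion: evict only when another node the
// pod could run on scores strictly higher than the pod's current node. The
// early return mirrors the optimization of discarding pods that have no
// preferred node affinity at all.
func shouldEvict(pod *v1.Pod, current *v1.Node, candidates []*v1.Node) bool {
	if pod.Spec.Affinity == nil || pod.Spec.Affinity.NodeAffinity == nil ||
		len(pod.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution) == 0 {
		return false
	}
	currentScore := preferredAffinityScore(pod, current)
	for _, n := range candidates {
		// Assumes n has already passed a NodeFit()-style check for this pod.
		if preferredAffinityScore(pod, n) > currentScore {
			return true
		}
	}
	return false
}
```

With the "key1: value1" example above, a candidate node carrying that label scores 10 while the current node scores 0, so the pod is evicted once it also passes the fit check.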
* update helm chart to v0.27.0
* update manifest version and docs
* fix 1.27 release version in README.md
Co-authored-by: Mike Dame <mikedame@google.com>
* v1alpha2 docs
* remove internal toc (gh has this natively)
* fix typo and newlines
* name plugins with less confusing names
* add type column
* fix kv selector and nodeSelector desc
* group plugin types in a table
* link the deprecated doc
* warning signs
* Remove log level from Errors
Every error printed via Errors is expected to be important and always
printable.
* Invoke first Deschedule and then Balance extension points (breaking change)
* Separate plugin arg conversion from pluginsMap
* Separate profile population from plugin execution
* Convert strategy params into profiles outside the main descheduling loop
Strategy params are static and do not change in time.
* Bump the internal DeschedulerPolicy to v1alpha2
Drop conversion from v1alpha1 to internal
* add tests for v1alpha1-to-internal conversion
* add tests for strategyParamsToPluginArgs params wiring
* in v1alpha1 evictableNamespaces are still Namespaces
* add test passing in all params
Co-authored-by: Lucas Severo Alves <lseveroa@redhat.com>
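A hedged sketch of the ordering from the "Invoke first Deschedule and then Balance extension points" commit above; the interfaces and the per-profile loop are illustrative, not the framework's actual types:

```go
package descheduler

import "context"

// DeschedulePlugin and BalancePlugin loosely mirror the two extension points;
// the real framework interfaces differ.
type DeschedulePlugin interface {
	Deschedule(ctx context.Context) error
}

type BalancePlugin interface {
	Balance(ctx context.Context) error
}

type profile struct {
	deschedulePlugins []DeschedulePlugin
	balancePlugins    []BalancePlugin
}

// runProfiles shows the ordering: every Deschedule extension point is invoked
// before any Balance extension point.
func runProfiles(ctx context.Context, profiles []profile) {
	for _, p := range profiles {
		for _, plugin := range p.deschedulePlugins {
			_ = plugin.Deschedule(ctx) // error handling elided in this sketch
		}
	}
	for _, p := range profiles {
		for _, plugin := range p.balancePlugins {
			_ = plugin.Balance(ctx)
		}
	}
}
```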
Both the LowNodeUtilization and HighNodeUtilization strategies evict
only as many pods as there are free resources on other nodes. Thus, the
resource fit test is always true by definition.
Add taint exclusion to RemovePodsViolatingNodeTaints. This lets users
ignore selected node taints by specifying ignored taint keys or ignored
taint key=value pairs.
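A small sketch of the exclusion matching described above, assuming the exclusions arrive as a flat list of strings where each entry is either a bare key or a key=value pair:

```go
package nodetaint

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isExcludedTaint reports whether a taint should be ignored given the
// user-supplied exclusions: a bare "key" excludes every taint with that key,
// while "key=value" excludes only taints with that exact key and value.
func isExcludedTaint(taint v1.Taint, excluded []string) bool {
	for _, e := range excluded {
		if e == taint.Key {
			return true
		}
		if e == fmt.Sprintf("%s=%s", taint.Key, taint.Value) {
			return true
		}
	}
	return false
}
```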
This patch adds a policy (evictFailedBarePods) to allow failed pods
without ownerReferences to be evicted. For backward compatibility, the
policy is disabled by default. Addresses #644.
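A rough sketch of the eligibility test implied by this policy, with the flag wiring simplified to a plain boolean:

```go
package evictions

import v1 "k8s.io/api/core/v1"

// isEvictableFailedBarePod sketches the new rule: a failed pod with no
// ownerReferences becomes evictable only when evictFailedBarePods is enabled;
// by default it stays protected, preserving the previous behavior.
func isEvictableFailedBarePod(pod *v1.Pod, evictFailedBarePods bool) bool {
	isBare := len(pod.OwnerReferences) == 0
	isFailed := pod.Status.Phase == v1.PodFailed
	return evictFailedBarePods && isBare && isFailed
}
```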
calcContainerRestarts sums over containers. The new language makes that
clear, avoiding potential confusion with an alternative reading that
looked for pods where a single container had passed the configured
threshold. For example, with three containers at 50 restarts each and a
threshold of 100, the actual "sum over containers" logic makes that pod
a candidate for descheduling, but the hypothetical "largest single
container restart count" reading would not have made it a candidate (a
short sketch follows below).
Also shifts labelSelector into the parameter table, because when it
was added in 29ade13ce7 (README and e2e-testcase add for
labelSelector, 2021-03-02, #510), it landed a few lines too high.
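The "sum over containers" behavior can be sketched as follows; the function name matches the one mentioned above, but the signature is simplified and may differ from the plugin's:

```go
package toomanyrestarts

import v1 "k8s.io/api/core/v1"

// calcContainerRestarts sums restart counts over all of a pod's containers,
// per the documented behavior.
func calcContainerRestarts(pod *v1.Pod) int32 {
	var restarts int32
	for _, cs := range pod.Status.ContainerStatuses {
		restarts += cs.RestartCount
	}
	return restarts
}
```

With the example above, three containers at 50 restarts each yield a total of 150, which exceeds the threshold of 100.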