Pods that don't pass the nodeFit condition currently produce an
unsuppressible error-level log message. This changes the log level to
info, since failing nodeFit is a normal operating condition.
Signed-off-by: Antoine Deschênes <antoine.deschenes@linux.com>
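A minimal sketch of the kind of change, assuming the descheduler logs through klog; the message text and helper name below are illustrative, not the actual call site:

```go
package nodefit

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// logNodeFitFailure is an illustrative helper (not the descheduler's actual
// function). The nodeFit failure used to be reported with klog.ErrorS, which
// cannot be filtered out by verbosity flags; logging it behind a verbosity
// level treats it as the normal operating condition it is.
func logNodeFitFailure(pod *v1.Pod) {
	// Before: klog.ErrorS(nil, "pod does not fit on any other node", "pod", klog.KObj(pod))
	klog.V(1).InfoS("pod does not fit on any other node", "pod", klog.KObj(pod))
}
```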
* Add handling for node eligibility
* Make tests buildable
* Update topologyspreadconstraint.go
* Updated failing test cases
* Squashed changes for test case addition:
  corrected function name, refactored duplicate TopoConstraint check logic,
  and added five test cases covering node eligibility scenarios
* topologySpreadConstraints e2e: `nodeTaintsPolicy` and `nodeAffinityPolicy` constraints (see the sketch below)
---------
Co-authored-by: Marc Power <marcpow@microsoft.com>
Co-authored-by: nitindagar0 <81955199+nitindagar0@users.noreply.github.com>
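For reference, the two policies exercised by that e2e test correspond to the NodeAffinityPolicy and NodeTaintsPolicy fields on the core/v1 TopologySpreadConstraint type (Kubernetes 1.25+); a minimal sketch with illustrative values:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The two node inclusion policies: "Honor" takes the node property into
// account when computing spread skew, "Ignore" does not.
var (
	honor  = v1.NodeInclusionPolicyHonor
	ignore = v1.NodeInclusionPolicyIgnore
)

// exampleConstraint spreads matching pods across zones, honoring node
// affinity and ignoring node taints; all concrete values are illustrative.
var exampleConstraint = v1.TopologySpreadConstraint{
	MaxSkew:            1,
	TopologyKey:        "topology.kubernetes.io/zone",
	WhenUnsatisfiable:  v1.DoNotSchedule,
	LabelSelector:      &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
	NodeAffinityPolicy: &honor,
	NodeTaintsPolicy:   &ignore,
}
```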
* feat: Implement preferredDuringSchedulingIgnoredDuringExecution for RemovePodsViolatingNodeAffinity
Now, the descheduler can detect and evict pods that are not optimally
allocated according to the "preferred..." node affinity. It only evicts
a pod if it can be scheduled on a node that scores higher in terms of
preferred node affinity than the current one.
This can be activated by enabling the RemovePodsViolatingNodeAffinity
plugin and passing "preferredDuringSchedulingIgnoredDuringExecution" in
the args.
For example, imagine a pod that prefers nodes with the label "key1:
value1" with a weight of 10. If this pod is scheduled on a node that
doesn't have the "key1: value1" label, but there is another node that
has this label and where the pod can potentially run, then the
descheduler will evict the pod.
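The pod from that example would carry a preferred node affinity term like the one below, expressed with the core/v1 types; only the "key1: value1" label and weight 10 come from the description above, everything else is scaffolding:

```go
package example

import v1 "k8s.io/api/core/v1"

// examplePod prefers (but does not require) nodes labeled "key1: value1"
// with weight 10. If it lands on a node without that label while another
// eligible node with the label exists, RemovePodsViolatingNodeAffinity (with
// "preferredDuringSchedulingIgnoredDuringExecution" in its args) can evict it
// so it can be rescheduled onto the higher-scoring node.
var examplePod = v1.Pod{
	Spec: v1.PodSpec{
		Affinity: &v1.Affinity{
			NodeAffinity: &v1.NodeAffinity{
				PreferredDuringSchedulingIgnoredDuringExecution: []v1.PreferredSchedulingTerm{
					{
						Weight: 10,
						Preference: v1.NodeSelectorTerm{
							MatchExpressions: []v1.NodeSelectorRequirement{
								{Key: "key1", Operator: v1.NodeSelectorOpIn, Values: []string{"value1"}},
							},
						},
					},
				},
			},
		},
	},
}
```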
Another effect of this commit is that the
RemovePodsViolatingNodeAffinity plugin will no longer remove pods that
don't fit on their current node for reasons other than violating the
node affinity. Previously, enabling this plugin could evict pods that
were running on tainted nodes without the necessary tolerations.
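In sketch form, the eviction decision described above looks like this; nodeFits and preferredAffinityScore stand in for the plugin's real helpers and are assumptions, not its actual API:

```go
package example

import v1 "k8s.io/api/core/v1"

// shouldEvictForPreferredAffinity evicts a pod only when some other node both
// fits it (so taints, resources, etc. are not the reason it is misplaced) and
// scores strictly higher for its preferred node affinity than its current node.
func shouldEvictForPreferredAffinity(
	pod *v1.Pod,
	currentNode *v1.Node,
	otherNodes []*v1.Node,
	nodeFits func(*v1.Pod, *v1.Node) bool,
	preferredAffinityScore func(*v1.Pod, *v1.Node) int32,
) bool {
	current := preferredAffinityScore(pod, currentNode)
	for _, node := range otherNodes {
		if nodeFits(pod, node) && preferredAffinityScore(pod, node) > current {
			return true
		}
	}
	return false
}
```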
This commit also fixes the wording of some tests in
node_affinity_test.go, as well as some incorrect parameters and
expectations in those tests.
* Optimization of RemovePodsViolatingNodeAffinity
Before checking if a pod can be evicted or if it can be scheduled
somewhere else, we first check if it has the corresponding nodeAffinity
field defined. Otherwise, the pod is automatically discarded as a
candidate.
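That early exit amounts to a nil/empty check on the pod's affinity fields; a sketch using the core/v1 types, with an illustrative helper name:

```go
package example

import v1 "k8s.io/api/core/v1"

// hasPreferredNodeAffinity reports whether a pod defines any preferred node
// affinity terms. Pods without them are discarded as candidates before the
// more expensive eviction and rescheduling checks run.
func hasPreferredNodeAffinity(pod *v1.Pod) bool {
	return pod.Spec.Affinity != nil &&
		pod.Spec.Affinity.NodeAffinity != nil &&
		len(pod.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution) > 0
}
```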
Apart from that, the method that calculates the weight that a pod
gives to a node based on its preferred node affinity has been
renamed to better reflect what it does.
1. Enable OTEL configuration and base framework (see the sketch below)
2. Update generated conversion spec
3. Enable Docker-based conversion and deep-copy generation
4. Fix broken unit tests
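A minimal sketch of the kind of OTEL bootstrapping item 1 refers to, using the standard OpenTelemetry Go SDK; the exporter choice and wiring are assumptions, not the project's actual configuration:

```go
package example

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initTracerProvider wires a gRPC OTLP exporter into a tracer provider and
// registers it globally; callers should Shutdown the provider on exit to
// flush any buffered spans.
func initTracerProvider(ctx context.Context) (*sdktrace.TracerProvider, error) {
	exporter, err := otlptracegrpc.New(ctx)
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	otel.SetTracerProvider(tp)
	return tp, nil
}
```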