This adds a strategy to balance pod topology domains based on the scheduler's
PodTopologySpread constraints. It attempts to find the minimum number of pods
that should be sent for eviction by comparing the largest domains in a topology
with the smallest domains in that topology.
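A minimal policy enabling this strategy might look like the sketch below; the strategy name "RemovePodsViolatingTopologySpreadConstraint" is assumed here and may differ from the final name.
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Assumed name for the topology spread balancing strategy
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
````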
Updated the README with the link to the official descheduler helm chart
on https://hub.helm.sh. This makes it easier for end users to install the
descheduler using Helm.
1. Set default CPU/memory/pods percentage thresholds to 100 (see the example policy below)
2. Stop evicting pods if any resource runs out
3. Add a thresholds verification method and limit resource percentages to [0, 100]
4. Update test cases and the README
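As an illustrative sketch, a LowNodeUtilization policy with explicit thresholds might look like this; the percentage values are arbitrary examples, not recommendations:
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these percentages are considered underutilized
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        # Nodes above any of these percentages are eviction candidates;
        # omitted values now default to 100
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
````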
The matrix has been updated with the soon-to-be-released v0.18
details. Also, clarified the descheduler and k8s version compatibility
requirements and recommendations.
This increases the specificity of the RemoveDuplicates strategy by
removing only those pods which not only have the same owner, but
also have the same list of container images. This also adds a
parameter, `ExcludeOwnerKinds`, to the RemoveDuplicates strategy
which accepts a list of Kinds. If a pod has any of these Kinds as
an owner, that pod is not considered for eviction.
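A sketch of a policy using this parameter is shown below; the exact params layout is an assumption and may differ between versions:
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
    params:
      removeDuplicates:
        # Pods owned by any of these Kinds are never treated as duplicates
        excludeOwnerKinds:
        - "ReplicaSet"
````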
The new PodLifeTime descheduler strategy can be used to evict pods that
were created more than the configured number of seconds ago.
In the example below, pods created more than 24 hours (86400 seconds) ago will be evicted.
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"PodLifeTime":
enabled: true
params:
maxPodLifeTimeSeconds: 86400
````
Compared to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity,
the strategy effectively implements the requiredDuringSchedulingRequiredDuringExecution node affinity
type on behalf of kubelets. The only addition over the kubelet is that the strategy checks whether
there is at least one other node capable of respecting the node affinity rules.
When the requiredDuringSchedulingRequiredDuringExecution node affinity type is implemented in the kubelet,
it's likely the strategy will either be removed or re-implemented. Stressing the relation with
requiredDuringSchedulingRequiredDuringExecution helps consumers of the descheduler
keep in mind that the kubelet will eventually take over this strategy once it is implemented.
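For reference, enabling the strategy might look like the following sketch; the nodeAffinityType parameter value shown is an assumption based on the node affinity types above:
````
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
````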
The k8s YAML manifests for deploying the descheduler as a k8s job were
duplicated across the "examples" and "kubernetes" directories and also
in README.md. This change consolidates the YAML manifests into the
"kubernetes" directory and simplifies the installation instructions for end
users in README.md.
Additionally, a k8s CronJob manifest has been added.
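A minimal CronJob along these lines is sketched below; the schedule, image tag, service account name, and ConfigMap name are placeholders, not the exact contents of the added manifest:
````
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: descheduler-cronjob
  namespace: kube-system
spec:
  # Run the descheduler every two minutes (illustrative schedule)
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: descheduler-sa   # placeholder service account
          restartPolicy: "Never"
          containers:
          - name: descheduler
            image: k8s.gcr.io/descheduler/descheduler:v0.18.0   # placeholder image tag
            command:
            - "/bin/descheduler"
            args:
            - "--policy-config-file"
            - "/policy-dir/policy.yaml"
            volumeMounts:
            - mountPath: /policy-dir
              name: policy-volume
          volumes:
          - name: policy-volume
            configMap:
              name: descheduler-policy-configmap   # placeholder ConfigMap name
````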
When a pod has this annotation, the descheduler will always try to evict it.
Users can thereby control which pods (only pods with this annotation) will be evicted.
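For illustration, assuming the annotation in question is descheduler.alpha.kubernetes.io/evict, a pod opting in would look roughly like this:
````
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    # Assumed annotation key; marks the pod as always evictable by the descheduler
    descheduler.alpha.kubernetes.io/evict: "true"
spec:
  containers:
  - name: app
    image: nginx
````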
Signed-off-by: ksimon1 <ksimon@redhat.com>