capacity scheduling not working #847

Open
13567436138 opened this issue Jan 1, 2025 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.


Area

  • [x] Scheduler
  • [ ] Controller
  • [ ] Helm Chart
  • [ ] Documents

Other components

No response

What happened?

root@k8s-master01:~# cat /etc/kubernetes/sched-cc.yaml 
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: false
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
- schedulerName: default-scheduler
  plugins:
    multiPoint:
      enabled:
      - name: CapacityScheduling
    postFilter:
      enabled:
      - name: CapacityScheduling
      disabled:
      - name: "*"
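
If CapacityScheduling were not compiled into the image, the scheduler would fail to start as soon as the profile referenced the unknown plugin, so confirming the pod is Running with the expected image is a useful sanity check (a sketch, assuming the kubeadm static-pod name kube-scheduler-k8s-master01):

kubectl -n kube-system get pod kube-scheduler-k8s-master01 -o jsonpath='{.spec.containers[0].image}'
kubectl -n kube-system logs kube-scheduler-k8s-master01 | grep -i capacity
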
root@k8s-master01:~# cat /etc/kubernetes/manifests/kube-scheduler.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
    #- command:
  - args:
    #- kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    #- --leader-elect=false
    - --config=/etc/kubernetes/sched-cc.yaml
    #image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.0
    image: registry.k8s.io/scheduler-plugins/kube-scheduler:v0.29.7
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/sched-cc.yaml
      name: sched-cc
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/sched-cc.yaml
      type: FileOrCreate
    name: sched-cc
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
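
The status.used fields in the quota listing below are populated, which suggests the capacity-scheduling controller is reconciling the ElasticQuota objects; even so, checking the CRD and the controller directly rules out one failure mode (a sketch, assuming the controller was installed from the project's Helm chart into its default scheduler-plugins namespace):

kubectl get crd elasticquotas.scheduling.x-k8s.io
kubectl -n scheduler-plugins get deployments
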
root@k8s-master01:~# kubectl get elasticquotas.scheduling.x-k8s.io -A -o yaml
apiVersion: v1
items:
- apiVersion: scheduling.x-k8s.io/v1alpha1
  kind: ElasticQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"scheduling.x-k8s.io/v1alpha1","kind":"ElasticQuota","metadata":{"annotations":{},"name":"quota1","namespace":"quota1"},"spec":{"max":{"cpu":2},"min":{"cpu":0}}}
    creationTimestamp: "2025-01-01T07:08:34Z"
    generation: 1
    name: quota1
    namespace: quota1
    resourceVersion: "6042249"
    uid: e2069c8e-b3ab-4eda-afd5-1ba5cd3c074d
  spec:
    max:
      cpu: 2
    min:
      cpu: 0
  status:
    used:
      cpu: "1"
- apiVersion: scheduling.x-k8s.io/v1alpha1
  kind: ElasticQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"scheduling.x-k8s.io/v1alpha1","kind":"ElasticQuota","metadata":{"annotations":{},"name":"quota2","namespace":"quota2"},"spec":{"max":{"cpu":2},"min":{"cpu":0}}}
    creationTimestamp: "2025-01-01T07:08:41Z"
    generation: 1
    name: quota2
    namespace: quota2
    resourceVersion: "6042438"
    uid: a4877d80-cc65-4165-981e-740c1146cc40
  spec:
    max:
      cpu: 2
    min:
      cpu: 0
  status:
    used:
      cpu: "1"
- apiVersion: scheduling.x-k8s.io/v1alpha1
  kind: ElasticQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"scheduling.x-k8s.io/v1alpha1","kind":"ElasticQuota","metadata":{"annotations":{},"name":"quota3","namespace":"quota3"},"spec":{"max":{"cpu":2},"min":{"cpu":1}}}
    creationTimestamp: "2025-01-01T07:08:49Z"
    generation: 1
    name: quota3
    namespace: quota3
    resourceVersion: "6041410"
    uid: 1e564f9a-e6e4-41bd-ae66-01cf27f73ce7
  spec:
    max:
      cpu: 2
    min:
      cpu: 1
  status:
    used:
      cpu: "0"
kind: List
metadata:
  resourceVersion: ""
root@k8s-master01:~# kubectl get pod -n quota1 
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86db5c6ff6-d749x   1/1     Running   0          19m
root@k8s-master01:~# kubectl get pod -n quota2
NAME                     READY   STATUS    RESTARTS   AGE
nginx-657f665bb9-22jsh   1/1     Running   0          19m
root@k8s-master01:~# kubectl get pod -n quota3
NAME                     READY   STATUS    RESTARTS   AGE
nginx-657f665bb9-jx8t8   0/1     Pending   0          11m
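
The nginx Deployment manifests were not posted; a minimal sketch of what would produce this state, assuming all three Deployments are identical apart from the namespace and each pod requests exactly 1 CPU (consistent with the used values above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: quota1  # hypothetical reconstruction; likewise quota2 and quota3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 1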

Why is the quota2 pod not Pending, and why does the quota3 pod not preempt the quota1 pod?

What did you expect to happen?

The quota2 pod should be Pending.
The quota3 pod should preempt the quota1 pod.
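
This expectation follows from the ElasticQuota semantics: min is a guaranteed amount, max is a ceiling, and a pending pod in a namespace still below its min should be able to preempt pods in namespaces running above their min. Summarizing the quotas above:

namespace   min   max   used   position
quota1      0     2     1      above min → preemptible
quota2      0     2     1      above min → preemptible
quota3      1     2     0      below min → should trigger preemption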

How can we reproduce it (as minimally and precisely as possible)?

No response

Anything else we need to know?

No response

Kubernetes version

1.31

Scheduler Plugins version

v0.29.7
13567436138 added the kind/bug label on Jan 1, 2025