kubernetes - K8S groundnuty/k8s-wait-for image fails to start as an init container (with Helm)

Tags: kubernetes kubernetes-helm

I'm having a problem with the groundnuty/k8s-wait-for image. The project is on GitHub and the image repo is on Docker Hub.

I'm fairly sure there is an error in the command arguments, because the init container fails with Init:CrashLoopBackOff.

About the image: it is meant for init containers, for cases where a Pod's deployment needs to be delayed. The script in the image waits for a Pod or Job to complete, and once it has completed it lets the main container and all replicas start.

In my case it should wait for the Job named {{ .Release.Name }}-os-server-migration-{{ .Release.Revision }} to finish; once it detects that the Job has completed, it should let the main container start. Helm templates are used.

As I understand it, the Job name {{ .Release.Name }}-os-server-migration-{{ .Release.Revision }} and the second command argument of the init container in deployment.yml have to be identical, so that the init container can depend on the specified Job. Does anyone have other opinions or experience with this approach?
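For illustration, with a hypothetical release named my-release at revision 1, the init container arguments in the deployment template below would render to roughly:

args:
  - "job"
  - "my-release-os-server-migration-1"

which is the Job name the wait-for script then polls until that Job completes.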

The templates are attached below.

deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-os-{{ .Release.Revision }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Values.fullname }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.fullname }}
  template:
    metadata:
      labels:
        app: {{ .Values.fullname }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      initContainers:
        - name: "{{ .Chart.Name }}-init"
          image: "groundnuty/k8s-wait-for:v1.3"
          imagePullPolicy: "{{ .Values.init.pullPolicy }}"
          args:
            - "job"
            - "{{ .Release.Name }}-os-server-migration-{{ .Release.Revision }}"

job.yml:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-os-server-migration-{{ .Release.Revision }}
  namespace: {{ .Values.migration.namespace }}
spec:
  backoffLimit: {{ .Values.migration.backoffLimit }}
  template:
    spec:
      {{- with .Values.migration.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Values.migration.fullname }}
          image: "{{ .Values.migration.image.repository }}:{{ .Values.migration.image.tag }}"
          imagePullPolicy: {{ .Values.migration.image.pullPolicy }}
          command:
            - sh
            - /app/migration-entrypoint.sh
      restartPolicy: {{ .Values.migration.restartPolicy }}
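Independently of the init container, the Job's completion can also be checked by hand with kubectl, which is a quick way to confirm the exact name the init container should wait for (the Job name here follows the rendered example above and is an assumption):

kubectl get jobs -n development
kubectl wait --for=condition=complete job/my-release-os-server-migration-1 -n development --timeout=120s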

Logs:

  Normal   Scheduled  46s                default-scheduler  Successfully assigned development/octopus-dev-release-os-1-68cb9549c8-7jggh to minikube
  Normal   Pulled     41s                kubelet            Successfully pulled image "groundnuty/k8s-wait-for:v1.3" in 4.277517553s
  Normal   Pulled     36s                kubelet            Successfully pulled image "groundnuty/k8s-wait-for:v1.3" in 3.083126925s
  Normal   Pulling    20s (x3 over 45s)  kubelet            Pulling image "groundnuty/k8s-wait-for:v1.3"
  Normal   Created    18s (x3 over 41s)  kubelet            Created container os-init
  Normal   Started    18s (x3 over 40s)  kubelet            Started container os-init
  Normal   Pulled     18s                kubelet            Successfully pulled image "groundnuty/k8s-wait-for:v1.3" in 1.827195139s
  Warning  BackOff    4s (x4 over 33s)   kubelet            Back-off restarting failed container

kubectl get all -n development

NAME                                                        READY   STATUS                  RESTARTS   AGE
pod/octopus-dev-release-os-1-68cb9549c8-7jggh   0/1     Init:CrashLoopBackOff   2          44s
pod/octopus-dev-release-os-1-68cb9549c8-9qbdv   0/1     Init:CrashLoopBackOff   2          44s
pod/octopus-dev-release-os-1-68cb9549c8-c8h5k   0/1     Init:Error              2          44s
pod/octopus-dev-release-os-migration-1-9wq76    0/1     Completed               0          44s
......
......
NAME                                                       COMPLETIONS   DURATION   AGE
job.batch/octopus-dev-release-os-migration-1   1/1           26s        44s
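The events above only show the back-off itself; the underlying error message is in the init container's own log, which can be read with (pod and container names taken from the output above):

kubectl logs pod/octopus-dev-release-os-1-68cb9549c8-7jggh -c os-init -n development

In this case that log would have shown why the API call made inside the image was failing.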

Best answer

For anyone running into the same issue, here is how I fixed it.

The problem was that the containers in deployment.yaml had no permission to use the Kubernetes API. Because of that, the groundnuty/k8s-wait-for:v1.3 container could not check whether the Job {{ .Release.Name }}-os-server-migration-{{ .Release.Revision }} had completed or not. That is why the init container failed immediately and went into CrashLoopBackOff.
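A quick way to confirm this kind of permission problem is kubectl auth can-i, impersonating the service account the pod runs as (the namespace's default account unless one is set explicitly; the namespace and account names here just follow this example):

kubectl auth can-i get jobs --as=system:serviceaccount:development:default -n development
kubectl auth can-i get jobs --as=system:serviceaccount:development:sa-migration -n development

Before the fix the first command answers "no"; with the RBAC objects below in place, the second answers "yes".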

After adding a ServiceAccount, Role, and RoleBinding, everything worked fine, and groundnuty/k8s-wait-for:v1.3 successfully waited for the migration Job to complete before letting the main container run.

Below are the ServiceAccount, Role, and RoleBinding manifests that solved the problem.

sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-migration
  namespace: development

role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: migration-reader
rules:
  - apiGroups: ["batch","extensions"]
    resources: ["jobs"]
    verbs: ["get","watch","list"]

rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: migration-reader
subjects:
- kind: ServiceAccount
  name: sa-migration
roleRef:
  kind: Role
  name: migration-reader
  apiGroup: rbac.authorization.k8s.io
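These objects only take effect for pods that actually run under that ServiceAccount, so the Deployment's pod template also has to reference it. That wiring is not shown in the answer above, so the following snippet is an assumption about the remaining change to deployment.yml:

spec:
  template:
    spec:
      serviceAccountName: sa-migration
      # initContainers and containers as in deployment.yml above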

This question about kubernetes - K8S groundnuty/k8s-wait-for image fails to start as an init container (with Helm) was originally asked on Stack Overflow: https://stackoverflow.com/questions/69189695/
