kubernetes - Access denied when pulling a private registry image with Helm, the gitlab-runner Helm chart and a CI job

Tags: kubernetes gitlab-ci gitlab-ci-runner kubernetes-helm

I have a Kubernetes cluster with 1 master and 2 workers. All nodes have their own IP addresses. Let's call them:

  • master-0
  • worker-0
  • worker-1

    Network pod policies and communication between all my nodes are set up correctly and everything works. I mention this infrastructure only to be more specific about my situation.

    Using Helm, I created a chart that deploys a basic nginx. It is a Docker image that I built and pushed to my private GitLab registry.

    Using GitLab CI, I created a job that uses two functions:
    # Init helm client on k8s cluster for using helm with gitlab runner
    function init_helm() {
      docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
      mkdir -p /etc/deploy
      echo ${kube_config} | base64 -d > ${KUBECONFIG}
      kubectl config use-context ${K8S_CURRENT_CONTEXT}
      helm init --client-only
      helm repo add stable https://kubernetes-charts.storage.googleapis.com/
      helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
      helm repo update
    }
    
    # Deploy latest tagged image on k8s cluster
    function deploy_k8s_cluster() {
      echo "Create and apply secret for docker gitlab runner access to gitlab private registry ..."
      kubectl create secret -n "$KUBERNETES_NAMESPACE_OVERWRITE" \
        docker-registry gitlab-registry \
        --docker-server="https://registry.gitlab.com/v2/" \
        --docker-username="${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \
        --docker-password="${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" \
        --docker-email="$GITLAB_USER_EMAIL" \
        -o yaml --dry-run | kubectl replace -n "$KUBERNETES_NAMESPACE_OVERWRITE" --force -f -
      echo "Build helm dependencies in $CHART_TEMPLATE"
      cd $CHART_TEMPLATE/
      helm dep build
      export DEPLOYS="$(helm ls | grep $PROJECT_NAME | wc -l)"
      if [[ ${DEPLOYS} -eq 0 ]]; then
        echo "Creating the new chart ..."
        helm install --name ${PROJECT_NAME} --namespace=${KUBERNETES_NAMESPACE_OVERWRITE} . -f values.yaml
      else
        echo "Updating the chart ..."
        helm upgrade ${PROJECT_NAME} --namespace=${KUBERNETES_NAMESPACE_OVERWRITE} . -f values.yaml
      fi
    }
    

    The first function lets the GitLab runner log in to Docker, initialize Helm and configure kubectl. The second one deploys my image on the cluster.
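
    A minimal sketch of a deploy job that could wire these two functions together (the stage, the helper image and the ci/functions.sh path are assumptions, not the actual pipeline):
    # .gitlab-ci.yml (sketch)
    deploy:
      stage: deploy
      image: devth/helm:v2.12.3           # assumption: any image shipping docker, kubectl and helm v2 will do
      variables:
        KUBECONFIG: /etc/deploy/config    # assumption: matches the /etc/deploy directory created in init_helm
      before_script:
        - source ./ci/functions.sh        # hypothetical file defining init_helm and deploy_k8s_cluster
      script:
        - init_helm
        - deploy_k8s_cluster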

    The whole process works fine: my job passes in GitLab CI and no error occurs, except when the pod is deployed.

    Indeed, I get this error:
    Failed to pull image "registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.gitlab.com/v2/path/to/repo/project/image/manifests/image:TAG_NUMBER: denied: access forbidden
    

    More specifically, I am using the gitlab-runner Helm chart. Here is the chart's configuration:
    ## GitLab Runner Image
    ##
    ## By default it's using gitlab/gitlab-runner:alpine-v{VERSION}
    ## where {VERSION} is taken from Chart.yaml from appVersion field
    ##
    ## ref: https://hub.docker.com/r/gitlab/gitlab-runner/tags/
    ##
    # image: gitlab/gitlab-runner:alpine-v11.6.0
    
    ## Specify a imagePullPolicy
    ## 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    imagePullPolicy: IfNotPresent
    
    ## The GitLab Server URL (with protocol) that want to register the runner against
    ## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-register
    ##
    gitlabUrl: https://gitlab.com/
    
    ## The Registration Token for adding new Runners to the GitLab Server. This must
    ## be retrieved from your GitLab Instance.
    ## ref: https://docs.gitlab.com/ce/ci/runners/README.html#creating-and-registering-a-runner
    ##
    runnerRegistrationToken: "<token>"
    
    ## The Runner Token for adding new Runners to the GitLab Server. This must
    ## be retrieved from your GitLab Instance. It is token of already registered runner.
    ## ref: (we don't yet have docs for that, but we want to use existing token)
    ##
    # runnerToken: ""
    #
    ## Unregister all runners before termination
    ##
    ## Updating the runner's chart version or configuration will cause the runner container
    ## to be terminated and created again. This may cause your Gitlab instance to reference
    ## non-existent runners. Un-registering the runner before termination mitigates this issue.
    ## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-unregister
    ##
    unregisterRunners: true
    
    ## Set the certsSecretName in order to pass custom certificates for GitLab Runner to use
    ## Provide resource name for a Kubernetes Secret Object in the same namespace,
    ## this is used to populate the /etc/gitlab-runner/certs directory
    ## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates
    ##
    # certsSecretName:
    
    ## Configure the maximum number of concurrent jobs
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    concurrent: 10
    
    ## Defines in seconds how often to check GitLab for new builds
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    checkInterval: 30
    
    ## Configure GitLab Runner's logging level. Available values are: debug, info, warn, error, fatal, panic
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    # logLevel:
    
    ## For RBAC support:
    rbac:
      create: true
    
      ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs
      ## cluster-wide or only within namespace
      clusterWideAccess: true
    
      ## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create)
      ##
      serviceAccountName: default
    
    ## Configure integrated Prometheus metrics exporter
    ## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
    metrics:
      enabled: true
    
    ## Configuration for the Pods that the runner launches for each new job
    ##
    runners:
      ## Default container image to use for builds when none is specified
      ##
      image: ubuntu:16.04
    
      ## Specify one or more imagePullSecrets
      ##
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ##
      imagePullSecrets: ["namespace-1", "namespace-2", "default"]
    
      ## Specify the image pull policy: never, if-not-present, always. The cluster default will be used if not set.
      ##
      # imagePullPolicy: ""
    
      ## Specify whether the runner should be locked to a specific project: true, false. Defaults to true.
      ##
      # locked: true
    
      ## Specify the tags associated with the runner. Comma-separated list of tags.
      ##
      ## ref: https://docs.gitlab.com/ce/ci/runners/#using-tags
      ##
      tags: "my-tag-1, my-tag-2"
    
      ## Run all containers with the privileged flag enabled
      ## This will allow the docker:dind image to run if you need to run Docker
      ## commands. Please read the docs before turning this on:
      ## ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-dind
      ##
      privileged: true
    
      ## The name of the secret containing runner-token and runner-registration-token
      # secret: gitlab-runner
    
      ## Namespace to run Kubernetes jobs in (defaults to the same namespace of this release)
      ##
      # namespace:
    
      # Regular expression to validate the contents of the namespace overwrite environment variable (documented following).
      # When empty, it disables the namespace overwrite feature
      namespace_overwrite_allowed: overrided-namespace-*
    
      ## Distributed runners caching
      ## ref: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/autoscale.md#distributed-runners-caching
      ##
      ## If you want to use s3 based distributing caching:
      ## First of all you need to uncomment General settings and S3 settings sections.
      ##
      ## Create a secret 's3access' containing 'accesskey' & 'secretkey'
      ## ref: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
      ##
      ## $ kubectl create secret generic s3access \
      ##   --from-literal=accesskey="YourAccessKey" \
      ##   --from-literal=secretkey="YourSecretKey"
      ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
      ##
      ## If you want to use gcs based distributing caching:
      ## First of all you need to uncomment General settings and GCS settings sections.
      ##
      ## Access using credentials file:
      ## Create a secret 'google-application-credentials' containing your application credentials file.
      ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
      ## You could configure
      ## $ kubectl create secret generic google-application-credentials \
      ##   --from-file=gcs-applicaton-credentials-file=./path-to-your-google-application-credentials-file.json
      ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
      ##
      ## Access using access-id and private-key:
      ## Create a secret 'gcsaccess' containing 'gcs-access-id' & 'gcs-private-key'.
      ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
      ## You could configure
      ## $ kubectl create secret generic gcsaccess \
      ##   --from-literal=gcs-access-id="YourAccessID" \
      ##   --from-literal=gcs-private-key="YourPrivateKey"
      ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
      cache: {}
        ## General settings
        # cacheType: s3
        # cachePath: "cache"
        # cacheShared: true
    
        ## S3 settings
        # s3ServerAddress: s3.amazonaws.com
        # s3BucketName:
        # s3BucketLocation:
        # s3CacheInsecure: false
        # secretName: s3access
    
        ## GCS settings
        # gcsBucketName:
        ## Use this line for access using access-id and private-key
        # secretName: gcsaccess
        ## Use this line for access using google-application-credentials file
        # secretName: google-application-credential
    
      ## Build Container specific configuration
      ##
      builds:
        # cpuLimit: 200m
        # memoryLimit: 256Mi
        cpuRequests: 100m
        memoryRequests: 128Mi
    
      ## Service Container specific configuration
      ##
      services:
        # cpuLimit: 200m
        # memoryLimit: 256Mi
        cpuRequests: 100m
        memoryRequests: 128Mi
    
      ## Helper Container specific configuration
      ##
      helpers:
        # cpuLimit: 200m
        # memoryLimit: 256Mi
        cpuRequests: 100m
        memoryRequests: 128Mi
        image: gitlab/gitlab-runner-helper:x86_64-latest
    
      ## Service Account to be used for runners
      ##
      # serviceAccountName:
    
      ## If Gitlab is not reachable through $CI_SERVER_URL
      ##
      # cloneUrl:
    
      ## Specify node labels for CI job pods assignment
      ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
      ##
      nodeSelector: {}
        # gitlab: true
    
    ## Configure resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      # limits:
      #   memory: 256Mi
      #   cpu: 200m
      requests:
        memory: 128Mi
        cpu: 100m
    
    ## Affinity for pod assignment
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
    
    ## Node labels for pod assignment
    ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}
      # Example: The gitlab runner manager should not run on spot instances so you can assign
      # them to the regular worker nodes only.
      # node-role.kubernetes.io/worker: "true"
    
    ## List of node taints to tolerate (requires Kubernetes >= 1.6)
    ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
      # Example: Regular worker nodes may have a taint, thus you need to tolerate the taint
      # when you assign the gitlab runner manager with nodeSelector or affinity to the nodes.
      # - key: "node-role.kubernetes.io/worker"
      #   operator: "Exists"
    
    ## Configure environment variables that will be present when the registration command runs
    ## This provides further control over the registration process and the config.toml file
    ## ref: `gitlab-runner register --help`
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html
    ##
    envVars:
      - name: RUNNER_EXECUTOR
        value: kubernetes
    
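    For completeness, a values file like this is typically applied with Helm v2 along these lines (the release name and namespace are assumptions):
    helm repo add gitlab https://charts.gitlab.io
    helm install --name gitlab-runner --namespace gitlab-managed-apps \
      -f values.yaml gitlab/gitlab-runner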

    As you can see, I create a secret in my CI job, and no error occurs there either. In my chart, I declare that same secret (by its name) in the values.yaml file, which lets deployment.yaml use it.
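
    A typical way such a secret is wired from values.yaml into the deployment template looks like the sketch below; the image.pullSecret key name is an assumption, not the chart's exact key:
    # values.yaml (assumed keys)
    image:
      repository: registry.gitlab.com/path/to/repo/project/image
      tag: TAG_NUMBER
      pullSecret: gitlab-registry          # must match the secret created by the CI job

    # templates/deployment.yaml (excerpt)
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: {{ .Values.image.pullSecret }}
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"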

    So I don't understand where I went wrong. Why do I get this error?

    Best Answer

    Expanding on my last comment, I suppose the TAG_NUMBER variable is defined somewhere in your GitLab CI job. However, you may not be getting authorized with the credentials assigned to the --docker-username and --docker-password docker flags. Have you checked the credentials used to connect to the docker registry? Alternatively, it could be the way secrets are managed in the GitLab Runner Helm chart templates.
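
    For example, one way to verify that point is to decode the secret the CI job created and to try the same credentials by hand (namespace, secret and variable names follow the job above):
    # Decode the .dockerconfigjson of the secret created by the CI job
    kubectl get secret gitlab-registry -n "$KUBERNETES_NAMESPACE_OVERWRITE" \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

    # Try the same credentials manually against the registry
    docker login -u "${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" -p "${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" registry.gitlab.com
    docker pull registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER

    If the manual pull is also denied, the credentials themselves (for example a deploy token missing the read_registry scope) are a more likely culprit than the chart.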

    About "kubernetes - Access denied when pulling a private registry image with Helm, the gitlab-runner Helm chart and a CI job", see the original question on Stack Overflow: https://stackoverflow.com/questions/54482596/
