postgresql - Installing kong-ingress-controller to manage ingress on Kubernetes

Tags: postgresql yaml kubernetes-ingress azure-aks

I am installing the kong ingress controller on my AKS cluster, but I don't want to run the Postgres StatefulSet service inside the cluster. Instead, I have a Postgres database in my Azure infrastructure, and I want to connect to it from the kong-ingress-controller deployment by creating the Postgres credentials as a secret in my AKS cluster and exposing them through environment variables.

I have already created the secret:

⟩ kubectl create secret generic az-pg-db-user-pass --from-literal=username='az-pg-username' --from-literal=password='az-pg-password' --namespace kong 
secret/az-pg-db-user-pass created
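
Equivalently, the same secret could be declared as a manifest (a minimal sketch using stringData so the values don't need to be base64-encoded by hand; the values shown are just placeholders for the real credentials):

apiVersion: v1
kind: Secret
metadata:
  name: az-pg-db-user-pass
  namespace: kong
type: Opaque
stringData:                    # stringData takes plain text; Kubernetes encodes it on write
  username: az-pg-username     # placeholder for the real Azure Postgres user
  password: az-pg-password     # placeholder for the real password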

In my kongwithingress.yaml file I have the deployment manifests, which I share in this gist link so as not to fill the question body with many lines of YAML.

That gist is based on this AKS deployment, but with the Postgres StatefulSet and Service removed for the reasons above: my goal is to connect to my own Azure-managed Postgres service.

Throughout the gist I have wired the az-pg-db-user-pass generic secret created above into the kong-ingress-controller deployment, my kong deployment and my kong-migrations job, in order to create environment variables such as the following:

KONG_PG_USERNAME
KONG_PG_PASSWORD

These environment variables have been created and referenced from the secret in the kong-ingress-controller deployment, the kong deployment and the kong-migrations job, all of which need to access or connect to the Postgres database.
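
For reference, this is roughly how those variables are wired up inside each container spec in the gist (a sketch, consistent with the describe output further below; only the secret name and keys are guaranteed to match what was created above):

# inside each container spec (kong, kong-ingress-controller, kong-migrations):
env:
- name: KONG_PG_HOST
  value: "zcrm365-postgresql1.postgres.database.azure.com"   # my Azure-managed Postgres server
- name: KONG_PG_USERNAME
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass   # the generic secret created earlier
      key: username
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: password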

When I execute the kubectl apply -f kongwithingres.yaml command, I get the following output:

The kong-ingress-controller deployment, the kong deployment and the kong-migrations job are created successfully:

⟩ kubectl apply -f kongwithingres.yaml 
namespace/kong unchanged
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com unchanged
serviceaccount/kong-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/kong-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding unchanged
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created
[I] 

But their respective pods have a CrashLoopBackOff status:

NAME                                          READY   STATUS                  RESTARTS   AGE
pod/kong-d8b88df99-j6hvl                      0/1     Init:CrashLoopBackOff   5          4m24s
pod/kong-ingress-controller-984fc9666-cd2b5   0/2     Init:CrashLoopBackOff   5          4m24s
pod/kong-migrations-t6n7p                     0/1     CrashLoopBackOff        5          4m24s

I checked the logs of each pod and found the following:

pod/kong-d8b88df99-j6hvl:

⟩ kubectl logs pod/kong-d8b88df99-j6hvl -p -n kong 
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-j6hvl" not found

In its describe output, this pod is picking up the environment variables and the image:

⟩ kubectl describe pod/kong-d8b88df99-j6hvl -n kong
Name:               kong-d8b88df99-j6hvl
Namespace:          kong

Status:             Pending
IP:                 10.244.1.18
Controlled By:      ReplicaSet/kong-d8b88df99
Init Containers:
  wait-for-migrations:
    Container ID:  docker://7007a89ada215daf853ec103d79dca60ccc5fb3a14c51ac6c5c56655da6da62f
    Image:         kong:1.0.0
    Image ID:      docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations list
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 26 Feb 2019 16:25:01 +0100
      Finished:     Tue, 26 Feb 2019 16:25:01 +0100
    Ready:          False
    Restart Count:  6
    Environment:
      KONG_ADMIN_LISTEN:      off
      KONG_PROXY_LISTEN:      off
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_PG_HOST:           zcrm365-postgresql1.postgres.database.azure.com
      KONG_PG_USERNAME:       <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:       <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Containers:
  kong-proxy:
    Container ID:   
    Image:          kong:1.0.0
    Image ID:       
    Ports:          8000/TCP, 8443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_PG_USERNAME:              <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:              <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_HOST:                  zcrm365-postgresql1.postgres.database.azure.com
      KONG_PROXY_ACCESS_LOG:         /dev/stdout
      KONG_PROXY_ERROR_LOG:          /dev/stderr
      KONG_ADMIN_LISTEN:             off
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-gnkjq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gnkjq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                             Message
  ----     ------     ----                    ----                             -------
  Normal   Scheduled  8m44s                   default-scheduler                Successfully assigned kong/kong-d8b88df99-j6hvl to aks-default-75800594-1
  Normal   Pulled     7m9s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Container image "kong:1.0.0" already present on machine
  Normal   Created    7m8s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Created container
  Normal   Started    7m7s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Started container
  Warning  BackOff    3m34s (x26 over 8m38s)  kubelet, aks-default-75800594-1  Back-off restarting failed container

pod/kong-ingress-controller-984fc9666-cd2b5:

⟩ kubectl logs pod/kong-ingress-controller-984fc9666-cd2b5 -p -n kong 
Error from server (BadRequest): a container name must be specified for pod kong-ingress-controller-984fc9666-cd2b5, choose one of: [admin-api ingress-controller] or one of the init containers: [wait-for-migrations]
[I]

And its respective describe output:

⟩ kubectl describe pod/kong-ingress-controller-984fc9666-cd2b5 -n kong
Name:               kong-ingress-controller-984fc9666-cd2b5
Namespace:          kong

Status:             Pending
IP:                 10.244.2.18
Controlled By:      ReplicaSet/kong-ingress-controller-984fc9666
Init Containers:
  wait-for-migrations:
    Container ID:  docker://8eb035f755322b3ac72792d922974811933ba9a71afb1f4549cfe7e0a6519619
    Image:         kong:1.0.0
    Image ID:      docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations list
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 26 Feb 2019 16:29:56 +0100
      Finished:     Tue, 26 Feb 2019 16:29:56 +0100
    Ready:          False
    Restart Count:  7
    Environment:
      KONG_ADMIN_LISTEN:      off
      KONG_PROXY_LISTEN:      off
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_PG_HOST:           zcrm365-postgresql1.postgres.database.azure.com
      KONG_PG_USERNAME:       <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:       <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Containers:
  admin-api:
    Container ID:   
    Image:          kong:1.0.0
    Image ID:       
    Port:           8001/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:8001/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8001/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KONG_PG_USERNAME:              <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:              <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_HOST:                  zcrm365-postgresql1.postgres.database.azure.com
      KONG_ADMIN_ACCESS_LOG:         /dev/stdout
      KONG_ADMIN_ERROR_LOG:          /dev/stderr
      KONG_ADMIN_LISTEN:             0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_PROXY_LISTEN:             off
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
  ingress-controller:
    Container ID:  
    Image:         kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.3.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      /kong-ingress-controller
      --kong-url=https://localhost:8444
      --admin-tls-skip-verify
      --default-backend-service=kong/kong-proxy
      --publish-service=kong/kong-proxy
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                      kong-ingress-controller-984fc9666-cd2b5 (v1:metadata.name)
      POD_NAMESPACE:                 kong (v1:metadata.namespace)
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kong-serviceaccount-token-rc4sp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kong-serviceaccount-token-rc4sp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                             Message
  ----     ------     ----                  ----                             -------
  Normal   Scheduled  12m                   default-scheduler                Successfully assigned kong/kong-ingress-controller-984fc9666-cd2b5 to aks-default-75800594-2
  Normal   Pulled     10m (x5 over 12m)     kubelet, aks-default-75800594-2  Container image "kong:1.0.0" already present on machine
  Normal   Created    10m (x5 over 12m)     kubelet, aks-default-75800594-2  Created container
  Normal   Started    10m (x5 over 12m)     kubelet, aks-default-75800594-2  Started container
  Warning  BackOff    2m14s (x49 over 12m)  kubelet, aks-default-75800594-2  Back-off restarting failed container

I don't know why the pods are in CrashLoopBackOff status, with their containers stuck in Waiting: PodInitializing.

How can I debug this behavior? Is it possible that Kong cannot talk to the Postgres database?

My AKS cluster is on Azure, as is my Postgres database, and they should be able to communicate with each other as services.

UPDATE

These are the logs of the containers of my created pods:

⟩ kubectl logs pod/kong-ingress-controller-984fc9666-w4vvn -p -n kong -c ingress-controller



Error from server (BadRequest): previous terminated container "ingress-controller" in pod "kong-ingress-controller-984fc9666-w4vvn" not found
[I] 
⟩ kubectl logs pod/kong-d8b88df99-qsq4j -p -n kong -c kong-proxy

Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-qsq4j" not found

Best Answer

My kong-ingress-controller deployment pods were in CrashLoopBackOff, and sometimes in Waiting: PodInitializing, because I had overlooked a few things, namely the following:

  • The main reason, as @Amityo pointed out, is that the kong-ingress-controller and the kong deployments have an init container called wait-for-migrations, which waits for the kong-migrations job to run before they start. From this I could tell that my Kong migrations had to be executed first.

  • But my kong-migrations job could not succeed, because I was missing the KONG_DATABASE environment variable parameter that sets up the connection.

  • The other reason my deployment was not working is that Kong internally expects the user for the Postgres connection in an environment variable called KONG_PG_USER. I had called it KONG_PG_USERNAME, which is another reason my script failed (I am not completely sure about this). See the env sketch after this list.
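
Putting both fixes together, the database-related env block in the kong containers and in the kong-migrations job ends up looking roughly like this (a sketch; KONG_DATABASE and the KONG_PG_USER rename are the actual changes, the host and secret are the same ones shown above):

env:
- name: KONG_DATABASE          # was missing before; tells Kong to use the Postgres datastore
  value: "postgres"
- name: KONG_PG_HOST
  value: "zcrm365-postgresql1.postgres.database.azure.com"
- name: KONG_PG_USER           # Kong reads KONG_PG_USER, not KONG_PG_USERNAME
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: username
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: password

With those changes in place, recreating everything from the script works: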

⟩ kubectl create -f kongwithingres.yaml  
namespace/kong created
secret/az-pg-db-user-pass created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
role.rbac.authorization.k8s.io/kong-ingress-role created
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created

By the way, to get started with Kong, I recommend installing Konga, a front-end dashboard tool for managing Kong and inspecting the things we would otherwise do through YAML files.

We have this konga.yaml install script to deploy it in our Kubernetes cluster:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: konga
  namespace: kong
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
      - env:
        - name: NODE_TLS_REJECT_UNAUTHORIZED
          value: "0"
        image: pantsel/konga:latest
        name: konga
        ports:
        - containerPort: 1337  

And we can reach the service locally on our machine via the kubectl port-forward command:

⟩ kubectl port-forward pod/konga-85b66cffff-mxq85 1337:1337 -n kong
Forwarding from 127.0.0.1:1337 -> 1337
Forwarding from [::1]:1337 -> 1337
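
As an alternative to port-forwarding straight to the pod (whose name changes on every restart), a small ClusterIP Service for Konga would let us forward to a stable name instead; this Service is not part of the konga.yaml above, just a sketch:

apiVersion: v1
kind: Service
metadata:
  name: konga
  namespace: kong
spec:
  selector:
    app: konga          # matches the label set in the konga Deployment above
  ports:
  - port: 1337
    targetPort: 1337

Then kubectl port-forward svc/konga 1337:1337 -n kong works without having to look up the pod name first.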

Regarding postgresql - Installing kong-ingress-controller to manage ingress on Kubernetes, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54889200/
