node.js - How do I resolve EADDRINUSE in a GKE container?

Tags: node.js docker kubernetes google-cloud-platform google-kubernetes-engine

I'm new to containers and GKE. I used to run my Node server app with npm run debug, and I'm trying to do the same on GKE through the container's shell. When I log into the myapp container's shell and run it, I get:

> api_server@0.0.0 start /usr/src/app
> node src/

events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: listen EADDRINUSE :::8089

Normally I would handle this with something like killall -9 node, but when I do that I seem to get kicked out of the shell and Kubernetes restarts the container. It looks like Node is already using that port, or something:

netstat -tulpn | grep 8089
tcp        0      0 :::8089                 :::*                    LISTEN      23/node

How can I start my server from the shell?

My configuration files:

Dockerfile:

FROM node:10-alpine

RUN apk add --update \
libc6-compat

WORKDIR /usr/src/app
COPY package*.json ./
COPY templates-mjml/ templates-mjml/
COPY public/ public/
COPY src/ src/
COPY data/ data/
COPY config/ config/
COPY migrations/ migrations/
ENV NODE_ENV 'development'
ENV PORT '8089'
RUN npm install --development

myapp.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 8089
    name: http
  selector:
    app: myapp    
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/myproject-224713/firstapp:v4
          ports:
            - containerPort: 8089
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password       
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=myproject-224713:europe-west4:mydatabase=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
---

myrouter.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: myapp
      weight: 100
    websocketUpgrade: true

Edit: I'm getting the following logs: [screenshot omitted]

Edit 2: After adding a feathersjs health service (a sketch of what that might look like follows the describe output below), I get this describe output:

Name:           myapp-95df4dcd6-lptnq
Namespace:      default
Node:           gke-standard-cluster-1-default-pool-59600833-pcj3/10.164.0.3
Start Time:     Wed, 02 Jan 2019 22:08:33 +0100
Labels:         app=myapp
                pod-template-hash=518908782
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myapp; cpu request for container cloudsql-proxy
                sidecar.istio.io/status:
                  {"version":"3c9617ff82c9962a58890e4fa987c69ca62487fda71c23f3a2aad1d7bb46c748","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status:         Running
IP:             10.44.3.17
Controlled By:  ReplicaSet/myapp-95df4dcd6
Init Containers:
  istio-init:
    Container ID:  docker://768b2327c6cfa57b3d25a7029e52ce6a88dec6848e91dd7edcdf9074c91ff270
    Image:         gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0
    Image ID:      docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:e30d47d2f269347a973523d0c5d7540dbf7f87d24aca2737ebc09dbe5be53134
    Port:          <none>
    Host Port:     <none>
    Args:
      -p
      15001
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      8089,
      -d

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 02 Jan 2019 22:08:34 +0100
      Finished:     Wed, 02 Jan 2019 22:08:35 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  myapp:
    Container ID:   docker://5566a3e8242ec6755dc2f26872cfb024fab42d5f64aadc3db1258fcb834f8418
    Image:          gcr.io/myproject-224713/firstapp:v4
    Image ID:       docker-pullable://gcr.io/myproject-224713/firstapp@sha256:0cbd4fae0b32fa0da5a8e6eb56cb9b86767568d243d4e01b22d332d568717f41
    Port:           8089/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 02 Jan 2019 22:09:19 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 02 Jan 2019 22:08:35 +0100
      Finished:     Wed, 02 Jan 2019 22:09:19 +0100
    Ready:          False
    Restart Count:  1
    Requests:
      cpu:      100m
    Liveness:   http-get http://:8089/health delay=15s timeout=20s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8089/health delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POSTGRES_DB_HOST:      127.0.0.1:5432
      POSTGRES_DB_USER:      <set to the key 'username' in secret 'mysecret'>  Optional: false
      POSTGRES_DB_PASSWORD:  <set to the key 'password' in secret 'mysecret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
  cloudsql-proxy:
    Container ID:  docker://414799a0699abe38c9759f82a77e1a3e06123714576d6d57390eeb07611f9a63
    Image:         gcr.io/cloudsql-docker/gce-proxy:1.11
    Image ID:      docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
    Port:          <none>
    Host Port:     <none>
    Command:
      /cloud_sql_proxy
      -instances=myproject-224713:europe-west4:osm=tcp:5432
      -credential_file=/secrets/cloudsql/credentials.json
    State:          Running
      Started:      Wed, 02 Jan 2019 22:08:36 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /secrets/cloudsql from cloudsql-instance-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
  istio-proxy:
    Container ID:  docker://898bc95c6f8bde18814ef01ce499820d545d7ea2d8bf494b0308f06ab419041e
    Image:         gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0
    Image ID:      docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:826ef4469e4f1d4cabd0dc846f9b7de6507b54f5f0d0171430fcd3fb6f5132dc
    Port:          <none>
    Host Port:     <none>
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      myapp
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15007
      --discoveryRefreshDelay
      1s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --statsdUdpAddress
      istio-statsd-prom-bridge.istio-system:9125
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      NONE
    State:          Running
      Started:      Wed, 02 Jan 2019 22:08:36 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      POD_NAME:                      myapp-95df4dcd6-lptnq (v1:metadata.name)
      POD_NAMESPACE:                 default (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      ISTIO_META_POD_NAME:           myapp-95df4dcd6-lptnq (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  cloudsql-instance-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cloudsql-instance-credentials
    Optional:    false
  default-token-9vtz5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9vtz5
    Optional:    false
  istio-envoy:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.default
    Optional:    true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                                                        Message
  ----     ------                 ----               ----                                                        -------
  Normal   Scheduled              68s                default-scheduler                                           Successfully assigned myapp-95df4dcd6-lptnq to gke-standard-cluster-1-default-pool-59600833-pcj3
  Normal   SuccessfulMountVolume  68s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  MountVolume.SetUp succeeded for volume "istio-envoy"
  Normal   SuccessfulMountVolume  68s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  MountVolume.SetUp succeeded for volume "default-token-9vtz5"
  Normal   SuccessfulMountVolume  68s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  MountVolume.SetUp succeeded for volume "cloudsql-instance-credentials"
  Normal   SuccessfulMountVolume  68s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  MountVolume.SetUp succeeded for volume "istio-certs"
  Normal   Pulled                 67s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Container image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" already present on machine
  Normal   Created                67s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Created container
  Normal   Started                67s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Started container
  Normal   Pulled                 66s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
  Normal   Created                66s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Created container
  Normal   Started                66s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Started container
  Normal   Created                65s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Created container
  Normal   Started                65s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Started container
  Normal   Pulled                 65s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Container image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" already present on machine
  Normal   Created                65s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Created container
  Normal   Started                65s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Started container
  Warning  Unhealthy              31s (x4 over 61s)  kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Readiness probe failed: HTTP probe failed with statuscode: 404
  Normal   Pulled                 22s (x2 over 66s)  kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Container image "gcr.io/myproject-224713/firstapp:v4" already present on machine
  Warning  Unhealthy              22s (x3 over 42s)  kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing                22s                kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3  Killing container with id docker://myapp:Container failed liveness probe.. Container will be killed and recreated.
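
For reference, a feathersjs health service like the one mentioned in Edit 2 might look roughly like this. This is a hypothetical sketch: the /health path matches the probes shown above, but the service body itself is an assumption, not taken from the actual app.

    // Hypothetical service so the HTTP liveness/readiness probes have a /health endpoint to hit.
    // Registering a plain object as a Feathers service routes GET /health to find().
    app.use('/health', {
      async find() {
        return { status: 'ok' };
      }
    });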

Best Answer

This is simply how Kubernetes works: as long as the process in your Pod is running, the Pod stays in the "Running" state. Once you kill that process, Kubernetes restarts the Pod, because from its point of view the container crashed or something went wrong.
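
The mechanism behind this, assuming a standard Deployment like the one above, is the Pod's restartPolicy: Deployments only permit Always (which is also the default), so the kubelet recreates the container as soon as its main process exits. A minimal sketch of the implicit field:

    spec:
      template:
        spec:
          # Deployments only allow restartPolicy: Always (also the default),
          # so killing the node process makes the kubelet restart the container.
          restartPolicy: Always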

If you really do want to debug with npm run debug, consider one of the following:

  1. Build a container whose Dockerfile ends with a CMD (or ENTRYPOINT) of npm run debug, and then run it via your Deployment definition in Kubernetes (see the Dockerfile sketch after this list).

  2. Override the command of the myapp container in your Deployment definition with the following:

    spec:
      containers:
        - name: myapp
          image: gcr.io/myproject-224713/firstapp:v4
          ports:
            - containerPort: 8089
          command: ["npm", "run", "debug" ]
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password   
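
For option 1, a minimal sketch of the Dockerfile change: a single line appended to the Dockerfile shown in the question, assuming package.json defines a debug script.

    # Hypothetical final line: start the app in debug mode by default,
    # so the Deployment runs `npm run debug` without a command override.
    CMD ["npm", "run", "debug"]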
    

Regarding "node.js - How do I resolve EADDRINUSE in a GKE container?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54007478/
