kubernetes - Unable to connect to Kafka brokers

Tags: kubernetes apache-kafka confluent-platform

I have deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my local k8s cluster.
I am trying to expose it using the nginx ingress controller's TCP services support.

My TCP nginx ConfigMap looks like:

data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092

and I have created the corresponding entries in my nginx ingress controller:
  - name: <zookeper-tcp-port>-tcp
    port: <zookeper-tcp-port>
    protocol: TCP
    targetPort: <zookeper-tcp-port>-tcp
  - name: <kafka-tcp-port>-tcp
    port: <kafka-tcp-port>
    protocol: TCP
    targetPort: <kafka-tcp-port>-tcp

Now I am trying to connect to my Kafka instance.
When I try to connect to the IP and port with Kafka Tool, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please provide bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]

When I enter what I assume is the correct broker address (I have tried all of them...), I get a timeout. There are no logs from the nginx controller, except:
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:14 +0000]TCP200000.001

From the pod kafka-zookeeper-0 I get lots of:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port>  (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)

though I'm not sure how relevant these are?

Any ideas on what I am doing wrong?
Thanks in advance.

Best Answer

TL;DR:

  • Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
  • Change the service name and port in your TCP NGINX ConfigMap and in the ingress controller.
  • Set the bootstrap-server in your Kafka tool to <Cluster_External_IP>:31090


  • Explanation:

    The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records in the form of: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local



    It creates a DNS name for each pod, e.g.:
    [ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
    Server:    10.0.0.10
    Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      my-confluent-cp-kafka-headless
    Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
    Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
    Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
    
  • This is what makes these services able to connect to each other inside the cluster.
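
    For reference, a headless Service is simply a Service with clusterIP: None. A minimal sketch of what the chart's headless service roughly looks like (the name comes from the nslookup output above; the selector label is an assumption, the chart sets its own):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-confluent-cp-kafka-headless
      namespace: default
    spec:
      clusterIP: None   # headless: DNS resolves directly to the pod IPs
      ports:
      - name: broker
        port: 9092
      selector:
        app: cp-kafka   # assumed label for illustration
    ```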


  • I went through a lot of trial and error until I realized how it is supposed to work. Based on your TCP Nginx ConfigMap, I believe you hit the same issue.
  • The Nginx ConfigMap expects: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>" .
  • I realized you don't need to expose Zookeeper, since it is an internal service and is handled by the Kafka brokers.
  • I also realized you were trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
  • In order to get external access you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
  • It adds one service to each kafka-N pod during chart deployment.
  • Then you change your ConfigMap to map to one of them:
  • data:
    "31090": default/demo-cp-kafka-0-nodeport:31090
    

    Note that the created service has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0, which is how the service identifies the pod it is intended to connect to.
  • Edit the nginx-ingress-controller:
  • - containerPort: 31090
      hostPort: 31090
      protocol: TCP
    
  • Set your Kafka tool to <Cluster_External_IP>:31090


  • Reproduction:
    - Snippet edited in cp-kafka/values.yaml:
    nodeport:
      enabled: true
      servicePort: 19092
      firstListenerPort: 31090
    
  • Deploy the chart:
  • $ helm install demo cp-helm-charts
    $ kubectl get pods
    NAME                                       READY   STATUS    RESTARTS   AGE
    demo-cp-control-center-6d79ddd776-ktggw    1/1     Running   3          113s
    demo-cp-kafka-0                            2/2     Running   1          113s
    demo-cp-kafka-1                            2/2     Running   0          94s
    demo-cp-kafka-2                            2/2     Running   0          84s
    demo-cp-kafka-connect-79689c5c6c-947c4     2/2     Running   2          113s
    demo-cp-kafka-rest-56dfdd8d94-79kpx        2/2     Running   1          113s
    demo-cp-ksql-server-c498c9755-jc6bt        2/2     Running   2          113s
    demo-cp-schema-registry-5f45c498c4-dh965   2/2     Running   3          113s
    demo-cp-zookeeper-0                        2/2     Running   0          112s
    demo-cp-zookeeper-1                        2/2     Running   0          93s
    demo-cp-zookeeper-2                        2/2     Running   0          74s
    
    $ kubectl get svc
    NAME                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
    demo-cp-control-center       ClusterIP   10.0.13.134   <none>        9021/TCP            50m
    demo-cp-kafka                ClusterIP   10.0.15.71    <none>        9092/TCP            50m
    demo-cp-kafka-0-nodeport     NodePort    10.0.7.101    <none>        19092:31090/TCP     50m
    demo-cp-kafka-1-nodeport     NodePort    10.0.4.234    <none>        19092:31091/TCP     50m
    demo-cp-kafka-2-nodeport     NodePort    10.0.3.194    <none>        19092:31092/TCP     50m
    demo-cp-kafka-connect        ClusterIP   10.0.3.217    <none>        8083/TCP            50m
    demo-cp-kafka-headless       ClusterIP   None          <none>        9092/TCP            50m
    demo-cp-kafka-rest           ClusterIP   10.0.14.27    <none>        8082/TCP            50m
    demo-cp-ksql-server          ClusterIP   10.0.7.150    <none>        8088/TCP            50m
    demo-cp-schema-registry      ClusterIP   10.0.7.84     <none>        8081/TCP            50m
    demo-cp-zookeeper            ClusterIP   10.0.9.119    <none>        2181/TCP            50m
    demo-cp-zookeeper-headless   ClusterIP   None          <none>        2888/TCP,3888/TCP   50m
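
    The NodePort numbering in the service list above follows a firstListenerPort + ordinal scheme (31090, 31091, 31092). A small sketch of that mapping, handy when scripting bootstrap addresses (the function names are mine, not from the chart):

    ```python
    # NodePort scheme used by the chart's per-broker services:
    # broker N is exposed on firstListenerPort + N.
    FIRST_LISTENER_PORT = 31090

    def broker_nodeport(ordinal: int, first_port: int = FIRST_LISTENER_PORT) -> int:
        """NodePort exposed for kafka pod demo-cp-kafka-<ordinal>."""
        return first_port + ordinal

    def bootstrap_server(external_ip: str, ordinal: int = 0) -> str:
        """Bootstrap address to plug into a Kafka client."""
        return f"{external_ip}:{broker_nodeport(ordinal)}"

    print(bootstrap_server("35.226.189.123"))  # 35.226.189.123:31090
    ```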
    
  • Create the TCP ConfigMap:
  • $ cat nginx-tcp-configmap.yaml 
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: kube-system
    data:
      "31090": default/demo-cp-kafka-0-nodeport:31090
    
    $ kubectl apply -f nginx-tcp-configmap.yaml
    configmap/tcp-services created
    
  • Edit the Nginx ingress controller:
  • $ kubectl edit deploy nginx-ingress-controller -n kube-system
    
    $ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
    {{{suppressed output}}}
            ports:
            - containerPort: 31090
              hostPort: 31090
              protocol: TCP
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
    
  • My ingress is at IP 35.226.189.123; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have minikube, so I can use the kafka-client pod to test:
  • user@minikube:~$ kubectl get pods
    NAME           READY   STATUS    RESTARTS   AGE
    kafka-client   1/1     Running   0          17h
    
    user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
    
    root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
    Wed Apr 15 18:19:48 UTC 2020
    Processed a total of 1 messages
    root@kafka-client:/# 
    

    As you can see, I was able to reach Kafka from outside the cluster.
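    If kafka-console-consumer isn't handy, a plain TCP reachability check can separate networking problems from Kafka-level ones. A self-contained sketch (the throwaway local listener only makes the example runnable anywhere; in practice, probe <Cluster_External_IP>:31090):

    ```python
    import socket

    def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port can be established."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Throwaway local listener so the example runs without a cluster.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # ask the OS for an ephemeral port
    srv.listen(1)
    host, port = srv.getsockname()

    print(is_port_open(host, port))   # True: listener accepts the connection
    srv.close()
    print(is_port_open(host, port))   # False: connection now refused
    ```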
  • If you also need external access to Zookeeper, here is a service model for you:
  • zookeeper-external-0.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: cp-zookeeper
        pod: demo-cp-zookeeper-0
      name: demo-cp-zookeeper-0-nodeport
      namespace: default
    spec:
      externalTrafficPolicy: Cluster
      ports:
      - name: external-broker
        nodePort: 31181
        port: 12181
        protocol: TCP
        targetPort: 31181
      selector:
        app: cp-zookeeper
        statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
      sessionAffinity: None
      type: NodePort
    
  • It will create a service for it:
  • NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
    demo-cp-zookeeper-0-nodeport   NodePort    10.0.5.67     <none>        12181:31181/TCP     2s
    
  • Patch your ConfigMap:
  • data:
      "31090": default/demo-cp-kafka-0-nodeport:31090
      "31181": default/demo-cp-zookeeper-0-nodeport:31181
    
  • Add the port to the ingress controller:
  •         ports:
            - containerPort: 31181
              hostPort: 31181
              protocol: TCP
    
  • Test it with your external IP:
  • pod/zookeeper-client created
    user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
    root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
    Connecting to 35.226.189.123:31181
    Welcome to ZooKeeper!
    JLine support is disabled
    

    Let me know in the comments if you have any questions!

    Regarding kubernetes - unable to connect to Kafka brokers, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/61105315/
