amazon-web-services - How is the cluster IP configured in kubernetes-aws?

Tags: amazon-web-services docker amazon-ec2 kubernetes amazon-vpc

I am very new to kubernetes and have just set up a stock kubernetes v1.3.5 cluster on AWS using kube-up. So far I have mostly been poking at kubernetes to understand its mechanics (nodes, pods, svc and so on). Based on my initial (and perhaps crude) understanding, I have a couple of questions:

1) How does routing to the cluster IP work here (i.e. in kube-aws)? I can see that the services have IP addresses in the 10.0.0.0/16 range. I did a deployment of stock nginx with an rc of 3 replicas, then attached a service to it with the node port exposed. Everything works great! I can connect to the service from my dev machine. This nginx service has a cluster IP of 10.0.33.71 on port 1321. Now, if I ssh into one of the minions (or nodes, or VMs) and run "telnet 10.0.33.71 1321", it connects as expected. But I have no idea how that works, because I could not find any route related to 10.0.0.0/16 in the VPC setup that kubernetes created. What exactly happens here that lets an application like telnet connect successfully? However, if I ssh into the master node and run "telnet 10.0.33.71 1321", it does not connect. Why can't I connect from the master?
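
For reference, the deployment and service described above could have been created roughly like this (image name, ports and the --generator flag are illustrative; the exact commands are not part of the question):

kubectl run nginx --image=nginx --replicas=3 --generator=run/v1     # replication controller with 3 replicas
kubectl expose rc nginx --port=1321 --target-port=80 --type=NodePort
kubectl get svc nginx                                               # shows the assigned cluster IP, e.g. 10.0.33.71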

2) There is a cbr0 interface inside each node. Each minion node's cbr0 is configured as 10.244.x.0/24, and the master's cbr0 is 10.246.0.0/24.
I can ping any 10.244.x.x pod from any node (including the master). But I cannot ping 10.246.0.1 (the cbr0 inside the master node) from any minion node. What is going on here?
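
For reference, the bridge address and the routes involved can be checked on a minion with standard tools (assuming iproute2 is available on the node image; the pod address in the ping is only an example):

ip addr show cbr0        # e.g. inet 10.244.1.1/24 on a minion, 10.246.0.1/24 on the master
ip route                 # local pod subnet via cbr0, everything else via eth0 and the VPC router
ping 10.244.2.5          # example pod IP on another minion; reachable via the VPC route table below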

Here are the routes that kubernetes has set up in the AWS VPC:

Destination     Target
172.20.0.0/16   local
0.0.0.0/0       igw-<hex value>
10.244.0.0/24   eni-<hex value> / i-<hex value>
10.244.1.0/24   eni-<hex value> / i-<hex value>
10.244.2.0/24   eni-<hex value> / i-<hex value>
10.244.3.0/24   eni-<hex value> / i-<hex value>
10.244.4.0/24   eni-<hex value> / i-<hex value>
10.246.0.0/24   eni-<hex value> / i-<hex value>
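
For reference, the same routes can be listed with the AWS CLI (assuming the CLI is configured for the cluster's account; <vpc-id> is a placeholder):

aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=<vpc-id> \
    --query 'RouteTables[].Routes[].[DestinationCidrBlock,InstanceId]' \
    --output table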

Best Answer

Mark Betz (SRE at Olark) presents Kubernetes networking in three articles:

  • pods
  • services
  • ingress

  • For pods, you are looking at:

    pod network

    where you find (a quick way to inspect these pieces on a node is sketched right after this list):
  • eth0: a "physical network interface"
  • docker0 / cbr0: a bridge used to connect two ethernet segments, regardless of their protocol.
  • veth0, 1, 2: virtual network interfaces, one per container.
    docker0 is the default gateway of veth0. It is named cbr0 for "custom bridge".
    Kubernetes starts containers sharing the same veth0, which means each container must expose different ports.
  • pause: a special container started in "pause", to detect the SIGTERM sent to a pod and forward it to the containers.
  • node: the host
  • cluster: a group of nodes
  • router/gateway
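
    A minimal sketch for seeing these pieces on a kube-up node (assuming iproute2, bridge-utils and docker are present on the node image):

    ip addr show eth0          # the node's "physical" interface
    ip addr show cbr0          # the custom bridge, e.g. 10.244.1.1/24
    brctl show cbr0            # the vethXXXX interfaces attached to the bridge, one per container
    docker ps | grep pause     # the "pause" container that holds each pod's network namespace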

  • The last element is where things start to get more complicated:

    Kubernetes assigns an overall address space for the bridges on each node, and then assigns the bridges addresses within that space, based on the node the bridge is built on.
    Secondly, it adds routing rules to the gateway at 10.100.0.1 telling it how packets destined for each bridge should be routed, i.e. which node’s eth0 the bridge can be reached through.

    Such a combination of virtual network interfaces, bridges, and routing rules is usually called an overlay network.
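
    A minimal sketch of how those per-node bridge subnets can be read back from the API (these PodCIDR values are what the VPC routes shown in the question are derived from):

    kubectl describe nodes | grep -E 'Name:|PodCIDR'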



    When one pod contacts another pod, it goes through a service.
    Why?

    Pod networking in a cluster is neat stuff, but by itself it is insufficient to enable the creation of durable systems. That’s because pods in Kubernetes are ephemeral.
    You can use a pod IP address as an endpoint but there is no guarantee that the address won’t change the next time the pod is recreated, which might happen for any number of reasons.



    Meaning: you need a reverse proxy / dynamic load balancer. And it had better be resilient.

    A service is a type of kubernetes resource that causes a proxy to be configured to forward requests to a set of pods.
    The set of pods that will receive traffic is determined by the selector, which matches labels assigned to the pods when they were created.
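
    A minimal sketch of how that selector-to-pod mapping can be observed, assuming a service named service-test as in the article's example:

    kubectl describe svc service-test    # shows the Selector and the current Endpoints
    kubectl get endpoints service-test   # the pod IP:port pairs the selector currently matches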



    The service uses its own network. The default type is "ClusterIP"; the service gets its own IP.

    This is the communication path between two pods:

    two pods network

    This relies on kube-proxy.
    That proxy in turn uses netfilter.

    netfilter is a rules-based packet processing engine.
    It runs in kernel space and gets a look at every packet at various points in its life cycle.
    It matches packets against rules and when it finds a rule that matches it takes the specified action.
    Among the many actions it can take is redirecting the packet to another destination.



    kube-proxy and netfilter

    In this mode, kube-proxy:

    • opens a port (10400 in the example above) on the local host interface to listen for requests to the test-service,
    • inserts netfilter rules to reroute packets destined for the service IP to its own port, and
    • forwards those requests to a pod on port 8080.

    That is how a request to 10.3.241.152:80 magically becomes a request to 10.0.2.2:8080.
    Given the capabilities of netfilter all that’s required to make this all work for any service is for kube-proxy to open a port and insert the correct netfilter rules for that service, which it does in response to notifications from the master api server of changes in the cluster.
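
    A minimal sketch of how this can be observed on a node running kube-proxy in userspace mode (tool availability depends on the node image; <cluster-ip> is a placeholder):

    sudo netstat -tlnp | grep kube-proxy             # one locally opened port per service
    sudo iptables -t nat -L -n | grep <cluster-ip>   # the rules redirecting the service IP to that port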



    But:

    There’s one more little twist to the tale.
    I mentioned above that user space proxying is expensive due to marshaling packets. In kubernetes 1.2, kube-proxy gained the ability to run in iptables mode.

    In this mode, kube-proxy mostly ceases to be a proxy for inter-cluster connections, and instead delegates to netfilter the work of detecting packets bound for service IPs and redirecting them to pods, all of which happens in kernel space.
    In this mode kube-proxy’s job is more or less limited to keeping netfilter rules in sync.
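
    A minimal sketch of how the iptables-mode rules can be inspected on a node (the KUBE-SERVICES chain is where kube-proxy keeps the per-service entries in this mode; <cluster-ip> is a placeholder):

    sudo iptables -t nat -L KUBE-SERVICES -n | grep <cluster-ip>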



    The network picture then becomes:

    netfilter in action

    However, this does not work well for external (public-facing) traffic, which needs a fixed external IP.

    There are dedicated service types for that: NodePort and LoadBalancer:

    A service of type NodePort is a ClusterIP service with an additional capability: it is reachable at the IP address of the node as well as at the assigned cluster IP on the services network.
    The way this is accomplished is pretty straightforward:

    When kubernetes creates a NodePort service, kube-proxy allocates a port in the range 30000–32767 and opens this port on the eth0 interface of every node (thus the name “NodePort”).

    Connections to this port are forwarded to the service’s cluster IP.
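
    A minimal sketch of reaching a NodePort service from outside the cluster, using the allocated port that kubectl reports (the node address is a placeholder; 32213 is the example port from the mapping shown further below):

    kubectl get svc service-test          # PORT(S) column shows e.g. 80:32213/TCP
    curl http://<any-node-ip>:32213/      # works against any node, not just one running a pod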



    You get:

    load-balancer / nodeport

    LoadBalancer is more advanced and allows exposing a service on a standard port.
    See the mapping here:
    $ kubectl get svc service-test
    NAME      CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
    openvpn   10.3.241.52     35.184.97.156   80:32213/TCP     5m
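
    A minimal sketch of how such a service might be created on an AWS cluster (the name is illustrative; kubernetes provisions an ELB behind the scenes):

    kubectl expose rc nginx --port=80 --type=LoadBalancer --name=nginx-lb
    kubectl get svc nginx-lb              # EXTERNAL-IP stays <pending> until the ELB is ready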
    

    However:

    Services of type LoadBalancer have some limitations.

    • You cannot configure the lb to terminate https traffic.
    • You can’t do virtual hosts or path-based routing, so you can’t use a single load balancer to proxy to multiple services in any practically useful way.

    These limitations led to the addition in version 1.2 of a separate kubernetes resource for configuring load balancers, called an Ingress.

    The Ingress API supports TLS termination, virtual hosts, and path-based routing. It can easily set up a load balancer to handle multiple backend services.
    The implementation follows a basic kubernetes pattern: a resource type and a controller to manage that type.
    The resource in this case is an Ingress, which comprises a request for networking resources.



    For example:
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: test-ingress
      annotations:
        kubernetes.io/ingress.class: "gce"
    spec:
      tls:
        - secretName: my-ssl-secret
      rules:
      - host: testhost.com
        http:
          paths:
          - path: /*
            backend:
              serviceName: service-test
              servicePort: 80
    

    The ingress controller is responsible for satisfying this request by driving resources in the environment to the necessary state.
    When using an Ingress you create your services as type NodePort and let the ingress controller figure out how to get traffic to the nodes.

    There are ingress controller implementations for GCE load balancers, AWS elastic load balancers, and for popular proxies such as NGiNX and HAproxy.

Regarding "amazon-web-services - How is the cluster IP configured in kubernetes-aws?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/39159547/
