We have an Istio cluster, and we are trying to configure horizontal pod autoscaling for Kubernetes. We want to use the request count as a custom metric for the HPA. How can we make use of Istio's Prometheus for this purpose?

Best Answer
This question turned out to be much more complex than I expected, but I finally found the answer.

First, you need to configure your application to provide custom metrics. This is on the application-development side. Here is an example of how to do it in Go: Watching Metrics With Prometheus.
Second, you need to define a Deployment for the application (or a Pod, or whatever you want) and deploy it to Kubernetes, for example:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
        - ./podinfo
        - -port=9898
        - -logtostderr=true
        - -v=2
        volumeMounts:
        - name: metadata
          mountPath: /etc/podinfod/metadata
          readOnly: true
        ports:
        - containerPort: 9898
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 2
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 3
          failureThreshold: 2
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
      volumes:
      - name: metadata
        downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
  - port: 9898
    targetPort: 9898
    nodePort: 31198
    protocol: TCP
  selector:
    app: podinfo
```
Pay attention to the annotation `prometheus.io/scrape: 'true'`: it asks Prometheus to scrape metrics from the resource. Also note that there are two more annotations that have default values, but which you must set explicitly if you change them in your application:

- `prometheus.io/path`: set this annotation if the metrics path is not `/metrics`.
- `prometheus.io/port`: scrape the pod on the indicated port instead of the pod's declared ports (defaults to a port-free target if none are declared).
Next, Prometheus in Istio uses its own configuration, modified for Istio's purposes, and by default it skips custom metrics from Pods. Therefore, you need to modify it a little. In my case, I took the configuration for Pod metrics from this example and modified Istio's Prometheus configuration for Pods only:

```shell
kubectl edit configmap -n istio-system prometheus
```
I changed the order of the labels according to the example mentioned above:
```yaml
# pod's declared ports (default is a port-free target if none are declared).
- job_name: 'kubernetes-pods'
  # if you want to use metrics on jobs, set the below field to
  # true to prevent Prometheus from setting the `job` label
  # automatically.
  honor_labels: false
  kubernetes_sd_configs:
  - role: pod
  # skip verification so you can do HTTPS to pods
  tls_config:
    insecure_skip_verify: true
  # make sure your labels are in order
  relabel_configs:
  # these labels tell Prometheus to automatically attach source
  # pod and namespace information to each collected sample, so
  # that they'll be exposed in the custom metrics API automatically.
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  # these labels tell Prometheus to look for
  # prometheus.io/{scrape,path,port} annotations to configure
  # how to scrape
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
```
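The `__address__` rule above is the one that redirects scraping to the port from the `prometheus.io/port` annotation: Prometheus joins the source labels with `;`, applies the regex, and rewrites the target address. As an illustrative Go sketch of that transformation (this mimics what Prometheus does internally; it is not Prometheus code):

```go
package main

import (
	"fmt"
	"regexp"
)

// relabelRe is the regex from the relabel rule above. Prometheus feeds it
// "<__address__>;<prometheus.io/port annotation>" joined by ';'.
var relabelRe = regexp.MustCompile(`([^:]+)(?::\d+)?;(\d+)`)

// relabelAddress drops the pod's declared port (if any) and appends the
// port taken from the annotation, as replacement "$1:$2" does.
func relabelAddress(joined string) string {
	return relabelRe.ReplaceAllString(joined, "$1:$2")
}

func main() {
	// Pod declared port 9898; the annotation asks Prometheus to scrape 9797.
	fmt.Println(relabelAddress("10.1.2.3:9898;9797")) // → 10.1.2.3:9797
}
```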
After that, the custom metrics appeared in Prometheus. But be careful when changing the Prometheus configuration: some of the metrics Istio needs may disappear, so double-check everything.

Now it is time to install the Prometheus custom metric adapter.
- Download this repository.
- Change the address of the Prometheus server in the file `<repository-directory>/deploy/manifests/custom-metrics-apiserver-deployment.yaml`, for example: `- --prometheus-url=http://prometheus.istio-system:9090/`
- Run `kubectl apply -f <repository-directory>/deploy/manifests`. After a while, `custom.metrics.k8s.io/v1beta1` should appear in the output of `kubectl api-versions`.
Also, check the output of the custom metrics API with:

```shell
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
```
The output of the last command should look like the following example (the `m` suffix is Kubernetes milli-unit notation, so `901m` means 0.901):

```json
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-kv5g9",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "901m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-nm7bl",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "898m"
    }
  ]
}
```
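For a Pods-type metric the HPA averages these per-pod values. A hedged Go sketch of that arithmetic, handling only the plain and `m`-suffixed quantity forms seen above (real clients use `resource.Quantity` from `k8s.io/apimachinery`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMilli converts a quantity like "901m" (milli-units) or a plain
// number like "2" into a float64. Only these two forms are handled here.
func parseMilli(q string) (float64, error) {
	if strings.HasSuffix(q, "m") {
		n, err := strconv.ParseFloat(strings.TrimSuffix(q, "m"), 64)
		if err != nil {
			return 0, err
		}
		return n / 1000, nil
	}
	return strconv.ParseFloat(q, 64)
}

// average is what the HPA computes for a Pods-type metric:
// the mean of the per-pod values.
func average(values []string) (float64, error) {
	var sum float64
	for _, v := range values {
		f, err := parseMilli(v)
		if err != nil {
			return 0, err
		}
		sum += f
	}
	return sum / float64(len(values)), nil
}

func main() {
	avg, _ := average([]string{"901m", "898m"}) // the two pods above
	fmt.Printf("%.4f\n", avg)                   // → 0.8995
}
```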
If it does, you can move on to the next step. If not, look at which APIs are available for Pods in CustomMetrics with `kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/"` and for http_requests with `kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http"`. The metric names are generated from the metrics that Prometheus collects from the Pods; if they are empty, you need to investigate in that direction.

The last step is to configure the HPA and test it. In my case, I created an HPA for the podinfo application defined earlier:
```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10
```
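The scaling decision the HPA makes with this spec follows the formula from the Kubernetes documentation, `desired = ceil(current × currentMetricValue / targetValue)`, clamped to the `minReplicas`/`maxReplicas` range. A small sketch of that arithmetic:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas implements the HPA scaling formula from the Kubernetes
// documentation: desired = ceil(current * currentValue / targetValue),
// clamped to [min, max] as set in the HPA spec.
func desiredReplicas(current int, currentValue, targetValue float64, min, max int) int {
	d := int(math.Ceil(float64(current) * currentValue / targetValue))
	if d < min {
		return min
	}
	if d > max {
		return max
	}
	return d
}

func main() {
	// 2 replicas averaging 25 req/s against the 10 req/s target → scale to 5.
	fmt.Println(desiredReplicas(2, 25, 10, 2, 10)) // → 5
}
```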
To generate load, I used a simple Go application:

```shell
# install hey
go get -u github.com/rakyll/hey

# do 10K requests rate limited at 25 QPS
hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz
```
After a while, I saw the scaling change using the commands `kubectl describe hpa` and `kubectl get hpa`.
I followed the instructions on creating custom metrics from the article Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus.
All the useful links in one place:

- Watching Metrics With Prometheus - an example of adding metrics to an application
- k8s-prom-hpa - an example of creating custom metrics for Prometheus (the same as the article above)
- Kubernetes Custom Metrics Adapter for Prometheus
- Setting up the custom metrics adapter and sample app
This answer addresses a similar Stack Overflow question on configuring the Kubernetes HPA with Istio's Prometheus: https://stackoverflow.com/questions/51840970/