I am seeing strange behavior from ingress-nginx whenever I POST a payload larger than 50 KB: forwarding the request through Nginx takes 50 seconds or more, while smaller payloads are forwarded very quickly. A 4 MB POST takes up to 100 seconds.
Environment:
- Bare-metal Kubernetes cluster with 3 nodes running Ubuntu 16.04
- Deployed from GitLab via a custom Helm template
- GitLab-managed nginx-ingress controller pod, routing requests by Host header
- A Java application that receives the POST and returns a response

Application topology:
Web -> (Apache reverse proxy) -> (ingress-nginx) -> (application)

I can see that Apache forwards the entire payload straight through and the Nginx pod receives it immediately, but the application pod then receives nothing for up to 50 seconds (depending on payload size). Occasionally I also get a 502 from Nginx, but I cannot find a pattern.
I have tried increasing and decreasing the buffer sizes and disabling/enabling buffering, but nothing had any effect:

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: "100M"
nginx.ingress.kubernetes.io/client-body-buffer-size: "5M"
nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/proxy-buffer-size: "5M"
nginx.ingress.kubernetes.io/proxy-request-buffering: "on"
nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "1"
```
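Before tuning more knobs, it can help to measure how the end-to-end time scales with payload size. Below is a minimal, self-contained sketch using only the Python standard library; it spins up a local echo server as a stand-in for the application so the script runs anywhere (point `url` at the real ingress host and the Apache front end in turn to see which hop adds the delay):

```python
import http.server
import threading
import time
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Local stand-in for the Java backend: consume the body, answer 200."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the output clean

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Replace with the ingress URL (e.g. the feature-... host) to test the real path.
url = f"http://127.0.0.1:{server.server_port}/"

timings = []
for size in (10_000, 50_000, 4_000_000):
    payload = b"x" * size
    start = time.perf_counter()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()
    elapsed = time.perf_counter() - start
    timings.append(elapsed)
    print(f"{size:>9} bytes -> {elapsed:.2f}s")

server.shutdown()
```

Running the same script against Apache directly and against the ingress host narrows down which hop introduces the 50+ second stall.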
The ingress.yaml template:

```yaml
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "integrity-adapter-autodeployment.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "integrity-adapter-autodeployment.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "100M"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "5M"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "5M"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "1"
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
```
The generated nginx.conf section for this server only:

```nginx
## start server feature-document-response-integrity-adapter.prod.semanticlab.net
server {
    server_name feature-document-response-integrity-adapter.prod.semanticlab.net ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location ~* "^/" {
        set $namespace     "default";
        set $ingress_name  "review-integrity-adapter-feature-document-response";
        set $service_name  "review-integrity-adapter-feature-document-response";
        set $service_port  "63016";
        set $location_path "/";
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
        #access_by_lua_block {
        #}
        header_filter_by_lua_block {
            lua_ingress.header()
            plugins.run()
        }
        body_filter_by_lua_block {
        }
        log_by_lua_block {
            balancer.log()
            monitor.call()
            plugins.run()
        }
        port_in_redirect off;
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "default-review-integrity-adapter-feature-document-response-63016";
        set $proxy_host          $proxy_upstream_name;
        set $pass_access_scheme  $scheme;
        set $pass_server_port    $server_port;
        set $best_http_host      $http_host;
        set $pass_port           $pass_server_port;
        set $proxy_alternative_upstream_name "";
        client_max_body_size    100M;
        client_body_buffer_size 5M;
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Request-ID $req_id;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout    300s;
        proxy_read_timeout    60s;
        proxy_buffering   on;
        proxy_buffer_size 5M;
        proxy_buffers     4 5M;
        proxy_max_temp_file_size 1024m;
        proxy_request_buffering on;
        proxy_http_version      1.1;
        proxy_cookie_domain off;
        proxy_cookie_path   off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream         error timeout;
        proxy_next_upstream_timeout 0;
        proxy_next_upstream_tries   1;
        proxy_pass http://upstream_balancer;
        proxy_redirect off;
    }
}
## end server feature-document-response-integrity-adapter.prod.semanticlab.net
```
Does someone have any suggestions for me?
Thanks in advance.
Best answer

After a week of searching, we finally found it: ingress-nginx enables gzip compression by default. Creating a ConfigMap with use-gzip: "false" and applying it with kubectl apply -f configmap.yaml solves the problem.
```yaml
apiVersion: v1
data:
  use-gzip: "false"
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
```
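To confirm that gzip is really what is in play, you can check whether a response comes back with `Content-Encoding: gzip` when the client advertises gzip support. The sketch below is self-contained: it uses a local test server (a stand-in for the ingress, an assumption for illustration) that compresses responses the way a gzip-enabled controller would; point `url` at the real ingress host to test the live setup before and after applying the ConfigMap:

```python
import gzip
import http.server
import threading
import urllib.request

class GzipHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for a gzip-enabled ingress: compress when the client allows it."""
    def do_GET(self):
        body = b'{"status": "ok"}' * 100
        if "gzip" in self.headers.get("Accept-Encoding", ""):
            body = gzip.compress(body)
            self.send_response(200)
            self.send_header("Content-Encoding", "gzip")
        else:
            self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the output clean

server = http.server.HTTPServer(("127.0.0.1", 0), GzipHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Replace with the real ingress URL to test the live controller.
url = f"http://127.0.0.1:{server.server_port}/"

req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    encoding = resp.headers.get("Content-Encoding")
print("Content-Encoding:", encoding)

server.shutdown()
```

Against the real ingress, seeing `Content-Encoding: gzip` before the ConfigMap change and no such header afterwards confirms the fix took effect.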
Original question on Stack Overflow: "kubernetes - NGINX ingress for auto-review apps (gitlab-managed-apps) very slow if posting bigger json" - https://stackoverflow.com/questions/62410898/