I'm running Fluentd in a Kubernetes cluster to collect logs from pods and ship them to Elasticsearch.
Once every day or two, Fluentd hits this error:
[warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.7.4/lib/fluent/plugin/buffer.rb:265:in `write'"
Fluentd then stops sending logs until I restart the Fluentd pod.
How can I avoid this error?
Maybe I need to change my configuration?
<match filter.Logs.**.System**>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME']}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  logstash_format true
  logstash_prefix system
  type_name systemlog
  time_key_format %Y-%m-%dT%H:%M:%S.%NZ
  time_key time
  log_es_400_reason true
  <buffer>
    flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
    flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
    chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '8M'}"
    queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
    retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
    retry_forever true
  </buffer>
</match>
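For context, assuming the environment variables are unset so the || fallback values apply, this buffer can hold at most:

  chunk_limit_size x queue_limit_length = 8 MiB x 32 = 256 MiB

Because retry_forever is true, a prolonged Elasticsearch outage or slowdown keeps the queued chunks waiting for retry; once that limit is reached, new emits fail with exactly the BufferOverflowError ("buffer space has too many data") shown above.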
Best answer
The default buffer type is memory; see: https://github.com/uken/fluent-plugin-elasticsearch/blob/master/lib/fluent/plugin/out_elasticsearch.rb#L63
This buffer type has two drawbacks (a capped-memory workaround is sketched just below):
- if the pod or container restarts, the logs held in the buffer are lost;
- if all of the RAM allocated to Fluentd is consumed, logs stop being sent.
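If you must keep the memory buffer, here is a minimal mitigation sketch using the standard Fluentd v1 buffer parameters total_limit_size and overflow_action; the values are illustrative, not from the original answer:

<buffer>
  @type memory
  total_limit_size 512MB             # hard cap on the RAM this buffer may use
  overflow_action drop_oldest_chunk  # or "block" to apply backpressure instead of raising BufferOverflowError
  flush_thread_count 8
  flush_interval 5s
</buffer>

drop_oldest_chunk trades data loss for availability; block pauses the emitting input instead, which suits setups where losing logs is worse than delaying them.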
Try a file-based buffer instead, with a configuration like this:
<buffer>
  @type file
  path /fluentd/log/elastic-buffer
  flush_thread_count 8
  flush_interval 1s
  chunk_limit_size 32M
  queue_limit_length 4
  flush_mode interval
  retry_max_interval 30
  retry_forever true
</buffer>
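Two caveats worth adding. First, the file buffer only survives container restarts if /fluentd/log/elastic-buffer sits on a volume that outlives the container (for example a hostPath mount in the Fluentd DaemonSet); on an ordinary container filesystem the durability advantage disappears. Second, to watch buffer growth before it overflows, Fluentd's built-in monitor_agent input exposes queue metrics over HTTP; a minimal sketch, with an arbitrary bind address and port:

<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>

Querying /api/plugins.json on that port reports buffer_queue_length and buffer_total_queued_size for each output, which makes it possible to alert before the limit is hit.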
For elasticsearch - fluentd error "buffer space has too many data", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60316412/