elasticsearch - Logstash indexing error - Aggregate plugin: For task_id pattern '%{id}', there are more than one filter

Tags: elasticsearch logstash

I am using Elasticsearch 5.5.0 and Logstash 5.5.0 on a Linux AWS EC2 instance.

There is a logstash_etl.conf file in /etc/logstash/conf.d:

input {
     jdbc {
         jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
         jdbc_user => "root"
         jdbc_password => ""
         jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
         jdbc_driver_class => "com.mysql.jdbc.Driver"
         schedule => "*/5 * * * *"
         statement => "select * from customers"
         use_column_value => false
         clean_run => true
     }
  }

 filter {
    if ([api_key]) {
      aggregate {
        task_id => "%{id}"
        push_map_as_event_on_timeout => false
        #timeout_task_id_field => "[@metadata][index_id]"
        #timeout => 60 
        #inactivity_timeout => 30
        code => "sample code"
        timeout_code => "sample code"
      }
    }
  }

  # sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
  output {
     if ([purge_task] == "yes") {
       exec {
           command => "curl -XPOST '127.0.0.1:9200/_all/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
               {
                 \"query\": {
                   \"range\" : {
                     \"@timestamp\" : {
                       \"lte\" : \"now-3h\"
                     }
                   }
                 }
               }
           '"
       }
     } else {
         stdout { codec => json_lines}
         elasticsearch {
            "hosts" => "127.0.0.1:9200"
            "index" => "myindex_%{api_key}"
            "document_type" => "%{[@metadata][index_type]}"
            "document_id" => "%{[@metadata][index_id]}"
            "doc_as_upsert" => true
            "action" => "update"
            "retry_on_conflict" => 7
         }
     }
  }

When I restart Logstash like this:
sudo initctl restart logstash
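As an aside, configuration errors can be caught before a service restart with Logstash's built-in config check (paths assumed to match the install described above):

```
# Parse and validate the pipeline config, then exit without starting it
sudo /usr/share/logstash/bin/logstash --config.test_and_exit \
  --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/logstash_etl.conf
```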

Inside /var/log/logstash/logstash-plain.log everything looks fine, and actual indexing into Elasticsearch is happening!

However, if I add another SQL input to this configuration file:
input {
     jdbc {
         jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
         jdbc_user => "root"
         jdbc_password => ""
         jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
         jdbc_driver_class => "com.mysql.jdbc.Driver"
         schedule => "*/5 * * * *"
         statement => "select * from orders"
         use_column_value => false
         clean_run => true
     }
  }

indexing stops due to an error in the configuration file!

Inside /var/log/logstash/logstash-plain.log:
[2018-04-06T21:33:54,123][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Aggregate plugin: For task_id pattern '%{id}', there are more than one filter which defines timeout options. All timeout options have to be defined in only one aggregate filter per task_id pattern. Timeout options are : timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-aggregate-2.6.1/lib/logstash/filters/aggregate.rb:486:in `register'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-aggregate-2.6.1/lib/logstash/filters/aggregate.rb:480:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:281:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:302:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:226:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
[2018-04-06T21:33:54,146][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-06T21:33:57,131][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
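The error message spells out the plugin's constraint: for a given task_id pattern, all timeout-related options (timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags) must be defined in exactly one aggregate filter. A minimal sketch of what the plugin rejects versus accepts (option values here are illustrative, not taken from the question):

```
# Rejected: two aggregate filters both defining timeout options
# for the same task_id pattern '%{id}'
filter {
  aggregate {
    task_id => "%{id}"
    timeout => 60
    code => "..."
  }
  aggregate {
    task_id => "%{id}"
    inactivity_timeout => 30   # a second filter with a timeout option => ConfigurationError
    code => "..."
  }
}

# Accepted: all timeout options consolidated in one aggregate filter
filter {
  aggregate {
    task_id => "%{id}"
    timeout => 60
    inactivity_timeout => 30
    code => "..."
  }
}
```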

I'm really new to Logstash and Elasticsearch...

What does this mean?

I would appreciate it if someone could tell me why just adding one new input causes this tool to crash!

Best Answer

Would appreciate it if someone could tell me why just adding one new input causes this tool to crash?!



You can't add two separate input sections within the same configuration. As the documentation says, if you want multiple inputs in a configuration file, you should use something like the following:
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }

  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}
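Applied to the jdbc setup from the question, that means putting both queries into one input block. A sketch (connection settings copied from the question; the added type fields are illustrative, to tell the two result streams apart in later filter/output stages):

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/5 * * * *"
    statement => "select * from customers"
    type => "customers"
  }

  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/5 * * * *"
    statement => "select * from orders"
    type => "orders"
  }
}
```

The type field can then be tested in the filter section (e.g. `if [type] == "customers"`) so that the aggregate filter is applied to only one of the two streams.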

Regarding "elasticsearch - Logstash indexing error - Aggregate plugin: For task_id pattern '%{id}', there are more than one filter", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49701823/
