ElasticSearch + Logstash work, but don't show any data

Tags: elasticsearch logstash

I have an Oracle database. Logstash retrieves data from Oracle and puts it into ElasticSearch. Everything looks fine, but nothing actually happens on the Logstash side, as if it doesn't know what to do.

logstash.conf:

input {
    jdbc {
        jdbc_driver_library => "C:\JBoss\wildfly\...\ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@3d-ztemtis-ora.iba:1521/ORCL"
        jdbc_user => "sample_user"
        jdbc_password => "12345"
        jdbc_validate_connection => true

        # once every 2 minutes
        schedule => "2 * * * *"
        statement => "SELECT * FROM table_one"
    }
}
output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "tableone"
        document_id => "%{uid}"
    }
    stdout {
        codec => rubydebug
    }
}
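
(Before looking at the scheduling itself, one quick sanity check is to have Logstash validate this file and exit. A minimal sketch, reusing the install path shown in the log below; --config.test_and_exit only checks the config, it does not start the pipeline:)

D:\Workspace3\ElasticLogstash\logstash-6.5.1>bin\logstash -f logstash.conf --config.test_and_exit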

Logstash log
D:\Workspace3\ElasticLogstash\logstash-6.5.1>bin\logstash -f logstash.conf
Sending Logstash logs to D:/Workspace3/ElasticLogstash/logstash-6.5.1/logs which is now configured via log4j2.properties
[2018-11-28T00:49:30,296][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-28T00:49:30,308][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.1"}
[2018-11-28T00:49:33,174][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-28T00:49:33,455][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-28T00:49:33,471][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-28T00:49:33,625][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-11-28T00:49:33,674][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-28T00:49:33,674][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-28T00:49:33,699][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-11-28T00:49:33,718][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-28T00:49:33,745][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-28T00:49:33,940][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x64e24d22 run>"}
[2018-11-28T00:49:33,971][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-28T00:49:34,217][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Elasticsearch log
[2018-11-28T00:36:06,492][DEBUG][o.e.a.ActionModule       ] [px9stLj] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-11-28T00:36:06,683][INFO ][o.e.d.DiscoveryModule    ] [px9stLj] using discovery type [zen] and host providers [settings]
[2018-11-28T00:36:07,188][INFO ][o.e.n.Node               ] [px9stLj] initialized
[2018-11-28T00:36:07,188][INFO ][o.e.n.Node               ] [px9stLj] starting ...
[2018-11-28T00:36:07,387][INFO ][o.e.t.TransportService   ] [px9stLj] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-11-28T00:36:10,500][INFO ][o.e.c.s.MasterService    ] [px9stLj] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2018-11-28T00:36:10,500][INFO ][o.e.c.s.ClusterApplierService] [px9stLj] new_master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-11-28T00:36:10,585][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [px9stLj] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-11-28T00:36:10,585][INFO ][o.e.n.Node               ] [px9stLj] started
[2018-11-28T00:36:10,921][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [px9stLj] Failed to clear cache for realms [[]]
[2018-11-28T00:36:10,962][INFO ][o.e.l.LicenseService     ] [px9stLj] license [852e276a-f99f-4ce3-a5d6-86c7769ae24e] mode [basic] - valid
[2018-11-28T00:36:10,970][INFO ][o.e.g.GatewayService     ] [px9stLj] recovered [3] indices into cluster_state
[2018-11-28T00:36:12,366][INFO ][o.e.c.r.a.AllocationService] [px9stLj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[blog][0]] ...]).

As I said, the problem is that nothing happens, and no errors are logged.
How can I tell whether it has successfully connected to Oracle?
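
(One quick way to see whether anything is actually being indexed is to ask Elasticsearch directly. A sketch, assuming the tableone index and the default localhost:9200 from the config above:)

curl "http://localhost:9200/_cat/indices?v"
curl "http://localhost:9200/tableone/_count?pretty"

If tableone is missing, or its document count stays at 0 between scheduled runs, the jdbc input has not produced any events yet.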

Best answer

Please see these schedule examples:

https://discuss.elastic.co/t/how-to-run-the-schedule-every-five-minutes-in-logstash-5-0/66222

https://www.thegeekstuff.com/2011/07/cron-every-5-minutes/

I think your schedule setting should look like this:

Every 2 minutes:

schedule => "*/2 * * * *"
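
For example, the input section with the corrected cron expression would look like this (a sketch based on the original config, with the other jdbc_* settings left unchanged):

input {
    jdbc {
        # ... same jdbc_* settings as in the original config ...
        # run once every 2 minutes
        schedule => "*/2 * * * *"
        statement => "SELECT * FROM table_one"
    }
}

With the original "2 * * * *", the statement only runs at minute 2 of every hour, which is why nothing appears to happen in between.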

Regarding "ElasticSearch + Logstash work, but don't show any data", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53508788/
