elasticsearch - How to get Java stack traces into Elasticsearch using Logstash?

Tags: elasticsearch logstash logstash-grok

I have logs like this:

    [2017-05-18 00:00:05,871][INFO ][cluster.metadata         ] [esndata-2] [.data-es-1-2017.05.18] creating index, cause [auto(bulk api)], templates [.data
-es-1], shards [1]/[1], mappings [_default_, shards, node, index_stats, index_recovery, cluster_state, cluster_stats, node_stats, indices_stats]
    [2017-05-18 00:00:06,161][INFO ][cluster.routing.allocation] [esndata-2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.data-es-1-2017.05.18][0]] ...]).
    [2017-05-18 00:00:06,249][INFO ][cluster.metadata         ] [esndata-2] [.data-es-1-2017.05.18] update_mapping [node_stats]
    [2017-05-18 00:00:06,290][INFO ][cluster.routing.allocation] [esndata-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.data-es-1-2017.05.18][0]] ...]).
    [2017-05-18 00:00:06,339][DEBUG][action.admin.indices.create] [esndata-2] [data-may-2017,data-apr-2017,data-mar-2017] failed to create
    [data-may-2017,data-apr-2017,data-mar-2017] InvalidIndexNameException[Invalid index name [data-may-2017,data-apr-2017,data-mar-2017], must not contain the following characters [\, /, *, ?, ", <, >, |,  , ,]]
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:142)
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:431)
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.access$100(MetaDataCreateIndexService.java:95)
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:190)
            at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)
            at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
            at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)

My Logstash configuration is as follows:
    input {
      file {
        path => "F:\logstash-2.4.0\logstash-2.4.0\bin\dex.txt"
        start_position => "beginning"
        codec => multiline {
          pattern => "^%{TIMESTAMP_ISO8601} "
          negate => true
          what => previous
        }
      }
    }

    filter {
      grok {
        match => [
          "message", "(?m)^%{TIMESTAMP_ISO8601:TIMESTAMP}\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:ERRORTYPE}\]%{SPACE}\[%{DATA:SERVERNAME}\]%{SPACE}\[%{DATA:INDEX-NAME}\]%{SPACE}%{GREEDYDATA:mydata}",
          "message", "^%{TIMESTAMP_ISO8601:TIMESTAMP}\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:ERRORTYPE}\]%{SPACE}\[%{DATA:SERVERNAME}\]%{SPACE}%{GREEDYDATA:mydata}"
        ]
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z"]
      }
    }

    output {
      stdout { codec => rubydebug }
    }

This is the output I get with the above configuration:
{
    "@timestamp" => "2017-05-24T06:25:11.245Z",
       "message" => "[2017-05-18 00:00:05,871][INFO ][cluster.metadata         ]
[esndata-2] [.data-es-1-2017.05.18] creating index, cause [auto(bulk api)], tem
plates [.data\r\n-es-1], shards [1]/[1], mappings [_default_, shards, node, inde
x_stats, index_recovery, cluster_state, cluster_stats, node_stats, indices_stats
]\r\n    [2017-05-18 00:00:06,161][INFO ][cluster.routing.allocation] [esndata-2
] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started
[[.data-es-1-2017.05.18][0]] ...]).\r\n    [2017-05-18 00:00:06,249][INFO ][clus
ter.metadata         ] [esndata-2] [.data-es-1-2017.05.18] update_mapping [node_
stats]\r\n    [2017-05-18 00:00:06,290][INFO ][cluster.routing.allocation] [esnd
ata-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards s
tarted [[.data-es-1-2017.05.18][0]] ...]).\r\n    [2017-05-18 00:00:06,339][DEBU
G][action.admin.indices.create] [esndata-2] [data-may-2017,data-apr-2017,data-ma
r-2017] failed to create\r\n    [data-may-2017,data-apr-2017,data-mar-2017] Inva
lidIndexNameException[Invalid index name [data-may-2017,data-apr-2017,data-mar-2
017], must not contain the following characters [\\, /, *, ?, \", <, >, |,  , ,]
]\r\n            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexServic
e.validateIndexName(MetaDataCreateIndexService.java:142)\r\n            at org.e
lasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreate
IndexService.java:431)\r\n            at org.elasticsearch.cluster.metadata.Meta
DataCreateIndexService.access$100(MetaDataCreateIndexService.java:95)\r\n
     at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(
MetaDataCreateIndexService.java:190)\r\n            at org.elasticsearch.cluster
.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)\r\n            a
t org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(I
nternalClusterService.java:468)\r\n            at org.elasticsearch.cluster.serv
ice.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\r\n
          at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExe
cutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor
.java:231)\r\n            at org.elasticsearch.common.util.concurrent.Prioritize
dEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPool
Executor.java:194)\r\n            at java.util.concurrent.ThreadPoolExecutor.run
Worker(ThreadPoolExecutor.java:1142)\r\n            at java.util.concurrent.Thre
adPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n            at java.la
ng.Thread.run(Thread.java:745)\r\n\r",
      "@version" => "1",
          "tags" => [
        [0] "multiline",
        [1] "_grokparsefailure"
    ],
          "path" => "D:\\logstash\\logstash-2.4.0\\bin\\error.txt",
          "host" => "PC326815"
}

I used the pattern from this link: https://gist.github.com/wiibaa/c47e5f79d45d58d05121

How can I parse these logs so that each log entry becomes its own event, instead of everything being merged into a single message?

Thanks

Best Answer

The problem was with the multiline pattern in the input and with the grok patterns in the filter: every log line starts with a bracketed timestamp, but both patterns anchor on a bare %{TIMESTAMP_ISO8601} at the very start of the line, so nothing ever matches.
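A minimal illustration of the mismatch, using the first line from the question's log (this comparison is my own summary, not part of the original answer):

    # Sample line:
    #   [2017-05-18 00:00:05,871][INFO ][cluster.metadata         ] [esndata-2] ...
    #
    # Pattern from the question - never matches, because the line starts with "[":
    #   pattern => "^%{TIMESTAMP_ISO8601} "
    #
    # Corrected pattern - anchors on the bracketed timestamp:
    #   pattern => "^\[%{TIMESTAMP_ISO8601:TIMESTAMP}\]"
    #
    # With negate => true and what => "previous", every line that does NOT match the
    # pattern is appended to the previous event, which is why the whole file ended up
    # in a single "message" tagged multiline and _grokparsefailure.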

I used the following configuration:

    input {
      file {
        path => "D:\logstash\logstash-2.4.0\bin\errors.txt"
        start_position => "beginning"
        codec => multiline {
          pattern => "^\[%{TIMESTAMP_ISO8601:TIMESTAMP}\]"
          negate => true
          what => "previous"
        }
      }
    }

    filter {
      grok {
        match => [ "message", "(?m)^\[%{TIMESTAMP_ISO8601:TIMESTAMP}\]\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:ERRORTYPE}\]%{SPACE}\[%{DATA:SERVERNAME}\]%{SPACE}\[%{DATA:INDEX-NAME}\]%{SPACE}(?<ERRORMESSAGE>(.|\r|\n)*)"]
      }
    }

    output {
      stdout { codec => rubydebug }
    }
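The accepted configuration keeps only a stdout output and drops the date filter from the question. If you also want @timestamp to come from the log line rather than from ingest time, and the parsed events written to Elasticsearch, something along these lines could be appended (a sketch only: the hosts value and index name are placeholders, not from the original post; the date pattern matches timestamps such as 2017-05-18 00:00:05,871):

    filter {
      # Parse the TIMESTAMP field extracted by the grok above into @timestamp.
      date {
        match => [ "TIMESTAMP", "yyyy-MM-dd HH:mm:ss,SSS" ]
        # timezone => "UTC"   # set this if the log times are not in Logstash's local timezone
      }
    }

    output {
      # Hypothetical Elasticsearch destination; adjust hosts and index to your cluster.
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "es-server-logs-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }

For reference, the date filter in the question had no effect because it matched a lowercase "timestamp" field that the grok never creates, and used the Apache-style pattern dd/MMM/yyyy:HH:mm:ss Z, which does not match these log timestamps.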

Regarding "elasticsearch - How to get Java stack traces into Elasticsearch using Logstash?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44128855/
