elasticsearch - logstash multiline codec with Java stack trace

Tags: elasticsearch logging logstash logstash-grok

I'm trying to parse a log file with grok. The configuration I use lets me parse single-line events, but not multiline ones (those with a Java stack trace).

# what I get in Kibana for a single line:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "mluzA57TnCpH-XBRbeg",
  "_score": null,
  "_source": {
    "message": " -  2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker)   user.country=US",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.310Z",
    "path": "/root/test2.log",
    "time": "2014-01-14 11:09:35,962",
    "main": "main",
    "loglevel": "INFO",
    "class": "api.batch.ThreadPoolWorker",
    "mydata": "  user.country=US"
  },
  "sort": [
    1423129101310,
    1423129101310
  ]
}

# what I get for a multiline event with a stack trace:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "9G6LsSO-aSpsas_jOw",
  "_score": null,
  "_source": {
    "message": "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.380Z",
    "path": "/root/test2.log",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1423129101380,
    1423129101380
  ]
}

input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^ -  %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}

filter {
 grok {
    match => [ "message", " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata} %{JAVASTACKTRACEPART}"]
  }
    date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => "194.3.227.23"
  }
 # stdout { codec => rubydebug}
}


Can anyone tell me what I'm doing wrong in my configuration file? Thanks.
Here is a sample of my log file:
 -  2014-01-14 11:09:36,447 [main] INFO  (support.context.ContextFactory) Creating default context
 -  2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) Error getting connection to database jdbc:oracle:thin:@HAL9000:1521:DEVPRINT, with user cisuser and driver oracle.jdbc.driver.OracleDriver
java.sql.SQLException: ORA-28001: the password has expired
	at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
	at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)

> EDIT: this is the latest configuration I am using:
> https://gist.github.com/anonymous/9afe80ad604f9a3d3c00#file-output-L1
Best Answer

First, when running repeated tests with the file input, make sure you use sincedb_path => "/dev/null" so that the file is read from the beginning every time.
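For reference, a minimal file-input sketch with that setting, using the path from the question:

```
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    # /dev/null makes Logstash forget where it stopped reading,
    # so every run re-reads the file from the beginning (testing only)
    sincedb_path => "/dev/null"
  }
}
```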

Regarding the multiline, there must be something wrong either with the content you posted or with your multiline pattern, because none of your events carries the multiline tag, which the multiline codec (or the multiline filter) adds when it aggregates lines.
Your message field should contain all the lines separated by the newline character \n (\r\n on Windows, which looks like your case). This is the expected output for your input configuration:

{
"@timestamp" => "2015-02-10T11:03:33.298Z",
   "message" => " -  2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker)   user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r",
  "@version" => "1",
      "tags" => [
    [0] "multiline"
],
      "host" => "localhost",
      "path" => "/root/test.file"
}

Regarding grok, you should use a pattern like this one when you want to match a multiline string:
filter {
  grok {
    match => {"message" => [
      "(?m)^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %   {LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{DATA:mydata}\n%{GREEDYDATA:stack}",
      "^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata}"]
}

}
}

The (?m) prefix instructs the regex engine to do multiline matching.
You will then get an event like:
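To see concretely what the flag changes, here is a small Ruby sketch (Logstash grok runs on the same Oniguruma regex family; the string and class name below are made up for illustration):

```ruby
msg = "INFO user.country=US\n\tat oracle.jdbc.Foo.bar(Foo.java:1)"

# Without (?m), '.' does not match "\n", so a greedy '.*'
# (what GREEDYDATA expands to) stops at the end of the first line.
single = msg.match(/INFO (.*)/)[1]      # "user.country=US"

# With (?m), '.' also matches "\n", so the capture spans the stack trace.
multi  = msg.match(/(?m)INFO (.*)/)[1]  # includes the "\tat ..." line

puts single.inspect
puts multi.inspect
```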
{
"@timestamp" => "2015-02-10T10:47:20.078Z",
   "message" => " -  2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker)   user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r",
  "@version" => "1",
      "tags" => [
    [0] "multiline"
],
      "host" => "localhost",
      "path" => "/root/test.file",
      "time" => "2014-01-14 11:09:35,962",
      "main" => "main",
  "loglevel" => "INFO",
     "class" => "api.batch.ThreadPoolWorker",
    "mydata" => "  user.country=US\r",
     "stack" => "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r"
}

You can use the online tool http://grokconstructor.appspot.com/do/match to build and validate multiline patterns.

A final caveat: there is currently a bug in the Logstash file input with the multiline codec that mixes the content of several files together if you use a list or a wildcard in the path setting. The only workaround is to use the multiline filter instead of the codec.
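A sketch of that workaround, reusing the pattern from the codec above. The stream_identity setting is my assumption about how to keep events from different files separate; check the logstash-filter-multiline documentation for your version:

```
filter {
  multiline {
    pattern => "^ -  %{TIMESTAMP_ISO8601} "
    negate => true
    what => "previous"
    # assumed: key the aggregation on the file path so that lines
    # from different files are never merged into one event
    stream_identity => "%{path}"
  }
}
```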

HTH

EDIT: I was focused on the multiline string; you need to add a similar pattern for the non-multiline strings.

About elasticsearch - logstash multiline codec with Java stack trace, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28342474/
