I'm a complete newbie to Logstash and have searched all the documentation. I've tried a few things, but none of them worked. I have a log with lines like this:
[2014-06-03 17:00:27,696][INFO ][node ] [Savage Steel] initialized
[2014-06-03 17:00:27,697][INFO ][node ] [Savage Steel] starting ...
[2014-06-03 17:00:27,824][INFO ][transport ] [Savage Steel] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.35.142.60:9300]}
[2014-06-03 17:00:30,981][INFO ][cluster.service ] [Savage Steel] new_master [Savage Steel][Sb9jmVPZTgGsK1Yyj_xG-A][20EX17512][inet[/10.35.142.60:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-03 17:00:31,030][INFO ][discovery ] [Savage Steel] elasticsearch/Sb9jmVPZTgGsK1Yyj_xG-A
[2014-06-03 17:00:31,062][INFO ][gateway ] [Savage Steel] recovered [0] indices into cluster_state
[2014-06-03 17:00:31,098][INFO ][http ] [Savage Steel] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.35.142.60:9200]}
In case you're wondering, these are ElasticSearch logs. I want to match the lines that contain the word "bound_address" and add a field called "test field".
My Logstash config file is as follows:
input {
  file {
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
    path => ["C:\Users\spanguluri\Downloads\elasticsearch\logs\elasticsearch.log"]
    start_position => "beginning"
  }
}
filter {
  grok {
    match => [ "message", "%{YEAR:annual}" ]
    add_field => { "foo_field" => "hello world, from %{host}" }
  }
  if ([message] =~ /bound_address/) {
    add_field => { "test_field" => "test field" }
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "localhost"
    port => "9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
When I start Logstash, it keeps complaining:
expected one of #, { at line 18, column 12 (byte 378) after filter
Could someone take a look? Thanks!
Best Answer
There is no filter named add_field. You can change this:

if ([message] =~ /bound_address/) {
  add_field => { "test_field" => "test field" }
}

to something like this, using the mutate filter:

if ([message] =~ /bound_address/) {
  mutate {
    add_field => { "test_field" => "test field" }
  }
}
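Putting it all together, the corrected filter section from the question would then look something like the sketch below (the grok match and the foo_field from the original config are kept unchanged; only the conditional block is wrapped in a mutate filter):

```
filter {
  grok {
    match => [ "message", "%{YEAR:annual}" ]
    add_field => { "foo_field" => "hello world, from %{host}" }
  }
  # Conditionals select events; the field itself must be added
  # inside a filter plugin such as mutate.
  if [message] =~ /bound_address/ {
    mutate {
      add_field => { "test_field" => "test field" }
    }
  }
}
```

The key point is that add_field is an option common to filter plugins, not a filter in its own right, so it can only appear inside a plugin block like grok or mutate, never directly inside an if conditional.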
Regarding "logging - Logstash - add a field to lines (events) containing a word", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/24025230/