Nginx access logs and error logs are being collected into a single Kafka topic. In the Logstash configuration, how can I inspect each event's content and write it to different files accordingly?
My current configuration:
input {
  kafka {
    type => "test1"
    group_id => "test1"
    client_id => "XXX"
    bootstrap_servers => "127.0.0.1:9300"
    security_protocol => "SASL_PLAINTEXT"
    sasl_mechanism => "PLAIN"
    jaas_path => "/data/www/kafka-jaas.conf"
    topics => ["test1"]
    codec => plain {}
  }
}
output {
  if ([type] == "test1") {
    file {
      path => "/data/www/applog/google.com/applog.log"
      codec => line {
        format => "%{message}"
      }
    }
  }
}
2 replies
elastic_daniel - newbie
Upvoted by: bossLeon
input {
  kafka {
    type => "aaa"
    group_id => "aaa"
    client_id => "logstash_110_aaa"
    bootstrap_servers => "xxx:9110"
    security_protocol => "SASL_PLAINTEXT"
    sasl_mechanism => "PLAIN"
    jaas_path => "/data/workspace//kafka-jaas.conf"
    topics => ["aaa"]
    codec => plain {}
  }
}
filter {
  grok {
    match => { "message" => "your pattern goes here, e.g. one that matches lines containing the word new" }
    add_tag => [ "new" ]
    tag_on_failure => [] # prevent the default _grokparsefailure tag on non-matching events
  }
  grok {
    match => { "message" => "your pattern goes here, e.g. one that matches lines containing the word show" }
    add_tag => [ "show" ]
    tag_on_failure => [] # prevent the default _grokparsefailure tag on non-matching events
  }
}
output {
  if ([type] == "aaa" and "new" in [tags]) {
    file {
      path => "/data/www/applog/new.log"
      codec => line {
        format => "%{message}"
      }
    }
  }
  if ([type] == "aaa" and "show" in [tags]) {
    file {
      path => "/data/www/applog/show.log"
      codec => line {
        format => "%{message}"
      }
    }
  }
}
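As a lighter-weight alternative to tagging with grok, Logstash conditionals can also match on the event content directly with a regex. A minimal sketch for the original access/error use case, assuming standard nginx error-log lines carry a level marker such as [error] or [warn] (the file paths here are illustrative):

output {
  if [message] =~ /\[(error|warn|crit)\]/ {
    # looks like an nginx error-log line
    file {
      path => "/data/www/applog/nginx_error.log"
      codec => line { format => "%{message}" }
    }
  } else {
    # everything else is treated as an access-log line
    file {
      path => "/data/www/applog/nginx_access.log"
      codec => line { format => "%{message}" }
    }
  }
}

This avoids the filter stage entirely, at the cost of doing the classification inline in the output block; if the events later need parsed fields as well, the grok approach above is the better fit.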
tongchuan1992 - never stop learning, learn in order to apply
Upvoted by: