
Logstash merges consecutive spaces when parsing messages

Logstash | Author: yanhb | Published on December 25, 2018 | Views: 2990

We currently use Filebeat + Logstash + Elasticsearch for log parsing and storage, but Logstash merges multiple consecutive spaces in the message into a single space. How can this be avoided? I would appreciate any suggestions.
e.g.:
The log format is:
{"IP":"134.64.11.36","ID":"CHAN","REQ_STR":"1187   031181219173308498071102011101100         186443262111Beijinh2st1006    1100000","REQ_TIME":"2018.12.19 17:28:10.394","RESP_STR":"11211  031181219173308498071102011101100         186443262111Beijinh2st10061    100000330762265                9       310     20140220144353  3       3       13091   -??       8986011461310006768                     8       20140416144233","RESP_TIME":"2018.12.19 17:28:10.460","TIME_USED":"0.065783"}
 
After Logstash parses it, the result is:
{"IP":"134.64.11.36","ID":"CHAN","REQ_STR":"1187 031181219173308498071102011101100 186443262111Beijinh2st1006    1100000","REQ_TIME":"2018.12.19 17:28:10.394","RESP_STR":"11211 031181219173308498071102011101100 186443262111Beijinh2st10061 100000330762265 9 310 20140220144353 3 13091 -??  8986011461310006768 8 20140416144233","RESP_TIME":"2018.12.19 17:28:10.460","TIME_USED":"0.065783"}

yanhb - a post-90s IT guy


Below is my Logstash configuration. It basically just does a split and extracts the parts shown above.

input {
  # For detail config for log4j as input,
  # See: https://www.elastic.co/guide/e ... .html
  beats {
    # mode => "server"
    host => "192.168.75.31"
    port => 5044
  }
}
filter {
  # Original date filter: try to set @timestamp from the message using the
  # UNIX_MS pattern.
  date {
    match => ["message", "UNIX_MS"]
    target => "@timestamp"
  }
  # Shift @timestamp by +8 hours (local time adjustment), then drop the
  # helper field.
  ruby {
    code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  }
  ruby {
    code => "event.set('@timestamp', event.get('timestamp'))"
  }
  mutate {
    remove_field => ["timestamp"]
  }
  # Split the raw line on the literal " [H2_LOG]: " marker; the JSON payload
  # ends up in field2.
  mutate {
    split => ["message", " [H2_LOG]: "]
    add_field => {
      "field1" => "%{[message][0]}"
    }
    add_field => {
      "field2" => "%{[message][1]}"
    }
  }
  # Parse the JSON payload into the "log" field.
  json {
    source => "field2"
    target => "log"
  }
  # Remove the intermediate fields.
  mutate {
    remove_field => "message"
    remove_field => "field1"
    remove_field => "field2"
  }
}

output {
  # if "_jsonparsefailure" not in [tags] {
  elasticsearch {
    hosts => ["192.168.75.25:9200", "192.168.75.26:9200", "192.168.75.27:9200"]
    index => "applogtest"
  }
  # }
}
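
As a comparison, a minimal alternative sketch (an assumption, not from the original answer): the dissect filter cuts the line at the literal " [H2_LOG]: " delimiter in one step and copies both substrings verbatim, so the JSON payload, spaces included, reaches the json filter unchanged.

filter {
  # Hypothetical alternative to the mutate/split step above: dissect splits on
  # the literal " [H2_LOG]: " marker without modifying either side.
  dissect {
    mapping => { "message" => "%{field1} [H2_LOG]: %{field2}" }
  }
  # Same JSON parse as in the original configuration.
  json {
    source => "field2"
    target => "log"
  }
}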
