Every time a new record is appended to bayes.log, Filebeat publishes all of the existing records again, not just the new one.

Filebeat config:
filebeat.prospectors:
- type: log
  paths:
    - /Users/king/logstash/logs/bayes.log
  tail_files: true

filebeat.shutdown_timeout: 5s

output.logstash:
  hosts: ["localhost:5044"]
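For context, Filebeat decides what is "new" by the byte offsets it persists in its registry file; if that state is lost or reset between runs, the whole file is re-published. Below is a hypothetical sketch of what a Filebeat 6.x registry entry looks like and how to inspect it — the path, offset, and timestamp are made up, and the exact registry location depends on the install:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical example of a Filebeat 6.x registry: a JSON array with one
# state per harvested file (path, offset, and timestamp are made up).
fake_registry = [
    {"source": "/Users/king/logstash/logs/bayes.log",
     "offset": 1523,
     "timestamp": "2018-01-10T15:58:49+08:00"}
]

reg = Path(tempfile.mkdtemp()) / "registry"
reg.write_text(json.dumps(fake_registry))

# If the recorded offset resets to 0 between runs (or the entry vanishes),
# Filebeat has lost its state and will read the file from the beginning.
for state in json.loads(reg.read_text()):
    print(state["source"], state["offset"])
```

If the offset in the real registry keeps resetting between runs (for example because Filebeat is started from a different working directory each time), that alone would explain the re-publishing.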
Logstash config:

input {
  beats {
    port => "5044"
  }
}

filter {
  grok {
    match => {
      "message" => "(?<request_time>\d\d:\d\d:\d\d\.\d+)\s+\[(?<thread_id>[\w\-]+)\]\s+(?<log_level>\w+)\s+(?<class_name>\w+)\s+\-(?<detail>.+)"
    }
  }
}

output {
  stdout { codec => rubydebug }
}
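If the grok pattern itself ever needs checking (separately from the re-publishing issue), it can be exercised outside Logstash. A sketch in Python, with the named-group syntax switched from Oniguruma's `(?<name>…)` to Python's `(?P<name>…)`, run against a made-up sample line:

```python
import re

# The grok pattern from the Logstash config, with (?<name>...) rewritten
# to Python's (?P<name>...) named-group syntax.
GROK = (r"(?P<request_time>\d\d:\d\d:\d\d\.\d+)\s+"
        r"\[(?P<thread_id>[\w\-]+)\]\s+"
        r"(?P<log_level>\w+)\s+"
        r"(?P<class_name>\w+)\s+\-(?P<detail>.+)")

# Hypothetical log line shaped the way the pattern expects.
sample = "07:58:49.044 [main] INFO BayesTrainer - model updated"

m = re.match(GROK, sample)
print(m.group("request_time"), m.group("log_level"))  # 07:58:49.044 INFO
```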
Filebeat's log after it publishes the events:
2018/01/10 07:58:49.044372 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=5695216 beat.memstats.memory_alloc=2871816 beat.memstats.memory_total=4431568 filebeat.events.added=6 filebeat.events.done=6 filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.config.module.running=0 libbeat.output.events.acked=4 libbeat.output.events.batches=1 libbeat.output.events.total=4 libbeat.output.read.bytes=6 libbeat.output.write.bytes=500 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=0 libbeat.pipeline.events.filtered=2 libbeat.pipeline.events.published=4 libbeat.pipeline.events.retry=4 libbeat.pipeline.events.total=6 libbeat.pipeline.queue.acked=4 registrar.states.cleanup=1 registrar.states.current=1 registrar.states.update=6 registrar.writes=3
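One detail worth pulling out of that INFO line: `libbeat.pipeline.events.retry=4` means the batch was retried, and since Beats delivers at-least-once, a retried batch can reach Logstash more than once and look like duplicated records. A small helper (the sample string below is abbreviated from the line above) to turn the key=value metrics into a dict so counters are easy to compare between runs:

```python
# Parse the space-separated key=value metrics into a dict so individual
# counters (e.g. retries vs. acked events) are easy to compare run to run.
line = ("beat.info.uptime.ms=30000 filebeat.events.added=6 "
        "libbeat.output.events.acked=4 libbeat.output.events.total=4 "
        "libbeat.pipeline.events.retry=4")

metrics = {k: int(v) for k, v in (pair.split("=") for pair in line.split())}
print(metrics["libbeat.pipeline.events.retry"])  # 4
```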
Following solutions I found online, I added the two settings below, but the problem persists:

tail_files: true
filebeat.shutdown_timeout: 5s
Does anyone know what is causing this?
Thanks!
5 replies