Logstash shows a large number of CLOSE_WAIT connections, and Filebeat keeps reporting i/o timeouts
Anonymous | Posted 2019-03-19 | Views: 4585
Filebeat keeps logging these errors:
2019-03-19T17:55:58+08:00 INFO No non-zero metrics in the last 30s
2019-03-19T17:56:01+08:00 ERR Failed to publish events (host: xxxxxx:5000:10200), caused by: read tcp xxxxx:35314->xxxxx:5000: i/o timeout
2019-03-19T17:56:01+08:00 INFO Error publishing events (retrying): read tcp xxxxxxx:35314->xxxxxx:5000: i/o timeout
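The i/o timeout suggests Filebeat gives up waiting for an ACK from Logstash before the batch is processed. As a sketch (the hostname and values are illustrative, not from the original post), the relevant knobs on the Filebeat side live in filebeat.yml:

```
output.logstash:
  hosts: ["logstash-host:5000"]   # hypothetical hostname
  # Wait longer for an ACK before declaring i/o timeout (default 30s)
  timeout: 120
  # Smaller batches put less pressure on a backed-up pipeline (default 2048)
  bulk_max_size: 1024
```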
Logstash also reports errors, and there are many CLOSE_WAIT sockets; these are the connections from Filebeat:
[2019-03-19T18:00:41,228][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@2b0d8f39 on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@9438ff2[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 5505510]]"})
[2019-03-19T18:00:41,228][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@3173674e on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@9438ff2[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 5505510]]"})
[2019-03-19T18:00:41,228][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>4}
My initial read is that the ES side is overloaded (the bulk thread pool is saturated: 8 active threads, queue of 200 at capacity), but why do logs only start flowing again after I restart the Filebeat process each time? Any help would be appreciated.
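To confirm the CLOSE_WAIT buildup on the Logstash host, the sockets on the beats port can be counted (a generic diagnostic sketch; port 5000 comes from the input config in the post):

```shell
# CLOSE_WAIT on the Logstash side means Filebeat closed its end
# (after its i/o timeout) but Logstash never close()d the socket.
# Count such sockets bound locally to the beats port, 5000:
netstat -ant | awk '$6 == "CLOSE_WAIT" && $4 ~ /:5000$/' | wc -l
```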
input {
  beats {
    port => "5000"
    codec => "json"
  }
}
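One commonly suggested knob on this input is client_inactivity_timeout: when the pipeline is stalled by ES backpressure, the beats input closes connections it considers idle, which feeds the timeout/reconnect churn. A sketch with an illustrative value (the default is 60s):

```
input {
  beats {
    port => "5000"
    codec => "json"
    # Let Filebeat connections stay open longer while the pipeline
    # is backed up (default 60; 300 is an illustrative value)
    client_inactivity_timeout => 300
  }
}
```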
The output section simply writes straight to Elasticsearch.
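Since the 429s show the ES bulk thread pool full, reducing how hard Logstash pushes bulk requests may relieve the pressure. A sketch of the pipeline settings in logstash.yml (values are illustrative, not from the post):

```
# logstash.yml
pipeline.workers: 4        # fewer concurrent bulk senders
pipeline.batch.size: 125   # smaller bulk requests per worker
```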
2 replies

bellengao - blog: https://www.jianshu.com/u/e0088e3e2127

wq131311