
Logstash shows a large number of CLOSE_WAIT connections, and Filebeat keeps getting i/o timeouts

Anonymous | Posted on 2019-03-19 | Views: 4603

Filebeat keeps reporting errors:
2019-03-19T17:55:58+08:00 INFO No non-zero metrics in the last 30s
2019-03-19T17:56:01+08:00 ERR Failed to publish events (host: xxxxxx:5000:10200), caused by: read tcp xxxxx:35314->xxxxx:5000: i/o timeout
2019-03-19T17:56:01+08:00 INFO Error publishing events (retrying): read tcp xxxxxxx:35314->xxxxxx:5000: i/o timeout
 
Logstash also reports errors, and there are a lot of CLOSE_WAIT connections, all of them connections from Filebeat:
[2019-03-19T18:00:41,228][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@2b0d8f39 on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@9438ff2[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 5505510]]"})
[2019-03-19T18:00:41,228][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@3173674e on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@9438ff2[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 5505510]]"})
[2019-03-19T18:00:41,228][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>4}
 
My initial read is that the ES side is under heavy load, but why do the logs only get written after each restart of the Filebeat process? Hoping an expert can explain. The Logstash input config is:
input {
  beats {
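    # accept events from Beats clients (Filebeat here) on port 5000,
    # decoding each event payload as JSON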
    port => "5000"
    codec => "json"
  }
}

The output just writes directly to Elasticsearch.
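For reference, a minimal sketch of what such an output block typically looks like (the host address and index name below are placeholders, not taken from the original post):

output {
  elasticsearch {
    hosts => ["http://your-es-host:9200"]   # placeholder ES address
    index => "filebeat-%{+YYYY.MM.dd}"      # placeholder daily index name
  }
}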

bellengao - Blog: https://www.jianshu.com/u/e0088e3e2127


The bulk queue is full, so ES is rejecting write requests. After you restart Filebeat, some of the requests already sitting in the queue may get processed, but writes will be rejected again soon after; you need to scale out the ES cluster.
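As a quick check, the rejection counters of the bulk thread pool can be inspected with the _cat API (a sketch; note that from ES 6.3 onward this pool was renamed from bulk to write):

GET _cat/thread_pool/bulk?v&h=node_name,active,queue,rejected,completed

A steadily growing rejected count here confirms that indexing pressure on ES, not the Filebeat/Logstash link itself, is the bottleneck.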

wq131311


Could I solve this by deleting indexes to reduce the total shard count? The strange thing is that on Alibaba Cloud ES, the indexes come back automatically after I delete them.
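For reference, per-index primary and replica shard counts can be listed with the _cat API before deciding what to delete (a sketch using standard _cat column names):

GET _cat/indices?v&h=index,pri,rep,docs.count,store.size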
