
The master node's server went down and the service was unavailable for 5 minutes

Elasticsearch | Author: xiaowoniu | Published 2020-04-06 | Views: 6963

ES version 6.3; discovery.zen.minimum_master_nodes is set to 4.
The cluster has 6 nodes on 6 servers. The server hosting the master node went down and the service became unavailable.
 
[2020-04-05T17:02:30]: the server hosting node1 went down (node1 was the master)
[2020-04-05T17:04:07,376]: node2 detected master_left, reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2020-04-05T17:04:10,400]: node2 appears to have been elected as the new master
[2020-04-05T17:04:27,642]: no master known yet, a retry was scheduled: [node2] no known master node, scheduling a retry
[2020-04-05T17:04:40,402]: node2 waited for all nodes to process the published cluster state, but node1 timed out
[2020-04-05T17:04:40,409]: looks like another election round was kicked off
[2020-04-05T17:05:13,371]: node2 again waited for all nodes to process the published state, but node1 timed out
[2020-04-05T17:06:10,439]: node1 was removed from the cluster
After that it seems to have recovered, but queries were unavailable during this whole window!
Why is electing a master so laborious? It looks like just removing node1 took a very long time?
And in the no-master state, shouldn't queries be unaffected by default?
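
For reference, a sketch of the discovery settings involved (not our exact production file; the fd.* values shown are the 6.x defaults, which match the "tried [3] times, each with maximum [30s] timeout" in the log, and minimum_master_nodes = 4 is the quorum for 6 master-eligible nodes, i.e. 6/2 + 1):

    # elasticsearch.yml on each of the 6 nodes (sketch)
    discovery.zen.minimum_master_nodes: 4       # quorum for 6 master-eligible nodes: 6/2 + 1
    discovery.zen.fd.ping_timeout: 30s          # per-ping timeout used by master/node fault detection (6.x default)
    discovery.zen.fd.ping_retries: 3            # ping attempts before a node is declared gone (6.x default)
    discovery.zen.no_master_block: write        # which operations are rejected while there is no master (6.x default)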
 
The log is as follows:
[2020-04-05T17:04:07,376][INFO ][o.e.d.z.ZenDiscovery     ] [node2] master_left [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], reason [failed to ping, tried [3] times, each with  maximum [30s] timeout]
[2020-04-05T17:04:07,380][WARN ][o.e.d.z.ZenDiscovery     ] [node2] master left (reason = failed to ping, tried [3] times, each with  maximum [30s] timeout), current nodes: nodes: 
   {node3}{SImMreROSvWh3gxchPMTUg}{tshM2s5URoufj283uDsZpQ}{xx.xx.66.82}{xx.xx.66.82:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}
   {node2}{2Xyt650gR92WeqP9z3wCkQ}{xz6zMjuhTZiiw0ErCpk3pA}{xx.xx.167.65}{xx.xx.167.65:9300}{ml.machine_memory=67491926016, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, local
   {node6}{rZcq35YSSpSI3AGthCQ8Rg}{mS3ASliaSNGWRoW_-jsfTg}{xx.xx.56.32}{xx.xx.56.32:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}
   {node4}{8ZhF8ptsSdSlepC_pwTwHg}{OfjBH-PtS1e7EaYBSe-47Q}{xx.xx.73.208}{xx.xx.73.208:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}
   {node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, master
   {node5}{TjU2vYVNT9qrF10R4K9gAw}{TvtmoM05RQC1xOP-lGKF-g}{xx.xx.105.78}{xx.xx.105.78:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}

[2020-04-05T17:04:07,384][INFO ][o.e.x.w.WatcherService   ] [node2] stopping watch service, reason [no master node]
[2020-04-05T17:04:10,400][INFO ][o.e.c.s.MasterService    ] [node2] zen-disco-elected-as-master ([4] nodes joined)[, ], reason: new_master {node2}{2Xyt650gR92WeqP9z3wCkQ}{xz6zMjuhTZiiw0ErCpk3pA}{xx.xx.167.65}{xx.xx.167.65:9300}{ml.machine_memory=67491926016, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2020-04-05T17:04:11,134][WARN ][o.e.d.z.UnicastZenPing   ] [node2] failed to send ping to [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node1][xx.xx.174.203:9300][internal:discovery/zen/unicast] request_id [196984720] timed out after [3751ms]
        at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:987) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2020-04-05T17:04:12,134][WARN ][o.e.d.z.UnicastZenPing   ] [node2] failed to send ping to [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node1][xx.xx.174.203:9300][internal:discovery/zen/unicast] request_id [196984744] timed out after [3750ms]
        at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:987) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2020-04-05T17:04:13,134][WARN ][o.e.d.z.UnicastZenPing   ] [node2] failed to send ping to [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node1][xx.xx.174.203:9300][internal:discovery/zen/unicast] request_id [196984768] timed out after [3750ms]
        at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:987) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2020-04-05T17:04:27,642][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [node2] no known master node, scheduling a retry
[2020-04-05T17:04:40,402][WARN ][o.e.d.z.PublishClusterStateAction] [node2] timed out waiting for all nodes to process published state [5248] (timeout [30s], pending nodes: [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}])
[2020-04-05T17:04:40,409][INFO ][o.e.c.s.ClusterApplierService] [node2] new_master {node2}{2Xyt650gR92WeqP9z3wCkQ}{xz6zMjuhTZiiw0ErCpk3pA}{xx.xx.167.65}{xx.xx.167.65:9300}{ml.machine_memory=67491926016, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {node2}{2Xyt650gR92WeqP9z3wCkQ}{xz6zMjuhTZiiw0ErCpk3pA}{xx.xx.167.65}{xx.xx.167.65:9300}{ml.machine_memory=67491926016, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [5248] source [zen-disco-elected-as-master ([4] nodes joined)[, ]]])
[2020-04-05T17:04:40,435][WARN ][o.e.c.s.MasterService    ] [node2] cluster state update task [zen-disco-elected-as-master ([4] nodes joined)[, ]] took [30s] above the warn threshold of 30s
[2020-04-05T17:04:43,368][INFO ][o.e.c.m.MetaDataMappingService] [node2] [.watcher-history-7-2020.04.05/bnmgAvTvQ_i2ps7e4NS7xg] update_mapping [doc]
[2020-04-05T17:04:45,414][WARN ][o.e.x.w.WatcherLifeCycleService] [node2] failed to start watcher. please wait for the cluster to become ready or try to start Watcher manually
org.elasticsearch.ElasticsearchTimeoutException: java.util.concurrent.TimeoutException: Timeout waiting for task.
        at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:72) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:54) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:49) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.xpack.watcher.WatcherService.loadWatches(WatcherService.java:215) ~[?:?]
        at org.elasticsearch.xpack.watcher.WatcherService.start(WatcherService.java:130) ~[?:?]
        at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.start(WatcherLifeCycleService.java:140) ~[?:?]
        at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.lambda$clusterChanged$3(WatcherLifeCycleService.java:204) ~[?:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
        at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:235) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:69) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:70) ~[elasticsearch-6.3.2.jar:6.3.2]
        ... 10 more
[2020-04-05T17:04:55,433][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node2] failed to execute on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node1][xx.xx.174.203:9300][cluster:monitor/nodes/stats[n]] request_id [196985096] timed out after [15000ms]
        at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:987) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2020-04-05T17:05:00,332][ERROR][o.e.x.m.c.i.IndexStatsCollector] [node2] collector [index-stats] timed out when collecting data
[2020-04-05T17:05:10,334][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [node2] collector [cluster_stats] timed out when collecting data
[2020-04-05T17:05:13,371][WARN ][o.e.d.z.PublishClusterStateAction] [node2] timed out waiting for all nodes to process published state [5249] (timeout [30s], pending nodes: [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}])
[2020-04-05T17:05:13,377][WARN ][o.e.c.s.MasterService    ] [node2] cluster state update task [put-mapping[doc]] took [30s] above the warn threshold of 30s
[2020-04-05T17:05:18,375][WARN ][o.e.x.w.WatcherLifeCycleService] [node2] failed to start watcher. please wait for the cluster to become ready or try to start Watcher manually
org.elasticsearch.ElasticsearchTimeoutException: java.util.concurrent.TimeoutException: Timeout waiting for task.
        at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:72) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:54) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:49) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.xpack.watcher.WatcherService.loadWatches(WatcherService.java:215) ~[?:?]
        at org.elasticsearch.xpack.watcher.WatcherService.start(WatcherService.java:130) ~[?:?]
        at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.start(WatcherLifeCycleService.java:140) ~[?:?]
        at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.lambda$clusterChanged$3(WatcherLifeCycleService.java:204) ~[?:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
        at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:235) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:69) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:70) ~[elasticsearch-6.3.2.jar:6.3.2]
        ... 10 more
[2020-04-05T17:05:20,347][ERROR][o.e.x.m.c.m.JobStatsCollector] [node2] collector [job_stats] timed out when collecting data
[2020-04-05T17:05:25,428][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node2] failed to execute on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node1][xx.xx.174.203:9300][cluster:monitor/nodes/stats[n]] request_id [196985740] timed out after [15000ms]
        at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:987) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2020-04-05T17:05:30,363][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [node2] collector [index_recovery] timed out when collecting data
[2020-04-05T17:05:50,332][ERROR][o.e.x.m.c.i.IndexStatsCollector] [node2] collector [index-stats] timed out when collecting data
[2020-04-05T17:06:00,333][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [node2] collector [cluster_stats] timed out when collecting data
[2020-04-05T17:06:10,334][ERROR][o.e.x.m.c.m.JobStatsCollector] [node2] collector [job_stats] timed out when collecting data
[2020-04-05T17:06:10,438][INFO ][o.e.c.r.a.AllocationService] [node2] Cluster health status changed from [GREEN] to [YELLOW] (reason: []).
[2020-04-05T17:06:10,439][INFO ][o.e.c.s.MasterService    ] [node2] zen-disco-node-failed({node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout), reason: removed {{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}
[2020-04-05T17:06:10,523][INFO ][o.e.c.s.ClusterApplierService] [node2] removed {{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {node2}{2Xyt650gR92WeqP9z3wCkQ}{xz6zMjuhTZiiw0ErCpk3pA}{xx.xx.167.65}{xx.xx.167.65:9300}{ml.machine_memory=67491926016, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [5250] source [zen-disco-node-failed({node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout)]])
[2020-04-05T17:06:10,541][DEBUG][o.e.a.a.i.r.TransportRecoveryAction] [node2] failed to execute [indices:monitor/recovery] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/recovery[n]] disconnected
[2020-04-05T17:06:10,541][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,541][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60412, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,542][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,542][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60412, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,542][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,542][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60412, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,543][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60412, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,543][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [node2] failed to execute [indices:monitor/stats] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,543][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,543][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60412, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,543][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,544][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node2] failed to execute on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/nodes/stats[n]] disconnected
[2020-04-05T17:06:10,544][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,544][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,545][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [node2] failed to execute [indices:monitor/stats] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,545][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node2] failed to execute on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/nodes/stats[n]] disconnected
[2020-04-05T17:06:10,545][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,545][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,545][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,546][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,546][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,547][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,547][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,547][DEBUG][o.e.a.a.c.s.TransportClusterStatsAction] [node2] failed to execute on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,548][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,548][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,549][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [node2] send message failed [channel: NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:60408, remoteAddress=xx.xx.174.203/xx.xx.174.203:9300}]
java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.close(...)(Unknown Source) ~[?:?]
[2020-04-05T17:06:10,567][DEBUG][o.e.a.a.c.s.TransportClusterStatsAction] [node2] failed to execute on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,573][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [node2] failed to execute [indices:monitor/stats] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,573][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [node2] failed to execute [indices:monitor/stats] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,577][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [node2] failed to execute [indices:monitor/stats] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/stats[n]] disconnected
[2020-04-05T17:06:10,578][DEBUG][o.e.a.a.i.r.TransportRecoveryAction] [node2] failed to execute [indices:monitor/recovery] on node [4dMawxqUTimg0HZr4uI6QA]
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][indices:monitor/recovery[n]] disconnected
[2020-04-05T17:06:10,579][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [node2] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{node1}{4dMawxqUTimg0HZr4uI6QA}{6TNNcnduRRWMBzQ_42wYmw}{xx.xx.174.203}{xx.xx.174.203:9300}{ml.machine_memory=67491926016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/health] disconnected]
[2020-04-05T17:06:10,581][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [node2] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/health] disconnected
[2020-04-05T17:06:10,581][WARN ][r.suppressed             ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[node1][xx.xx.174.203:9300][cluster:monitor/health] disconnected]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:223) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:145) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:117) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:208) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:108) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:194) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1103) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.transport.TransportService.lambda$onConnectionClosed$11(TransportService.java:931) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [node1][xx.xx.174.203:9300][cluster:monitor/health] disconnected
[2020-04-05T17:06:10,589][INFO ][o.e.c.r.DelayedAllocationService] [node2] scheduling reroute for delayed shards in [59.8s] (26 delayed shards)
[2020-04-05T17:06:10,597][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node2] updating number_of_replicas to [4] for indices [.security-6]
[2020-04-05T17:06:10,614][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node2] [.security-6/p-i84FuVQsKmvqtNjAfUFg] auto expanded replicas to [4]
[2020-04-05T17:06:10,651][WARN ][o.e.c.r.a.AllocationService] [node2] [mryx_search_product_han_20200326][0] marking unavailable shards as stale: [C4cr-BhSRJaePvfhGiiFOA]
[2020-04-05T17:06:10,651][WARN ][o.e.c.r.a.AllocationService] [node2] [.watches][0] marking unavailable shards as stale: [e8vCpdEySm6JlFYiMNUfhA]
[2020-04-05T17:06:10,651][WARN ][o.e.c.r.a.AllocationService] [node2] [.watcher-history-7-2020.04.05][0] marking unavailable shards as stale: [XgPLppgsTdG5lKIkQ5ERsg]
[2020-04-05T17:06:10,651][WARN ][o.e.c.r.a.AllocationService] [node2] [mryx_search_product_hanlp_20200319][0] marking unavailable shards as stale: [k9LP1rsiT8ijP42xwTU6RA]
[2020-04-05T17:08:18,878][INFO ][o.e.c.m.MetaDataMappingService] [node2] [.watcher-history-7-2020.04.05/bnmgAvTvQ_i2ps7e4NS7xg] update_mapping [doc]

God_lockin

It's actually easy to understand. When the other nodes lose contact with the big boss (the master, node1), they don't know whether it's a network blip, packet loss, or something else, so their first reaction is to keep calling him: if the boss hasn't actually lost connectivity or crashed, he may well be back within a timeout period, everything carries on as if nothing had happened, and the only trace is a log entry saying the boss went wandering for a while.
 
But once a timeout period has expired and the boss still hasn't returned, he has probably met with an accident or wandered off for good. The cluster can still serve queries, but tasks such as cluster-state synchronization and index planning all have to be driven by a leader, so if the old boss won't be back any time soon, the only option is to elect a new one to take charge (node2).
 
In the no-master state most services are unaffected, but as noted above, tasks like state synchronization must be initiated by the master, so a masterless cluster cannot carry out 100% of its work.

byx313 - Blog: https://www.jianshu.com/u/43fd06f9589c

"In the no-master state, shouldn't queries be unaffected by default?"
Take a look at the discovery.zen.no_master_block setting. It defaults to write, which rejects writes while there is no master but still serves reads from the last-known cluster state; setting it to all rejects both reads and writes.
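
A minimal elasticsearch.yml sketch of the two possible values (write is the 6.x default):

    discovery.zen.no_master_block: write   # reject writes only; reads are served from the last-known cluster state
    # discovery.zen.no_master_block: all   # reject both reads and writes while there is no elected master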

laoyang360 - Author of 《一本书讲透Elasticsearch》, Elastic Certified Engineer. [死磕Elasticsearch] Knowledge Planet: http://t.cn/RmwM3N9; WeChat official account: 铭毅天下; Blog: https://elastic.blog.csdn.net

One dimension of why it took so long: fault detection retries 3 times, and each check has a 30s timeout.
There are other time components as well, and they are all visible in your log.
Check along those lines.
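
An approximate reconstruction of where the roughly four minutes went, based on the timestamps and timeouts visible in the log above (an estimate, not an exact accounting):

    # 17:02:30  node1's server dies
    # 17:04:07  node2 declares master_left: master fault detection = fd.ping_retries (3) x fd.ping_timeout (30s) ≈ 90s
    # 17:04:10  node2 wins the election (zen-disco-elected-as-master, 4 nodes joined)
    # 17:04:40  first cluster-state publish times out after 30s (discovery.zen.publish_timeout) waiting for node1
    # 17:05:13  second publish (put-mapping) times out after another 30s waiting for node1
    # 17:06:10  node1 fails node fault detection from the new master (again 3 x 30s) and is removed from the cluster
    # The knobs involved are discovery.zen.fd.ping_timeout / ping_retries and discovery.zen.publish_timeout.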
