Error message: [elasticsearch.server][WARN] failing shard [failed shard, shard [busi_v202010][192], node[5lAJiB_6TIevyuvKwHvfWA], [R], s[STARTED], a[id=4887uAdfTV6BhbZF3sbH6g], message [failed to perform indices:data/write/bulk[s] on replica [busi_v202010][192], node[5lAJiB_6TIevyuvKwHvfWA], [R], s[STARTED], a[id=4887uAdfTV6BhbZF3sbH6g]], failure [RemoteTransportException[[nod-16.207-2][10.11.16.207:9301][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [30668109502/28.5gb], which is larger than the limit of [30369601945/28.2gb], real usage: [30666952568/28.5gb], new bytes reserved: [1156934/1.1mb], usages [request=0/0b, fielddata=32679/31.9kb, in_flight_requests=1586882/1.5mb, accounting=706596702/673.8mb]]; ], markAsStale [true]]
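The exception comes from the parent circuit breaker: in 7.x it is based on real heap usage by default (indices.breaker.total.use_real_memory: true), and the 28.2gb limit in the log is 95% of the 31G heap. A minimal sketch, assuming the elasticsearch-py client and a reachable HTTP port on the node from the log (the host/port here are assumptions), to inspect per-node parent breaker usage:

from elasticsearch import Elasticsearch

# Assumption: the node's HTTP port is 9200 (the log only shows the 9301 transport port).
es = Elasticsearch("http://10.11.16.207:9200")

# Per-node breaker stats: "parent" reports limit_size (here 28.2gb), the current
# estimated_size (real heap usage), and how many times the breaker has tripped.
stats = es.nodes.stats(metric="breaker")
for node_id, node in stats["nodes"].items():
    parent = node["breakers"]["parent"]
    print(node["name"], parent["limit_size"], parent["estimated_size"], parent["tripped"])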
Software version: 7.3.1
JVM heap: 31G
fielddata size: 20%
index_buffer_size: 10%
thread_pool.write.queue_size: 2000
Scenario: when loading data into an index with replicas enabled, this error appears with only one or two load processes running; with replicas disabled the data loads normally.
The data is loaded with the elasticsearch-hadoop connector.
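With replicas enabled, every bulk request is replayed on the replica shard, so the cluster absorbs roughly twice the transport and indexing load; shrinking the per-request batch on the elasticsearch-hadoop side is a common way to stay under the parent breaker. A minimal sketch, assuming a PySpark job with the connector jar on the classpath (the source path, node address, and chosen values are assumptions; the es.* option names are standard elasticsearch-hadoop settings):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("busi-bulk-load").getOrCreate()
df = spark.read.parquet("/data/busi")  # hypothetical source data

(df.write
   .format("org.elasticsearch.spark.sql")
   .option("es.nodes", "10.11.16.207")         # assumption: one data node's address
   .option("es.port", "9200")                  # HTTP port, not the 9301 transport port
   .option("es.batch.size.entries", "500")     # fewer documents per bulk request (default 1000)
   .option("es.batch.size.bytes", "1mb")       # cap on bulk request size per task
   .option("es.batch.write.retry.count", "6")  # retry bulk rejections instead of failing the task
   .mode("append")
   .save("busi_v202010"))

Another option that matches the observed behaviour (loading works without replicas) is to set index.number_of_replicas to 0 on busi_v202010 for the duration of the load and restore it afterwards, so replica recovery happens once at the end instead of on every bulk request.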