
How should I understand the error CircuitBreakingException: [parent] Data too large?

Elasticsearch | by wajika | published 2019-08-14 | views: 487

The JVM heap was originally 3 GB and everything worked fine, but for the past few days the cluster keeps throwing this error:

[2019-08-13T22:05:14,870][DEBUG][o.e.a.a.c.n.i.TransportNodesInfoAction] [node-02] failed to execute on node [tkXXmENiRfe0mtYfXRuqBA]
org.elasticsearch.transport.RemoteTransportException: [node-01][192.168.10.248:9300][cluster:monitor/nodes/info[n]]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [2543027816/2.3gb], which is larger than the limit of [2521196134/2.3gb], real usage: [2543026456/2.3gb], new bytes reserved: [1360/1.3kb]

I raised the JVM heap to 4 GB, but the log still shows the same limit of [2521196134/2.3gb]:
[2019-08-13T22:13:44,960][WARN ][o.e.c.a.s.ShardStateAction] [node-02] unexpected failure while sending request [internal:cluster/shard/started] to [{node-01}{tkXXmENiRfe0mtYfXRuqBA}{o-yWUlZxT8WsPw1UzOq-0Q}{192.168.10.248}{192.168.10.248:9300}{ml.machine_memory=8201502720, ml.max_open_jobs=20, xpack.installed=true}] for shard entry [StartedShardEntry{shardId [[.monitoring-es-7-2019.08.14][0]], allocationId [nrzk-Rz9Sge1ZmCgYXECPw], primary term [4], message [after existing store recovery; bootstrap_history_uuid=false]}]
org.elasticsearch.transport.RemoteTransportException: [node-01][192.168.10.248:9300][internal:cluster/shard/started]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [2868683286/2.6gb], which is larger than the limit of [2521196134/2.3gb], real usage: [2868682904/2.6gb], new bytes reserved: [382/382b]
 
I have a few questions I hope someone can answer:
1. Transport is a synchronous operation, right? Why would the transport volume suddenly spike?
2. How can the transport volume be controlled?
3. Is the content carried over transport the indices data?
4. After applying the setting below and restarting, why does the log still show limit of [2521196134/2.3gb]? 80% of a 4 GB heap should be more than 2.3 GB.
    "transient" : {
         "indices.breaker.total.limit" : "80%"
     }
5. The log says node-02 ran short of resources while sending data to node-01, so is it node-01 that is short on memory?
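A quick sanity check on the arithmetic in question 4 (a minimal sketch; the byte values come from the log lines above):

```python
GB = 1024 ** 3

# Parent-breaker limit reported in the error message.
logged_limit = 2521196134  # "limit of [2521196134/2.3gb]"

# Expected limit if indices.breaker.total.limit = 80% on a 4 GB heap.
expected_4g = int(4 * GB * 0.80)

print(expected_4g)                 # 3435973836 bytes, ~3.2 GB
print(logged_limit < expected_4g)  # True
```

The logged limit is far below 80% of a 4 GB heap, which suggests the node that tripped the breaker was still running with the old, smaller heap when the error was logged.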

wajika


Question 4 is solved: I changed the JVM heap to 4 GB on every node in the cluster and restarted them one by one. The errors have stopped for now, but it feels like indices get lost on each restart.
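For reference, the heap change described above is made in config/jvm.options on each node (the 4 GB values mirror the fix above; min and max should be set equal, and each node needs a restart afterwards):

```
-Xms4g
-Xmx4g
```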

wajika


I found the answer regarding CircuitBreakingException: [parent] Data too large.
The JVM heap cannot hold the data needed by the current request, so the circuit breaker trips with "Data too large" and the request is rejected. Note that the breaker named in the log is [parent]: its limit is controlled by indices.breaker.total.limit, which defaults to 70% of the JVM heap (95% in 7.x when real-memory circuit breaking is enabled); the request breaker, indices.breaker.request.limit, defaults to 60%. Either way, increasing the ES heap size raises the breaker limits and resolves the problem.
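The defaults mentioned above can be put into numbers for the original 3 GB heap (a sketch; the percentages are the documented Elasticsearch breaker defaults, and the real-usage figure comes from the log above):

```python
GB = 1024 ** 3
heap = 3 * GB  # original heap size from the question

parent_70 = int(heap * 0.70)   # indices.breaker.total.limit default (real-memory breaking off)
parent_95 = int(heap * 0.95)   # 7.x default with real-memory circuit breaking
request_60 = int(heap * 0.60)  # indices.breaker.request.limit default

real_usage = 2543026456        # "real usage: [2543026456/2.3gb]" from the log
print(parent_70, parent_95, request_60)
print(real_usage > parent_70)  # True: usage exceeded a 70% parent limit
```

So with a 3 GB heap the reported real usage of ~2.3 GB was already over the 70% parent limit, which is consistent with the breaker tripping on transport requests as small as 382 bytes.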
