
[ES 2.3.3] Investigating ES memory consumption

Elasticsearch | Author: Keenbo | Published 2017-03-14 | Views: 8337

Test setup: two machines, each with 4TB of SSD storage and 32GB of RAM.
Every primary shard has one replica, and 16GB of RAM is allocated to ES.
Once a machine's storage is more than 80% full (nearly 7 billion documents in total), heap usage sits above 90%, and the old generation even hits 100%.
If I close all indices, memory drops to a very low level; after reopening them, with no read or write activity at all, memory consumption immediately climbs back above 90%.

In this situation, what is the main consumer of ES memory? Is it segment memory?

Keenbo


 
"segments": {
    "count": 384,
    "memory": "3.2gb",
    "memory_in_bytes": 3541899192,
    "terms_memory": "3.1gb",
    "terms_memory_in_bytes": 3411171256,
    "stored_fields_memory": "124.6mb",
    "stored_fields_memory_in_bytes": 130666880,
    "term_vectors_memory": "0b",
    "term_vectors_memory_in_bytes": 0,
    "norms_memory": "24kb",
    "norms_memory_in_bytes": 24576,
    "doc_values_memory": "35.6kb",
    "doc_values_memory_in_bytes": 36480,
    "index_writer_memory": "0b",
    "index_writer_memory_in_bytes": 0,
    "index_writer_max_memory": "5.3mb",
    "index_writer_max_memory_in_bytes": 5632000,
    "version_map_memory": "0b",
    "version_map_memory_in_bytes": 0,
    "fixed_bit_set": "0b",
    "fixed_bit_set_memory_in_bytes": 0
},
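The per-component figures above should add up to the reported segment memory total, and they show that the terms dictionary dominates. A quick sanity check in Python, with the values copied from the stats paste:

```python
# Per-component segment memory, copied from the node stats above.
terms = 3_411_171_256        # terms_memory_in_bytes
stored_fields = 130_666_880  # stored_fields_memory_in_bytes
norms = 24_576               # norms_memory_in_bytes
doc_values = 36_480          # doc_values_memory_in_bytes

total = terms + stored_fields + norms + doc_values
print(total)                               # 3541899192 -- matches memory_in_bytes
print(f"terms share: {terms / total:.1%}") # terms dominate at ~96.3%
```

So of the ~3.2GB of segment memory on this node, almost all of it is terms memory.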



"young": {
    "used": "1gb",
    "used_in_bytes": 1145372672,
    "max": "1gb",
    "max_in_bytes": 1145372672,
    "peak_used": "1gb",
    "peak_used_in_bytes": 1145372672,
    "peak_max": "1gb",
    "peak_max_in_bytes": 1145372672
},
"survivor": {
    "used": "8.6mb",
    "used_in_bytes": 9115536,
    "max": "136.5mb",
    "max_in_bytes": 143130624,
    "peak_used": "136.5mb",
    "peak_used_in_bytes": 143130624,
    "peak_max": "136.5mb",
    "peak_max_in_bytes": 143130624
},
"old": {
    "used": "2.6gb",
    "used_in_bytes": 2862477872,
    "max": "2.6gb",
    "max_in_bytes": 2863333376,
    "peak_used": "2.6gb",
    "peak_used_in_bytes": 2863120360,
    "peak_max": "2.6gb",
    "peak_max_in_bytes": 2863333376
}
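Summing the three pool maxima shows how big this node's heap actually is, with the values copied from the JVM stats above:

```python
# Heap pool capacities (max_in_bytes), copied from the JVM stats above.
young = 1_145_372_672
survivor = 143_130_624
old = 2_863_333_376

heap_max = young + survivor + old
print(heap_max)                       # 4151836672
print(f"{heap_max / 2**30:.2f} GiB")  # ~3.87 GiB, nowhere near 16GB
```

That is, the heap on this node is under 4GB, which is the discrepancy kennywu76 points out.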

piaofeng84


Same question here; I've also recently run into what looks like ES being very memory-hungry.

kennywu76 - Wood


The memory is indeed being eaten by segment memory. But the heap here is under 4GB total; didn't you say 16GB was allocated?

kepmoving - 90后


The heap is set to 16GB, and segment memory is allocated out of the heap. On a 32GB machine, the remaining memory is most likely consumed by Lucene's index files (held in the OS filesystem cache). Check how much memory fielddata is using in your indices; a memory profiling tool can also show the detailed breakdown.
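To follow this suggestion, per-node fielddata usage can be read from the node stats API (`GET /_nodes/stats/indices/fielddata`). A minimal sketch of pulling the figure out of the response; the response dict here is a hypothetical excerpt standing in for a live call, and the node id and byte value are made up for illustration:

```python
# Hypothetical excerpt of a GET /_nodes/stats/indices/fielddata response.
# In practice, build this with json.loads() on the HTTP response body.
stats = {
    "nodes": {
        "node_id_1": {  # placeholder node id
            "indices": {
                "fielddata": {
                    "memory_size_in_bytes": 536870912,  # illustrative value
                    "evictions": 0,
                }
            }
        }
    }
}

# Report fielddata memory per node, in MB.
for node_id, node in stats["nodes"].items():
    fd = node["indices"]["fielddata"]
    print(node_id, fd["memory_size_in_bytes"] / 2**20, "MB fielddata")
```

If fielddata is small, as it often is when doc_values are in use, the non-heap memory pressure is almost certainly the filesystem cache, which the OS reclaims under pressure and which is expected (and desirable) for Lucene performance.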
 
