We have roughly 1000 Elasticsearch indices eating about 800 MB of heap that never gets reclaimed by GC. The indices use the global mapping template recommended by Logstash, and the more indices we create, the more memory is consumed. How can we solve this?
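To see where that heap is going, per-node segment memory can be inspected first; a minimal diagnostic sketch against a local node (host and port are assumptions):

# Heap held by segments (terms index, norms, doc values) across all indices on each node
curl -s 'localhost:9200/_nodes/stats/indices/segments?human&pretty'

# Per-segment breakdown; many small indices mean many segments, hence a high heap baseline
curl -s 'localhost:9200/_cat/segments?v'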
Part of the global mapping template is shown below:
"logstash": {"order": 0,
"template": "logstash-*",
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string"
},
"match_mapping_type": "string",
"match": "message"
}
}
,
{
"string_fields": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string",
"fields": {
"raw": {
"ignore_above": 256,
"index": "not_analyzed",
"type": "string",
"doc_values": true
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}
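For illustration, the effect of these dynamic templates can be observed by indexing a sample document into any logstash-* index and fetching its mapping back; the index name and document below are hypothetical:

# Index a test document; the template matches the logstash-* name
curl -XPUT 'localhost:9200/logstash-test/log/1' -d '{"message": "GET /index.html 200", "host": "web-1"}'

# Inspect the generated mapping: "message" is analyzed with fielddata disabled,
# while "host" also gets a not_analyzed "host.raw" sub-field backed by doc_values
curl -s 'localhost:9200/logstash-test/_mapping?pretty'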
2 replies
medcl - Fight the tiger tonight.
runc
"version": {
"number": "2.2.1",
"build_hash": "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
"build_timestamp": "2016-03-09T09:38:54Z",
"build_snapshot": false,
"lucene_version": "5.4.1"
}
Our system is designed to support multi-tenancy, with roughly one index per user per day, so the indices cannot be merged given the architecture and feature design. Also, do you mean that with this many indices, consuming this much memory is normal? If the indices themselves cannot be merged, what can be done to optimize?
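For reference, two mitigations commonly applied on ES 2.x when indices cannot be merged are closing cold indices and force-merging read-only ones; a minimal sketch (index names are hypothetical):

# A closed index keeps no segment data on the heap and can be reopened on demand
curl -XPOST 'localhost:9200/logstash-2016.01.*/_close'

# Indices that no longer receive writes can be force-merged down to one segment,
# which shrinks the per-index terms and norms memory
curl -XPOST 'localhost:9200/logstash-2016.02.01/_forcemerge?max_num_segments=1'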