
Elasticsearch 5.5: when a document has more than 1,100 fields, bulk indexing fails with the exception: Limit of total fields [1000] in index [nfvoemspm] has been exceeded

Elasticsearch | Author: 王社英 | Published: 2019-01-30 | Views: 11546

[2019-01-30T01:35:25,985][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [nfvoemspm/P8RMhW7yRqiW98l56H-x5g] deleting index
[2019-01-30T01:36:26,114][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [nfvoemspm] creating index, cause [auto(bulk api)], templates [template_nfvo], shards [5]/[1], mappings []
[2019-01-30T01:36:26,278][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [node-1] failed to put mappings on indices [[[nfvoemspm/srjL3cMMRUqa7DgOrYqX-A]]], type [log]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [nfvoemspm] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:604) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:420) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:336) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:268) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:311) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:634) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:612) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:571) [elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.5.0.jar:5.5.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
[2019-01-30T01:36:26,279][DEBUG][o.e.a.b.TransportShardBulkAction] [node-1] [nfvoemspm][3] failed to execute bulk item (index) BulkShardRequest [[nfvoemspm][3]] containing [index {[nfvoemspm][log][AWicZnJnPZbaVi7Y3edD], source[n/a, actual length: [43.4kb], max length: 2kb]}]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [nfvoemspm] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:604) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:420) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:336) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:268) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:311) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:634) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:612) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:571) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) ~[elasticsearch-5.5.0.jar:5.5.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
[2019-01-30T01:37:06,159][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [nfvoemspm/srjL3cMMRUqa7DgOrYqX-A] create_mapping [log]

tacsklet - our company uses ES

Upvotes from: rochy, exceptions

Modify the index settings:
{
  "index.mapping.total_fields.limit": 2000
}

That said, do you really need that many fields in one index? It may be worth optimizing the design instead.
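
For reference, a sketch of how that setting could be applied to the existing index through the update-settings API (the index name `nfvoemspm` is taken from the logs above; substitute your own). Since the logs show the index is auto-created from `template_nfvo`, the same setting would also have to be added to that template for future indices to pick it up:

```
PUT nfvoemspm/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```

Note that raising the limit only buys headroom; each extra mapped field still adds cluster-state and memory overhead.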

God_lockin

Upvotes from:

It means exactly what it says: your index has too many fields. Yes, ES lets you build a wide, denormalized table and run all kinds of computations and aggregations on it, but do you really use that many fields?

Two suggestions:
1. Talk to the business side and the users; leave out the fields nobody actually uses, or set index: false on them in the mapping.
2. Or move this kind of multi-dimensional storage and computation to another big-data solution, such as Hive.
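
To illustrate suggestion 1, a hypothetical mapping sketch (the field name `raw_payload` is invented for illustration; the type name `log` matches the logs above). A field with `index: false` is still stored in `_source` and returned in results, but it cannot be searched, which saves indexing overhead:

```
PUT nfvoemspm_v2
{
  "mappings": {
    "log": {
      "properties": {
        "raw_payload": { "type": "keyword", "index": false }
      }
    }
  }
}
```

Be aware that a field mapped with `index: false` still counts as a mapped field, so this mainly reduces per-document indexing cost; to stay under the field limit, the unused fields have to be dropped from the documents or the mapping itself.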

zqc0512 - andy zhou

Upvotes from:

The message is clear enough: you've exceeded the default limit of 1000 fields.
You seriously have more than 1,000 fields? Do queries still perform okay?
 
