
Can Elasticsearch's index.max_result_window only be changed once?

Elasticsearch | Author: wyntergreg | Posted 2017-03-08 | Views: 25328

The age-old from + size problem. My end goal is just traditional paginated queries, so I shouldn't need deep pagination, right?
es: 5.2.0
linux: CentOS 6.5
java: JDK 8
PUT _all/_settings?preserve_existing=true
{
  "index.max_result_window": "10000000"
}
This is the documented way to change the setting. The first time I ran it I set the value to 1,000,000, and ordinary paginated queries had no trouble at all, since my size is only 10. I wanted to try raising it to 10,000,000 or more, so I ran the request above, and that's where the trouble started:
{
  "acknowledged": true
}
The system reports that the change succeeded, but:
{
  "error": {
    "root_cause": [
      {
        "type": "query_shard_exception",
        "reason": "No mapping found for [@timestamp] in order to sort on",
        "index_uuid": "ug-vpPa-Twy_1SH_V4SJiQ",
        "index": ".kibana"
      },
      {
        "type": "query_shard_exception",
        "reason": "No mapping found for [@timestamp] in order to sort on",
        "index_uuid": "3gZPIK5OQmauiqPnZhK-6w",
        "index": "datacategory"
      },
      {
        "type": "query_phase_execution_exception",
        "reason": "Result window is too large, from + size must be less than or equal to: [1000000] but was [5349410]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": ".kibana",
        "node": "a0o7-VOkRaKVORapqQVMMQ",
        "reason": {
          "type": "query_shard_exception",
          "reason": "No mapping found for [@timestamp] in order to sort on",
          "index_uuid": "ug-vpPa-Twy_1SH_V4SJiQ",
          "index": ".kibana"
        }
      },
      {
        "shard": 0,
        "index": "datacategory",
        "node": "a0o7-VOkRaKVORapqQVMMQ",
        "reason": {
          "type": "query_shard_exception",
          "reason": "No mapping found for [@timestamp] in order to sort on",
          "index_uuid": "3gZPIK5OQmauiqPnZhK-6w",
          "index": "datacategory"
        }
      },
      {
        "shard": 0,
        "index": "test_json",
        "node": "a0o7-VOkRaKVORapqQVMMQ",
        "reason": {
          "type": "query_phase_execution_exception",
          "reason": "Result window is too large, from + size must be less than or equal to: [1000000] but was [5349410]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
        }
      }
    ],
    "caused_by": {
      "type": "query_phase_execution_exception",
      "reason": "Result window is too large, from + size must be less than or equal to: [1000000] but was [5349410]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
    }
  },
  "status": 500
}
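(For context: the search that triggered this was clearly a deep page, sorted on @timestamp and run without restricting the index. The exact request isn't shown in the question, so the sketch below is a reconstruction from the numbers in the error, where from + size = 5,349,410.)

GET _search
{
  "from": 5349400,
  "size": 10,
  "sort": [ "@timestamp" ]
}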
Searching still throws the error! So I checked the settings:
"settings": {
"index": {
"number_of_shards": "5",
"provided_name": "人工打码",
"max_result_window": "1000000",
"creation_date": "1488291811228",
"number_of_replicas": "1",
"uuid": "jbZjC9VmRGuO6oNHS3u_OA",
"version": {
"created": "5020099"
}
}
}
Unbelievable: max_result_window is still 1,000,000?
Running the update again no longer changes anything; neither a larger nor a smaller value takes effect, yet every update request still comes back as successful.
ES, are you kidding me?

Xargin

Upvoted by: shengtu0328

I tried this locally. It looks like using PUT _all to change max_result_window is internally equivalent to updating each index one by one (you can see it in the logs). After the change succeeds, if you then create a new index, the new index doesn't actually carry this max_result_window setting either.
 
So strictly speaking it is still an index-level setting... the _all form of the API is just a convenience for bumping max_result_window on all existing indices in one go; indices created later still get the default of 10,000 (I believe). If you do want new indices to pick up a higher limit automatically, see the sketch below.
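One option for that (a sketch, not something discussed in this thread; the template name and pattern are arbitrary examples) is an index template, which in 5.x looks like:

PUT _template/raise_result_window
{
  "template": "*",
  "settings": {
    "index.max_result_window": "10000000"
  }
}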
 
Tested on: 5.2.2
 
That's all.
 
The real problem with your update is the preserve_existing=true query-string parameter; just read it literally...
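In other words, preserve_existing=true tells Elasticsearch to leave any setting that already exists on an index untouched. The first call set max_result_window to 1,000,000, so every later call with that flag is silently skipped, even though it still returns "acknowledged": true. A minimal sketch of the fix is simply to drop the flag:

PUT _all/_settings
{
  "index.max_result_window": "10000000"
}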

medcl - Tonight we fight tigers.


What happens if you replace _all with the specific index name when making the change?
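For example, using the index name that appears in the error output (test_json here; substitute your own):

PUT test_json/_settings
{
  "index.max_result_window": "10000000"
}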

Xargin


This error is expected. max_result_window is an index-level setting, not a global one. If you only changed one index but the search doesn't specify that index, the indices that don't have the raised max_result_window will still complain.
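So if only one index has the raised limit, point the search at that index explicitly instead of searching everything; a rough sketch, with the index name and paging values borrowed from the error above:

GET test_json/_search
{
  "from": 5349400,
  "size": 10
}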
 
work as expected
 
 
=================
orz, on closer inspection you did use _all after all. Let me try it out and come back to edit this.
