The age-old from+size problem. My end goal is just traditional paged queries, so I shouldn't need deep pagination, right?
es:5.2.0
linux:centos6.5
java:jdk8
PUT _all/_settings?preserve_existing=true
{
"index.max_result_window" : "10000000"
}
This is the documented way to change the setting. The first time I ran it I set the window to 1,000,000, and ordinary paged queries had no trouble at all, since my size is only 10 per page. Then I wanted to try 10,000,000 or more, so I sent the request above, and that's where the trouble started:
{
"acknowledged": true
}
The system reports the update as successful, but:
{
"error": {
"root_cause": [
{
"type": "query_shard_exception",
"reason": "No mapping found for [@timestamp] in order to sort on",
"index_uuid": "ug-vpPa-Twy_1SH_V4SJiQ",
"index": ".kibana"
},
{
"type": "query_shard_exception",
"reason": "No mapping found for [@timestamp] in order to sort on",
"index_uuid": "3gZPIK5OQmauiqPnZhK-6w",
"index": "datacategory"
},
{
"type": "query_phase_execution_exception",
"reason": "Result window is too large, from + size must be less than or equal to: [1000000] but was [5349410]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": ".kibana",
"node": "a0o7-VOkRaKVORapqQVMMQ",
"reason": {
"type": "query_shard_exception",
"reason": "No mapping found for [@timestamp] in order to sort on",
"index_uuid": "ug-vpPa-Twy_1SH_V4SJiQ",
"index": ".kibana"
}
},
{
"shard": 0,
"index": "datacategory",
"node": "a0o7-VOkRaKVORapqQVMMQ",
"reason": {
"type": "query_shard_exception",
"reason": "No mapping found for [@timestamp] in order to sort on",
"index_uuid": "3gZPIK5OQmauiqPnZhK-6w",
"index": "datacategory"
}
},
{
"shard": 0,
"index": "test_json",
"node": "a0o7-VOkRaKVORapqQVMMQ",
"reason": {
"type": "query_phase_execution_exception",
"reason": "Result window is too large, from + size must be less than or equal to: [1000000] but was [5349410]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
}
}
],
"caused_by": {
"type": "query_phase_execution_exception",
"reason": "Result window is too large, from + size must be less than or equal to: [1000000] but was [5349410]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
}
},
"status": 500
}
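For context, the request that trips this error is presumably an ordinary paged search across all indices whose from + size adds up to 5,349,410, sorted on @timestamp (which is also why .kibana and datacategory complain about a missing mapping for the sort field). A reconstructed sketch of that kind of request follows; the exact offset split and the _all target are my guess, not the original query:

GET _all/_search
{
  "from": 5349400,
  "size": 10,
  "sort": [ { "@timestamp": "desc" } ]
}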
Queries still fail with the same error! So I check the index settings:
"settings": {
"index": {
"number_of_shards": "5",
"provided_name": "人工打码",
"max_result_window": "1000000",
"creation_date": "1488291811228",
"number_of_replicas": "1",
"uuid": "jbZjC9VmRGuO6oNHS3u_OA",
"version": {
"created": "5020099"
}
}
}
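If it helps, the same check can be run across every index in one call; filter_path is just response filtering that trims the output down to the one setting:

GET _all/_settings?filter_path=*.settings.index.max_result_window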
You have to be kidding me: max_result_window is still 1,000,000? Re-running the update changes nothing, whether I make the value bigger or smaller, yet every single run reports success.
ES, are you messing with me?
3 replies
Xargin
Upvoted by: shengtu0328
The root problem with your update is the preserve_existing=true query string; take the English at face value: it preserves settings that already exist, so any index that already has max_result_window set is deliberately left untouched.
So strictly speaking this is still an index-level setting. The _all API is only a convenience for bumping max_result_window on every existing index in one go; indices created later will still get the default of 10,000 (I believe).
Tested on: 5.2.2
That's all.
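Taking that answer at face value, a minimal sketch of an update that should actually stick: drop preserve_existing (or set it to false) so values that already exist get overwritten, then read one index back to confirm. And since the _all call only touches indices that already exist, an index template is one way to give future indices the larger window too; the template name below is made up:

PUT _all/_settings
{
  "index.max_result_window": 10000000
}

GET test_json/_settings

PUT _template/raise_result_window
{
  "template": "*",
  "settings": {
    "index.max_result_window": 10000000
  }
}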
medcl - Tonight, I fight the tiger.
Upvoted by:
Xargin
Upvoted by:
work as expected
=================
orz, on a closer look it actually uses _all; let me go try it and come back to edit.