
ES keeps allocating all shards of a new index to the same node

Elasticsearch | Author: coding_hl | Published 2022-08-11 | Views: 1745

es 7.2
No shard allocation settings have been configured; I have been through the docs.
Disk capacity is normal, the shard count on each node is normal, and the index settings are normal.
But all 7 shards of a newly created index were allocated to the same node, and several test indices I created all behave the same way.
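For reference, per-node shard placement can be confirmed with the _cat APIs; a minimal check, assuming the index is named test as in the explain output below:

GET _cat/shards/test?v&h=index,shard,prirep,state,node
GET _cat/allocation?v

The node column shows where each shard copy landed, and _cat/allocation summarizes shard counts and disk usage per data node.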
 
GET _cluster/allocation/explain
{
    "index": "test",
    "shard": 3,
    "primary": false,
    "current_state": "unassigned",
    "unassigned_info": {
        "reason": "INDEX_CREATED",
        "at": "2022-08-11T09:49:22.628Z",
        "last_allocation_status": "no_attempt"
    },
    "can_allocate": "throttled",
    "allocate_explanation": "allocation temporarily throttled",
    "target_node": {
        "id": "HsdSZfmhTe-jVdm8qIdYAw",
        "name": "data7",
        "transport_address": "XXX:9301",
        "attributes": {
            "ml.machine_memory": "134936035328",
            "ml.max_open_jobs": "20",
            "xpack.installed": "true"
        }
    },
    "node_allocation_decisions": [
        {
            "node_id": "HsdSZfmhTe-jVdm8qIdYAw",
            "node_name": "data7",
            "transport_address": "XXXXX:9301",
            "node_attributes": {
                "ml.machine_memory": "134936035328",
                "ml.max_open_jobs": "20",
                "xpack.installed": "true"
            },
            "node_decision": "throttled",
            "weight_ranking": 4,
            "deciders": [
                {
                    "decider": "max_retry",
                    "decision": "YES",
                    "explanation": "shard has no previous failures"
                },
                {
                    "decider": "replica_after_primary_active",
                    "decision": "YES",
                    "explanation": "primary shard for this replica is already active"
                },
                {
                    "decider": "enable",
                    "decision": "YES",
                    "explanation": "all allocations are allowed"
                },
                {
                    "decider": "node_version",
                    "decision": "YES",
                    "explanation": "can allocate replica shard to a node with version [7.2.0] since this is equal-or-newer than the primary version [7.2.0]"
                },
                {
                    "decider": "snapshot_in_progress",
                    "decision": "YES",
                    "explanation": "the shard is not being snapshotted"
                },
                {
                    "decider": "restore_in_progress",
                    "decision": "YES",
                    "explanation": "ignored as shard is not being recovered from a snapshot"
                },
                {
                    "decider": "filter",
                    "decision": "YES",
                    "explanation": "node passes include/exclude/require filters"
                },
                {
                    "decider": "same_shard",
                    "decision": "YES",
                    "explanation": "the shard does not exist on the same node"
                },
                {
                    "decider": "disk_threshold",
                    "decision": "YES",
                    "explanation": "enough disk for shard on node, free: [4.7tb], shard size: [0b], free after allocating shard: [4.7tb]"
                },
                {
                    "decider": "throttling",
                    "decision": "THROTTLE",
                    "explanation": "reached the limit of outgoing shard recoveries [2] on the node [NcrBra9qT12dO8amMCtCNQ] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=2] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
                },
                {
                    "decider": "shards_limit",
                    "decision": "YES",
                    "explanation": "total shard limits are disabled: [index: -1, cluster: -1] <= 0"
                },
                {
                    "decider": "awareness",
                    "decision": "YES",
                    "explanation": "allocation awareness is not enabled, set cluster setting [cluster.routing.allocation.awareness.attributes] to enable it"
                }
            ]
        }
    ]
}
What is going on here?
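For what it is worth, the only non-YES decider above is throttling: the replica is queued because the node holding the primary already has 2 outgoing recoveries. That only delays replica allocation and does not by itself explain primaries piling onto one node. If the limit did need raising, a sketch (the value 4 is illustrative, not a recommendation):

GET _cluster/settings?include_defaults=true&flat_settings=true

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.node_concurrent_recoveries": 4
    }
}

The GET shows the current cluster.routing.allocation.node_concurrent_* values; the PUT applies a transient override that is lost on a full cluster restart.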

w455091555 - a beginner interested in open source technology


I seem to recall the client can measure request latency to each node? I remember there is a setting along those lines; maybe give that a try.

Charele - Cisco4321


You have not really given enough detail.
For example, how many nodes are in your cluster? Are they all healthy? And do they all have the data role?

As for "all 7 shards of the new index were allocated to the same node": are all 7 of them primary shards?
If so, the explain output you posted is for a replica shard and has nothing to do with the problem.
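Both checks can be done with the _cat APIs; a minimal sketch, again assuming the index is named test:

GET _cat/nodes?v&h=name,node.role,master
GET _cat/shards/test?v&h=index,shard,prirep,state,node

In _cat/nodes, a node can hold shards only if node.role contains d; in _cat/shards, the prirep column marks each copy as p (primary) or r (replica).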
