Setting the `node.name` parameter lets you customize the name of an Elasticsearch node. This tip was contributed by medcl.
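As a quick illustration (a minimal sketch, assuming a locally reachable cluster and the official Python client; the node name shown is hypothetical), you can verify the configured name via the nodes info API:

```python
from elasticsearch import Elasticsearch

# In each node's elasticsearch.yml you might set, for example:
#   node.name: data-node-01   # hypothetical custom name

es = Elasticsearch("http://localhost:9200")  # adjust for your cluster

# List each node's id alongside its configured node.name.
for node_id, info in es.nodes.info()["nodes"].items():
    print(node_id, info["name"])
```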
shard

Does Elasticsearch 7.x no longer need its shard count tuned?

Elasticsearch • laoyang360 replied to the question • 5 followers • 3 replies • 10174 views • 2020-05-27 08:31

The official recommendation for shard size is 20-40GB; what is the underlying rationale?

Elasticsearch • medcl replied to the question • 7 followers • 2 replies • 5812 views • 2018-12-13 10:19

Shard allocation strategy for indices in an ES cluster

Elasticsearch • yayg2008 replied to the question • 6 followers • 3 replies • 9253 views • 2018-09-19 09:31

A question about scaling shards in ES

Elasticsearch • rochy replied to the question • 5 followers • 3 replies • 3787 views • 2018-09-03 12:36

Reposting an excellent article on designing the number of shards

Elasticsearch • lz8086 published an article • 2 comments • 3259 views • 2018-05-28 18:34

How many shards should I have in my Elasticsearch cluster?

Elasticsearch is a very versatile platform that supports a variety of use cases and provides great flexibility around data organisation and replication strategies. This flexibility can, however, sometimes make it hard to determine up front how best to organize your data into indices and shards, especially if you are new to the Elastic Stack. While suboptimal choices will not necessarily cause problems when first starting out, they have the potential to cause performance problems as data volumes grow over time. The more data the cluster holds, the more difficult it also becomes to correct the problem, as reindexing large amounts of data can sometimes be required.

When we come across users that are experiencing performance problems, it is not uncommon that this can be traced back to issues around how data is indexed and the number of shards in the cluster. This is especially true for use cases involving multi-tenancy and/or time-based indices. When discussing this with users, either in person at events or meetings or via our forum, some of the most common questions are "How many shards should I have?" and "How large should my shards be?". This blog post aims to help you answer these questions and provide practical guidelines, in a single place, for use cases that involve time-based indices, e.g. logging or security analytics.

What is a shard?

Before we start, we need to establish some facts and terminology that we will need in later sections. Data in Elasticsearch is organized into indices. Each index is made up of one or more shards. Each shard is an instance of a Lucene index, which you can think of as a self-contained search engine that indexes and handles queries for a subset of the data in an Elasticsearch cluster.

As data is written to a shard, it is periodically published into new immutable Lucene segments on disk, and it is at this point that it becomes available for querying. This is referred to as a refresh. How this works is described in greater detail in Elasticsearch: the Definitive Guide. As the number of segments grows, they are periodically consolidated into larger segments. This process is referred to as merging. As all segments are immutable, disk usage will typically fluctuate during indexing, because new, merged segments need to be created before the ones they replace can be deleted. Merging can be quite resource intensive, especially with respect to disk I/O.

The shard is the unit at which Elasticsearch distributes data around the cluster. The speed at which Elasticsearch can move shards around when rebalancing data, e.g. following a failure, will depend on the size and number of shards as well as network and disk performance.

TIP: Avoid having very large shards, as this can negatively affect the cluster's ability to recover from failure. There is no fixed limit on how large shards can be, but a shard size of 50GB is often quoted as a limit that has been seen to work for a variety of use cases.
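To make the index/shard relationship above concrete, here is a minimal sketch using the official Python client; the index name, shard counts, and cluster address are illustrative assumptions, not part of the original post:

```python
# pip install elasticsearch
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # adjust for your cluster

# Create an index with an explicit number of primary shards and replicas.
# The primary shard count is fixed at creation time; replicas can be changed later.
es.indices.create(
    index="logs-2018.05.28",  # hypothetical time-based index name
    body={
        "settings": {
            "number_of_shards": 3,    # three Lucene indices spread across the cluster
            "number_of_replicas": 1,  # one copy of each primary for redundancy
        }
    },
)

# Documents become searchable only once a refresh publishes new segments.
es.index(index="logs-2018.05.28", body={"message": "hello"})
es.indices.refresh(index="logs-2018.05.28")
```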
Index by retention period

As segments are immutable, updating a document requires Elasticsearch to first find the existing document, then mark it as deleted and add the updated version. Deleting a document also requires the document to be found and marked as deleted. For this reason, deleted documents will continue to tie up disk space and some system resources until they are merged out, which can consume a lot of system resources.

Elasticsearch allows complete indices to be deleted very efficiently directly from the file system, without explicitly having to delete all records individually. This is by far the most efficient way to delete data from Elasticsearch.

TIP: Try to use time-based indices for managing data retention whenever possible. Group data into indices based on the retention period. Time-based indices also make it easy to vary the number of primary shards and replicas over time, as this can be changed for the next index to be generated. This simplifies adapting to changing data volumes and requirements.

Are indices and shards not free?

For each Elasticsearch index, information about mappings and state is stored in the cluster state. This is kept in memory for fast access. Having a large number of indices in a cluster can therefore result in a large cluster state, especially if mappings are large. This can become slow to update, as all updates need to be done through a single thread in order to guarantee consistency before the changes are distributed across the cluster.

TIP: In order to reduce the number of indices and avoid large and sprawling mappings, consider storing data with similar structure in the same index rather than splitting it into separate indices based on where the data comes from. It is important to find a good balance between the number of indices and the mapping size for each individual index.

Each shard has data that needs to be kept in memory and uses heap space. This includes data structures holding information at the shard level, but also at the segment level in order to define where data resides on disk. The size of these data structures is not fixed and will vary depending on the use case. One important characteristic of the segment-related overhead, however, is that it is not strictly proportional to the size of the segment: larger segments have less overhead per data volume than smaller segments, and the difference can be substantial. In order to store as much data as possible per node, it becomes important to manage heap usage and reduce the amount of overhead as much as possible. The more heap space a node has, the more data and shards it can handle. Indices and shards are therefore not free from a cluster perspective, as there is some level of resource overhead for each index and shard.

TIP: Small shards result in small segments, which increases overhead. Aim to keep the average shard size between a few GB and a few tens of GB. For use cases with time-based data, it is common to see shards between 20GB and 40GB in size.

TIP: As the overhead per shard depends on the segment count and size, forcing smaller segments to merge into larger ones through a forcemerge operation can reduce overhead and improve query performance. This should ideally be done once no more data is written to the index. Be aware that this is an expensive operation that should ideally be performed during off-peak hours.

TIP: The number of shards you can hold on a node will be proportional to the amount of heap you have available, but there is no fixed limit enforced by Elasticsearch. A good rule of thumb is to keep the number of shards per node below 20 to 25 per GB of configured heap. A node with a 30GB heap should therefore have a maximum of 600-750 shards, but the further below this limit you can keep it, the better. This will generally help the cluster stay in good health.
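The retention and overhead tips above map onto a few API calls. A hedged sketch, reusing the hypothetical `es` client and `logs-*` naming from the previous example:

```python
# Once a daily index is no longer written to, merge it down to a single
# segment to reduce per-shard heap overhead and speed up queries.
# This is I/O-heavy, so schedule it during off-peak hours.
es.indices.forcemerge(index="logs-2018.05.27", max_num_segments=1)

# Dropping a whole index is far cheaper than deleting documents one by one:
# the files are simply removed from disk. Here we delete indices past a
# 30-day retention window (wildcard deletion may require
# action.destructive_requires_name to be false; names are illustrative).
es.indices.delete(index="logs-2018.04.*")

# Rule of thumb from the post: at most 20-25 shards per GB of heap.
heap_gb = 30
max_shards = 25 * heap_gb  # 750 shards for a node with a 30GB heap
print(f"Keep this node below roughly {max_shards} shards")
```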
How does shard size affect performance?

In Elasticsearch, each query is executed in a single thread per shard. Multiple shards can, however, be processed in parallel, as can multiple queries and aggregations against the same shard. This means that the minimum query latency, when no caching is involved, will depend on the data, the type of query, and the size of the shard. Querying lots of small shards will make the processing per shard faster, but as many more tasks need to be queued up and processed in sequence, it is not necessarily going to be faster than querying a smaller number of larger shards. Having lots of small shards can also reduce query throughput if there are multiple concurrent queries.

TIP: The best way to determine the maximum shard size from a query performance perspective is to benchmark using realistic data and queries. Always benchmark with a query and indexing load representative of what the node would need to handle in production, as optimizing for a single query might give misleading results.

How do I manage shard size?

When using time-based indices, each index has traditionally been associated with a fixed time period. Daily indices are very common, and are often used for holding data with a short retention period or large daily volumes. These allow the retention period to be managed with good granularity and make it easy to adjust for changing volumes on a daily basis. Data with a longer retention period, especially if the daily volumes do not warrant the use of daily indices, often uses weekly or monthly indices in order to keep the shard size up. This reduces the number of indices and shards that need to be stored in the cluster over time.

TIP: If using time-based indices covering a fixed period, adjust the period each index covers based on the retention period and expected data volumes in order to reach the target shard size.

Time-based indices with a fixed time interval work well when data volumes are reasonably predictable and change slowly. If the indexing rate can vary quickly, it is very difficult to maintain a uniform target shard size. In order to better handle these types of scenarios, the Rollover and Shrink APIs were introduced. These add a lot of flexibility to how indices and shards are managed, specifically for time-based indices.

The rollover index API makes it possible to specify the number of documents an index should contain and/or the maximum period documents should be written to it. Once one of these criteria has been exceeded, Elasticsearch can trigger a new index to be created for writing, without downtime. Instead of having each index cover a specific time period, it is now possible to switch to a new index at a specific size, which makes it possible to more easily achieve an even shard size across all indices. In cases where data might be updated, there is no longer a distinct link between the timestamp of the event and the index it resides in when using this API, which may make updates significantly less efficient, as each update may need to be preceded by a search.

TIP: If you have time-based, immutable data where volumes can vary significantly over time, consider using the rollover index API to achieve an optimal target shard size by dynamically varying the time period each index covers. This gives great flexibility and can help avoid having too large or too small shards when volumes are unpredictable.

The shrink index API allows you to shrink an existing index into a new index with fewer primary shards. If an even spread of shards across nodes is desired during indexing, but this would result in too small shards, this API can be used to reduce the number of primary shards once the index is no longer indexed into. This will result in larger shards, better suited for longer-term storage of data.

TIP: If you need to have each index cover a specific time period but still want to be able to spread indexing out across a large number of nodes, consider using the shrink API to reduce the number of primary shards once the index is no longer indexed into. This API can also be used to reduce the number of shards in case you have initially configured too many.
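A sketch of how the rollover and shrink APIs fit together, again with the hypothetical Python client; alias names, index names, and thresholds are illustrative, and the exact rollover conditions available depend on your Elasticsearch version:

```python
# Rollover: write through an alias; when a condition is exceeded, a new
# backing index is created and the alias switches to it without downtime.
es.indices.rollover(
    alias="logs-write",  # hypothetical write alias
    body={
        "conditions": {
            "max_age": "7d",          # roll over after a week...
            "max_docs": 100_000_000,  # ...or after 100M documents
        }
    },
)

# Shrink: once an index is no longer written to, collapse it to fewer
# primary shards. The source must be read-only and all its shards must
# be co-located on one node first.
es.indices.put_settings(
    index="logs-000007",
    body={
        "index.blocks.write": True,
        "index.routing.allocation.require._name": "node-1",  # hypothetical node
    },
)
es.indices.shrink(
    index="logs-000007",
    target="logs-000007-shrunk",
    body={
        "settings": {
            # Target shard count must be a factor of the source's count.
            "index.number_of_shards": 1,
            "index.routing.allocation.require._name": None,  # clear the pin
        }
    },
)
```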
Conclusions

This blog post has provided tips and practical guidelines around how best to manage data in Elasticsearch. If you are interested in learning more, "Elasticsearch: the Definitive Guide" contains a section about designing for scale, which is well worth reading even though it is a bit old. A lot of the decisions around how best to distribute your data across indices and shards will, however, depend on the use-case specifics, and it can sometimes be hard to determine how best to apply the advice available. For more in-depth and personal advice, you can engage with us commercially through a subscription and let our Support and Consulting teams help accelerate your project. If you are happy to discuss your use case in the open, you can also get help from our community through our public forum.

Are the primary shards of a single index stored on the same node, or spread across different nodes?

Elasticsearch • bill replied to the question • 2 followers • 1 reply • 2505 views • 2018-05-07 12:39

ES cluster loses shards when the replica count is set to 5; please take a look

Elasticsearch • wq131311 replied to the question • 2 followers • 2 replies • 2629 views • 2017-11-17 11:01

Allocation of replicas

Elasticsearch • c981337 replied to the question • 2 followers • 2 replies • 5071 views • 2017-05-22 11:58

A question about ES cluster planning

Elasticsearch • gfswsry replied to the question • 6 followers • 2 replies • 5193 views • 2016-11-14 19:35

The impact of shard count on Elasticsearch search performance

Elasticsearch • helloes replied to the question • 4 followers • 1 reply • 7174 views • 2016-03-04 16:34

kennywu76

kennywu76 answered the question • 2018-09-18 17:57 • 3 replies

Shard allocation strategy for indices in an ES cluster

You can give the nodes in different data centers different values of a rack_id attribute in the ES configuration file, then set rack_id as a shard allocation awareness attribute in the cluster settings. ES will then ensure that primary shards and their replicas are placed on nodes with different rack_id values.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/6.4/allocation-awareness.html
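A minimal sketch of that setup with the Python client (rack names and the client address are illustrative); the node attribute lives in each node's elasticsearch.yml, while the awareness attribute can be applied as a dynamic cluster setting:

```python
from elasticsearch import Elasticsearch

# In each node's elasticsearch.yml, per data center (illustrative values):
#   node.attr.rack_id: rack_one    # nodes in the first data center
#   node.attr.rack_id: rack_two    # nodes in the second data center

es = Elasticsearch("http://localhost:9200")  # adjust for your cluster

# Tell the cluster to spread primaries and replicas across rack_id values,
# so a primary and its replica do not share the same rack_id.
es.cluster.put_settings(
    body={
        "persistent": {
            "cluster.routing.allocation.awareness.attributes": "rack_id"
        }
    }
)
```

Note that standard awareness spreads copies across attribute values on a best-effort basis; forced awareness (cluster.routing.allocation.awareness.force.rack_id.values) is the stricter variant that refuses allocation rather than co-locating copies.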