
How can an elasticsearch-jdbc data import be optimized?

Elasticsearch | Author: sinomall | Published 2017-01-19 | Views: 4592

While importing 2 billion records into a test-environment Elasticsearch server using an elasticsearch-jdbc script, I found that with 10 concurrent threads the server running the script receives huge traffic from the Elasticsearch server — enough to saturate the NIC bandwidth. What are these requests, and how can this be optimized?
 
elasticsearch-version: 2.4.3

sinomall - writes code, shoots three-pointers

Upvoted by: AlixMu

After much trial and error and reading through the docs, the 2-billion-record import finally finished in four and a half hours.
The key was "divide and conquer".
Build the ES cluster as 10 instances with 32 GB each; the official docs recommend keeping each ES instance at no more than 32 GB, because GC on large heaps is very costly.
The import tool splits the 2 billion records into 10 parts and imports them into the instances in parallel, so the whole cluster's import capacity is put to use.
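The splitting step above can be sketched as follows — a minimal illustration of partitioning a 2-billion-row ID keyspace into 10 contiguous ranges, one per importer/instance. The `partition_ranges` helper and the `WHERE id BETWEEN` clause are hypothetical, not part of elasticsearch-jdbc itself:

```python
# Sketch: partition an ID keyspace into N contiguous ranges,
# one range per Elasticsearch instance / importer process.

def partition_ranges(min_id, max_id, parts):
    """Split [min_id, max_id] into `parts` near-equal inclusive ranges."""
    total = max_id - min_id + 1
    size = total // parts
    ranges = []
    start = min_id
    for i in range(parts):
        # The last partition absorbs any remainder.
        end = max_id if i == parts - 1 else start + size - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# 2 billion rows split across 10 importers:
for i, (lo, hi) in enumerate(partition_ranges(1, 2_000_000_000, 10)):
    print(f"instance-{i}: WHERE id BETWEEN {lo} AND {hi}")
```

Each importer then runs its own SQL statement restricted to one range, so the ranges never overlap and together cover every row.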
 
The official docs also have some indexing-optimization advice worth a look —
a few tips on how to improve indexing performance:
This should be fairly obvious, but use bulk indexing requests for optimal performance. Bulk sizing is dependent on your data, analysis, and cluster configuration, but a good starting point is 5–15 MB per bulk. Note that this is physical size. Document count is not a good metric for bulk size. For example, if you are indexing 1,000 documents per bulk, keep the following in mind:
[Each bulk batch should be sized by total physical size, not by document count. All documents in a batch are loaded into memory, so the physical size directly affects the import rate.]
1,000 documents at 1 KB each is 1 MB.
1,000 documents at 100 KB each is 100 MB.

Those are drastically different bulk sizes. Bulks need to be loaded into memory at the coordinating node, so it is the physical size of the bulk that is more important than the document count.

Start with a bulk size around 5–15 MB and slowly increase it until you do not see performance gains anymore. Then start increasing the concurrency of your bulk ingestion (multiple threads, and so forth).
Monitor your nodes with Marvel and/or tools such as iostat, top, and ps to see when resources start to bottleneck. If you start to receive EsRejectedExecutionException, your cluster can no longer keep up: at least one resource has reached capacity. Either reduce concurrency, provide more of the limited resource (such as switching from spinning disks to SSDs), or add more nodes.
[Start with a physical bulk size of 5–15 MB and keep increasing the batch size until performance stops improving; then increase the thread count until throughput peaks.]
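The size-by-bytes rule from the quoted docs can be sketched like this — a hypothetical `bulk_batches` helper (not part of any client library) that groups documents into bulk-API bodies capped by physical size rather than document count:

```python
import json

def bulk_batches(docs, index, max_bytes=10 * 1024 * 1024):
    """Group docs into newline-delimited bulk bodies of at most ~max_bytes.

    `docs` is an iterable of JSON-serializable dicts; each document
    contributes an action line plus a source line, bulk-API style.
    """
    action = json.dumps({"index": {"_index": index}})
    lines, size = [], 0
    for doc in docs:
        entry = action + "\n" + json.dumps(doc) + "\n"
        if lines and size + len(entry) > max_bytes:
            yield "".join(lines)
            lines, size = [], 0
        lines.append(entry)
        size += len(entry)
    if lines:
        yield "".join(lines)

# 1,000 docs of ~1 KB each fit comfortably in one ~1 MB batch:
small = [{"body": "x" * 1024} for _ in range(1000)]
print(sum(1 for _ in bulk_batches(small, "test")))  # prints 1
```

With 100 KB documents the same count would produce many batches — which is exactly why document count is a poor sizing metric.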
 
When ingesting data, make sure bulk requests are round-robined across all your data nodes. Do not send all requests to a single node, since that single node will need to store all the bulks in memory while processing.
[The official import advice is also "divide and conquer".]
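The round-robin idea from the quoted docs can be sketched as a simple host selector — the host addresses below are hypothetical, and real clients (e.g. elasticsearch-py given a list of hosts) do this selection internally:

```python
from itertools import cycle

# Sketch: rotate bulk requests across all data nodes so no single
# coordinating node has to hold every in-flight bulk in memory.

class RoundRobinHosts:
    def __init__(self, hosts):
        self._cycle = cycle(hosts)

    def next_host(self):
        """Return the next data node to receive a bulk request."""
        return next(self._cycle)

hosts = RoundRobinHosts(["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"])
targets = [hosts.next_host() for _ in range(6)]  # cycles through all three hosts twice
```

In practice you rarely need to write this yourself: passing all data-node addresses to the client library achieves the same spreading of load.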
 
For further optimization advice on storage, sharding, and so on, see:
https://www.elastic.co/guide/e ... .html
 

sinomall - writes code, shoots three-pointers

Upvoted by:

241 is the search server (a single machine, no data replication); 246 is the server running the import script. Both machines have 500 GB of disk and 32 cores. With 10 concurrent threads, each pulling 2 million records per import, a problem appears: the traffic that search server 241 sends to 246 is far larger than the import traffic itself, so the NIC gets saturated. The import rate caps at about 33,000 docs/s, and adding more threads does not raise it.
What could be causing this?
NIC traffic was checked with iftop -N -n -i eth2, as shown in the screenshots below:
[Screenshots: WechatIMG1.jpeg, WechatIMG2.jpeg — iftop traffic captures]
 

sinomall - writes code, shoots three-pointers

Upvoted by:

Script sample:

[Screenshot: WechatIMG3.jpeg — elasticsearch-jdbc script]

 
