
Seeking advice: how to solve a SocketTimeout problem

Elasticsearch | Author: yeziblo | Posted on 2019-08-27 | Views: 3672

Hello everyone, here is my situation:
I need to iterate over all the documents in several ES indices and process each one.
My approach is to use the scroll API to page through the data.
However, during scrolling, SocketTimeoutException and Connection refused errors occasionally show up.

Here is my code:
SearchRequest request = new SearchRequest(indices);
SearchSourceBuilder builder = new SearchSourceBuilder();
builder.query(QueryBuilders.rangeQuery("CAPTURETIME").gte(starttime).lte(endtime))
       // only fetch the fields that are actually needed
       .fetchSource(new String[]{"DEVICENUM", "FIRM_CODE", "FIRMCODE_NUM", "BUSINESS_NUM", "DATA_TYPE",
                                 "CLIENTMAC", "SITECODENEW", "AUTH_TYPE", "USERNAME", "CAPTURETIME", "INDEX"},
                    Strings.EMPTY_ARRAY)
       .size(3000)
       .timeout(TimeValue.timeValueSeconds(120));
request.source(builder).scroll(TimeValue.timeValueSeconds(120));
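
For context, the surrounding scroll loop looks roughly like the sketch below. This is a minimal reconstruction, not the actual production code: client is assumed to be the RestHighLevelClient, and the per-hit processing is omitted.

import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.search.SearchHit;

// Initial search opens the scroll context declared on the request above.
SearchResponse response = client.search(request, RequestOptions.DEFAULT);
String scrollId = response.getScrollId();
SearchHit[] hits = response.getHits().getHits();

while (hits != null && hits.length > 0) {
    for (SearchHit hit : hits) {
        // process hit.getSourceAsMap() here
    }
    // Fetch the next page, renewing the scroll keep-alive each time.
    SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(TimeValue.timeValueSeconds(120));
    response = client.scroll(scrollRequest, RequestOptions.DEFAULT);
    scrollId = response.getScrollId();
    hits = response.getHits().getHits();
}

// Release the scroll context when the iteration is done.
ClearScrollRequest clearScroll = new ClearScrollRequest();
clearScroll.addScrollId(scrollId);
client.clearScroll(clearScroll, RequestOptions.DEFAULT);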

Here is the error message:
java.net.SocketTimeoutException
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:944)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696)
at org.elasticsearch.client.RestHighLevelClient.scroll(RestHighLevelClient.java:1255)
at com.bh.d406.bigdata.dataetl.statis_db.statis_delta_thread.prepareStatis(statis_delta_thread.java:116)
at com.bh.d406.bigdata.dataetl.statis_db.statis_delta_thread.run(statis_delta_thread.java:68)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:375)
at org.apache.http.impl.nio.client.InternalRequestExecutor.timeout(InternalRequestExecutor.java:116)
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92)
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175)
at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:263)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:492)
at org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:213)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
... 1 more
Suppressed: java.net.SocketTimeoutException
... 12 more


java.net.ConnectException: Connection refused
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:959)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696)
at org.elasticsearch.client.RestHighLevelClient.scroll(RestHighLevelClient.java:1255)
at com.bh.d406.bigdata.dataetl.statis_db.statis_delta_thread.prepareStatis_IMSIIMEI(statis_delta_thread.java:171)
at com.bh.d406.bigdata.dataetl.statis_db.statis_delta_thread.run(statis_delta_thread.java:69)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:171)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:145)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
... 1 more

At first I assumed the query itself was timing out... so I extended the timeout to 120s and reduced the batch size to 3000 documents per fetch... but the problem still occurs occasionally orz
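
One thing worth noting (a suggestion, not part of the original question): the timeout(...) on SearchSourceBuilder and the scroll keep-alive only bound server-side query execution and the scroll context lifetime. The SocketTimeoutException in the trace is raised by the Apache HTTP client underneath the RestClient, whose socket read timeout defaults to 30 seconds and is set when the client is built. A sketch of raising it, with the host and port as placeholders:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

RestHighLevelClient client = new RestHighLevelClient(
    RestClient.builder(new HttpHost("es-host", 9200, "http"))
        .setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
            .setConnectTimeout(5_000)       // time allowed to establish the TCP connection, in ms
            .setSocketTimeout(120_000)));   // time allowed to wait for a response on the socket, in ms

The Connection refused errors are likely a separate issue: they mean the target node was not accepting connections at that moment (e.g. restarting or otherwise unavailable), rather than a slow query.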

Could anyone share some ideas on how to solve this?