How can Elasticsearch monitor TCP reads/writes and obtain the client IP?
Elasticsearch • Cheetah replied • 2 followers • 1 reply • 2088 views • 2017-09-12 09:04
Questions about the query_string query
Elasticsearch • kennywu76 replied • 4 followers • 4 replies • 18677 views • 2017-09-12 10:51
Kibana performance problems caused by ES 5.4+
Elasticsearch • kennywu76 published an article • 8 comments • 6240 views • 2017-09-11 18:22
[Wu Xiaogang, Ctrip]
Last week a user posted a case of Kibana read timeouts in the community: [question#2319](https://elasticsearch.cn/question/2319). I found some time over the weekend to investigate, and discovered that certain newer ES versions, when paired with Kibana, produce unexpectedly slow queries. Since the problem is fairly common, I am summarizing the root cause and the fix here, so that users on the affected versions can avoid the pitfall.
The symptoms are described in the question linked above. In short: on a cluster with fairly high-end hardware, indexing roughly 2 billion documents per day into an index, viewing the data through Kibana's Discover panel kept timing out. Even with the time range narrowed to the last half hour, it still timed out, which was odd.
Over the weekend I got a test account from the user and logged in to inspect the cluster. Nothing looked wrong in the hardware specs or in the cluster and index settings, yet the Discover panel indeed displayed no data. Cluster monitoring showed no other users running queries, and both CPU utilization and load were low. So the preliminary conclusion was that the query itself was slow.
To diagnose a slow query, my usual approach is to copy the query behind the panel, run it by hand in the Kibana Dev Console with a `"profile": true` option added, and look at how the query is parsed and executed. The query in question looks like this:

```json
{
  "profile": true,
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "analyze_wildcard": true,
            "query": "*"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": "now-1h",
              "lte": "now",
              "format": "epoch_millis"
            }
          }
        }
      ]
    }
  }
}
```
Because the user typed nothing into the query box, Kibana filled in the default query string `*`, then added a range query for the selected time window. The profile output surprised me a little: the `*` inside the query_string was parsed into a very complex `DisjunctionMaxQuery`, which is where most of the query time went:
```json
{
  "type": "DisjunctionMaxQuery",
  "description": "(ConstantScore(_field_names:remote_addr.keyword) | ConstantScore(_field_names:geoip.country_isocode) | ConstantScore(_field_names:geoip.country_name.keyword) | ConstantScore(_field_names:via) | ConstantScore(_field_names:domain.keyword) | ConstantScore(_field_names:request_method.keyword) | ConstantScore(_field_names:protocol) | ConstantScore(_field_names:xff.keyword) | ConstantScore(_field_names:host) | ConstantScore(_field_names:geoip.city_name.keyword) | ConstantScore(_field_names:client_ip) | ConstantScore(_field_names:host.keyword) | ConstantScore(_field_names:geoip.longitude) | ConstantScore(_field_names:geoip.subdivision_name.keyword) | ConstantScore(_field_names:geoip.country_code) | ConstantScore(_field_names:upstream_addr.keyword) | ConstantScore(_field_names:@version.keyword) | ConstantScore(_field_names:request_uri) | ConstantScore(_field_names:tags) | ConstantScore(_field_names:idc_tag) | ConstantScore(_field_names:size) | ConstantScore(_field_names:http_referer) | ConstantScore(_field_names:message.keyword) | ConstantScore(_field_names:domain) | ConstantScore(_field_names:geoip.latitude) | ConstantScore(_field_names:xff) | ConstantScore(_field_names:protocol.keyword) | ConstantScore(_field_names:geoip.country_code.keyword) | ConstantScore(_field_names:status) | ConstantScore(_field_names:upstream_addr) | ConstantScore(_field_names:http_referer.keyword) | ConstantScore(_field_names:tags.keyword) | ConstantScore(_field_names:client_ip.keyword) | ConstantScore(_field_names:request_method) | ConstantScore(_field_names:upstream_status) | ConstantScore(_field_names:request_time) | ConstantScore(_field_names:geoip.location) | ConstantScore(_field_names:@version) | ConstantScore(_field_names:geoip.country_name) | ConstantScore(_field_names:user_agent) | ConstantScore(_field_names:idc_tag.keyword) | ConstantScore(_field_names:remote_addr) | ConstantScore(_field_names:geoip.country_isocode.keyword) | ConstantScore(_field_names:geoip.city_name) | ConstantScore(_field_names:via.keyword) | ConstantScore(_field_names:message) | ConstantScore(_field_names:user_agent.keyword) | ConstantScore(_field_names:request_uri.keyword) | ConstantScore(_field_names:@timestamp) | ConstantScore(_field_names:upstream_response_time) | ConstantScore(_field_names:geoip.subdivision_name))",
  "time": "5535.127008ms",
  "time_in_nanos": 5535127008
}
```
In other words, ES parsed a query_string query containing nothing but `*` into a `field:*` query against every field it could find in the mapping, then merged all the results. Understandably, on a large index with many fields this query is very expensive. My understanding of `*` had been that it should simply be rewritten into a `match_all` query, which costs almost nothing.
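For reference, this is the shape of query I expected a bare `*` to be rewritten into: a plain `match_all`. A minimal Dev Console sketch; the index pattern `logstash-*` here is just a placeholder:

```json
GET logstash-*/_search
{
  "query": {
    "match_all": {}
  }
}
```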
Why does this happen? I checked the official ES documentation for the Query String Query, where the default_field and all_fields options turn out to be the key:
[elasticsearch/reference/5.5/query-dsl-query-string-query.html](https://www.elastic.co/guide/e ... y.html)
`default_field`
The default field for query terms if no prefix field is specified. Defaults to the `index.query.default_field` index settings, which in turn defaults to `_all`.
`all_fields`
Perform the query on all fields detected in the mapping that can be queried. Will be used by default when the `_all` field is disabled and no `default_field` is specified (either in the index settings or in the request body) and no `fields` are specified.
Per the docs, a query can carry a `default_field` option, which defaults to the index-level setting `index.query.default_field`; if that setting is absent, the default is `_all`. But users indexing logs usually disable the `_all` field to save disk space and improve indexing throughput. So what is `default_field` in that case? The answer is `all_fields`: ES converts the query into a query against every field.
To verify that this was the problem, I added a `default_field` setting to the index, picking a field more or less at random. Sure enough, that fixed it: the Discover panel rendered roughly 10x faster.
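A minimal sketch of that workaround, assuming the setting can be applied through the dynamic index-settings API; the index name `my-index` and the field `message` are placeholders for whatever you pick:

```json
PUT my-index/_settings
{
  "index.query.default_field": "message"
}
```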
On reflection, though, this merely works around the problem. The root question remains: why is `*` not rewritten into `match_all`?
Then I remembered that our own production clusters did not seem to have this problem, so I tested on one of them: `*` was indeed parsed into `match_all`, as expected. Comparing ES versions, our correctly behaving cluster runs `5.3.2`, while the user's cluster runs `5.5.0`.
Next I wanted to find out what changed between these versions, at the source level, in how ES parses the query string. Some digging turned up the relevant change:
It turns out that in [pull/23433](https://github.com/elastic/elasticsearch/pull/23433), in order to fix an ambiguity in how `foo:*` is parsed, a query string with an empty field (such as a lone `*`) is no longer parsed into `match_all`; instead it is expanded into a `DisjunctionMaxQuery` over all fields. As a result, Kibana's default `*` triggers a very serious performance problem.
This issue affects the 5.4 and 5.5 minor version lines of ES/Kibana.
Following the links in that issue, I found the corresponding Kibana discussion, [issues#12097](https://github.com/elastic/kibana/issues/12097), and the fix, [pull/13047](https://github.com/elastic/kibana/pull/13047): the fixed versions send a `match_all` query by default.
The fix ships in `5.5.2` and `5.6.0`, so ELK users running anything from `5.4.0` through `5.5.1` should definitely schedule an upgrade!
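If you are not sure which version a cluster runs, the root endpoint reports it. A quick Dev Console check; the response includes `version.number`:

```json
GET /
```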
Can Search Guard be used in production?
Elasticsearch • JustRun asked • 1 follower • 0 replies • 4034 views • 2017-09-11 18:08
java.lang.OutOfMemoryError: Java heap space
Elasticsearch • kennywu76 replied • 5 followers • 4 replies • 7210 views • 2018-05-23 13:41
Does bulk update with duplicate document IDs degrade update performance?
Elasticsearch • 白衬衣 replied • 16 followers • 10 replies • 12740 views • 2017-09-14 09:32
Elasticsearch search
Elasticsearch • laoyang360 replied • 3 followers • 2 replies • 2052 views • 2017-09-11 18:21
Which ES version is everyone using these days?
Elasticsearch • laoyang360 replied • 3 followers • 2 replies • 7870 views • 2017-09-11 18:23
elastic-spark ClassNotFound EsSpark
Elasticsearch • easesstone published an article • 4 comments • 2670 views • 2017-09-11 16:07
```
java.lang.ClassNotFoundException: org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:66)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:71)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:97)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
    at org.apache.spark.scheduler.Task.run(Task.scala:90)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:253)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```
I wrote an elastic-spark demo, as follows:

```scala
package com.sydney.dream.elasticspark

// Importing org.elasticsearch.spark._ is required: it adds the implicit
// saveToEs method to every RDD.
import org.elasticsearch.spark._
import org.apache.spark.{SparkConf, SparkContext}

object ElasticSparkFirstDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("ElasticSparkFirstDemo")
      .set("es.nodes", "172.18.18.114")
      .set("es.port", "9200")
      .set("es.index.auto.create", "true")
    val sc = new SparkContext(conf)
    val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
    val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
    sc.makeRDD(Seq(numbers, airports)).saveToEs("spark/docs")
  }
}
```
The pom file is as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>spark</artifactId>
<groupId>com.sydney.dream</groupId>
<version>1.0.0</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>com.sydney.dream</groupId>
<artifactId>ElasticSpark</artifactId>
<dependencies>
<!--<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-hadoop</artifactId>
<version>5.5.0</version>
</dependency>-->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-spark-20_2.10</artifactId>
<version>5.5.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>2.2.0</version>
</dependency>
<!--<dependency>
<groupId> org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.1</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>-->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.6</version>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<classpathPrefix>lib/</classpathPrefix>
<mainClass>com.sydney.dream.elasticspark.ElasticSparkFirstDemo</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.10</version>
<executions>
<execution>
<id>copy-dependencies</id>
<phase>package</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/lib</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<version>2.15.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<!--
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.1</version>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer" />
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
</configuration>
</execution>
</executions>
</plugin>-->
</plugins>
</build>
</project>
```
Submitted with spark-submit:

```bash
spark-submit --class com.sydney.dream.elasticspark.ElasticSparkFirstDemo --master yarn --deploy-mode client --executor-memory 5G --num-executors 10 --jars /home/ldl/sparkdemo/ElasticSpark-1.0.0.jar /home/ldl/sparkdemo/lib/activation-1.1.1.jar /home/ldl/sparkdemo/lib/antlr4-runtime-4.5.3.jar /home/ldl/sparkdemo/lib/aopalliance-repackaged-2.4.0-b34.jar /home/ldl/sparkdemo/lib/apacheds-i18n-2.0.0-M15.jar /home/ldl/sparkdemo/lib/apacheds-kerberos-codec-2.0.0-M15.jar /home/ldl/sparkdemo/lib/api-asn1-api-1.0.0-M20.jar /home/ldl/sparkdemo/lib/api-util-1.0.0-M20.jar /home/ldl/sparkdemo/lib/avro-1.7.7.jar /home/ldl/sparkdemo/lib/avro-ipc-1.7.7.jar /home/ldl/sparkdemo/lib/avro-ipc-1.7.7-tests.jar /home/ldl/sparkdemo/lib/base64-2.3.8.jar /home/ldl/sparkdemo/lib/bcprov-jdk15on-1.51.jar /home/ldl/sparkdemo/lib/chill_2.10-0.8.0.jar /home/ldl/sparkdemo/lib/chill-java-0.8.0.jar /home/ldl/sparkdemo/lib/commons-beanutils-1.7.0.jar /home/ldl/sparkdemo/lib/commons-beanutils-core-1.8.0.jar /home/ldl/sparkdemo/lib/commons-cli-1.2.jar /home/ldl/sparkdemo/lib/commons-codec-1.8.jar /home/ldl/sparkdemo/lib/commons-collections-3.2.2.jar /home/ldl/sparkdemo/lib/commons-compiler-3.0.0.jar /home/ldl/sparkdemo/lib/commons-compress-1.4.1.jar /home/ldl/sparkdemo/lib/commons-configuration-1.6.jar /home/ldl/sparkdemo/lib/commons-crypto-1.0.0.jar /home/ldl/sparkdemo/lib/commons-digester-1.8.jar /home/ldl/sparkdemo/lib/commons-httpclient-3.1.jar /home/ldl/sparkdemo/lib/commons-io-2.4.jar /home/ldl/sparkdemo/lib/commons-lang-2.6.jar /home/ldl/sparkdemo/lib/commons-lang3-3.5.jar /home/ldl/sparkdemo/lib/commons-math3-3.4.1.jar /home/ldl/sparkdemo/lib/commons-net-2.2.jar /home/ldl/sparkdemo/lib/compress-lzf-1.0.3.jar /home/ldl/sparkdemo/lib/curator-client-2.6.0.jar /home/ldl/sparkdemo/lib/curator-framework-2.6.0.jar /home/ldl/sparkdemo/lib/curator-recipes-2.6.0.jar /home/ldl/sparkdemo/lib/gson-2.2.4.jar /home/ldl/sparkdemo/lib/guava-16.0.1.jar /home/ldl/sparkdemo/lib/hk2-api-2.4.0-b34.jar /home/ldl/sparkdemo/lib/hk2-locator-2.4.0-b34.jar /home/ldl/sparkdemo/lib/hk2-utils-2.4.0-b34.jar /home/ldl/sparkdemo/lib/htrace-core-3.0.4.jar /home/ldl/sparkdemo/lib/httpclient-4.3.6.jar /home/ldl/sparkdemo/lib/httpcore-4.3.3.jar /home/ldl/sparkdemo/lib/ivy-2.4.0.jar /home/ldl/sparkdemo/lib/jackson-annotations-2.6.5.jar /home/ldl/sparkdemo/lib/jackson-core-2.6.5.jar /home/ldl/sparkdemo/lib/jackson-core-asl-1.9.13.jar /home/ldl/sparkdemo/lib/jackson-databind-2.6.5.jar /home/ldl/sparkdemo/lib/jackson-jaxrs-1.9.13.jar /home/ldl/sparkdemo/lib/jackson-mapper-asl-1.9.13.jar /home/ldl/sparkdemo/lib/jackson-module-paranamer-2.6.5.jar /home/ldl/sparkdemo/lib/jackson-xc-1.9.13.jar /home/ldl/sparkdemo/lib/janino-3.0.0.jar /home/ldl/sparkdemo/lib/javassist-3.18.1-GA.jar /home/ldl/sparkdemo/lib/javax.annotation-api-1.2.jar /home/ldl/sparkdemo/lib/javax.inject-2.4.0-b34.jar /home/ldl/sparkdemo/lib/java-xmlbuilder-1.0.jar /home/ldl/sparkdemo/lib/javax.servlet-api-3.1.0.jar /home/ldl/sparkdemo/lib/javax.ws.rs-api-2.0.1.jar /home/ldl/sparkdemo/lib/jaxb-api-2.2.2.jar /home/ldl/sparkdemo/lib/jcl-over-slf4j-1.7.16.jar /home/ldl/sparkdemo/lib/jersey-client-2.22.2.jar /home/ldl/sparkdemo/lib/jersey-common-2.22.2.jar /home/ldl/sparkdemo/lib/jersey-container-servlet-2.22.2.jar /home/ldl/sparkdemo/lib/jersey-container-servlet-core-2.22.2.jar /home/ldl/sparkdemo/lib/jersey-guava-2.22.2.jar /home/ldl/sparkdemo/lib/jersey-media-jaxb-2.22.2.jar /home/ldl/sparkdemo/lib/jersey-server-2.22.2.jar /home/ldl/sparkdemo/lib/jets3t-0.9.3.jar
/home/ldl/sparkdemo/lib/jetty-util-6.1.26.jar /home/ldl/sparkdemo/lib/json4s-ast_2.10-3.2.11.jar /home/ldl/sparkdemo/lib/json4s-core_2.10-3.2.11.jar /home/ldl/sparkdemo/lib/json4s-jackson_2.10-3.2.11.jar /home/ldl/sparkdemo/lib/jsr305-1.3.9.jar /home/ldl/sparkdemo/lib/jul-to-slf4j-1.7.16.jar /home/ldl/sparkdemo/lib/kryo-shaded-3.0.3.jar /home/ldl/sparkdemo/lib/leveldbjni-all-1.8.jar /home/ldl/sparkdemo/lib/log4j-1.2.17.jar /home/ldl/sparkdemo/lib/lz4-1.3.0.jar /home/ldl/sparkdemo/lib/mail-1.4.7.jar /home/ldl/sparkdemo/lib/metrics-core-3.1.2.jar /home/ldl/sparkdemo/lib/metrics-graphite-3.1.2.jar /home/ldl/sparkdemo/lib/metrics-json-3.1.2.jar /home/ldl/sparkdemo/lib/metrics-jvm-3.1.2.jar /home/ldl/sparkdemo/lib/minlog-1.3.0.jar /home/ldl/sparkdemo/lib/mx4j-3.0.2.jar /home/ldl/sparkdemo/lib/netty-3.9.9.Final.jar /home/ldl/sparkdemo/lib/netty-all-4.0.43.Final.jar /home/ldl/sparkdemo/lib/objenesis-2.1.jar /home/ldl/sparkdemo/lib/oro-2.0.8.jar /home/ldl/sparkdemo/lib/osgi-resource-locator-1.0.1.jar /home/ldl/sparkdemo/lib/paranamer-2.3.jar /home/ldl/sparkdemo/lib/parquet-column-1.8.1.jar /home/ldl/sparkdemo/lib/parquet-common-1.8.1.jar /home/ldl/sparkdemo/lib/parquet-encoding-1.8.1.jar /home/ldl/sparkdemo/lib/parquet-format-2.3.0-incubating.jar /home/ldl/sparkdemo/lib/parquet-jackson-1.8.1.jar /home/ldl/sparkdemo/lib/protobuf-java-2.5.0.jar /home/ldl/sparkdemo/lib/py4j-0.10.4.jar /home/ldl/sparkdemo/lib/pyrolite-4.13.jar /home/ldl/sparkdemo/lib/RoaringBitmap-0.5.11.jar /home/ldl/sparkdemo/lib/slf4j-api-1.7.16.jar /home/ldl/sparkdemo/lib/slf4j-log4j12-1.7.16.jar /home/ldl/sparkdemo/lib/snappy-java-1.1.2.6.jar /home/ldl/sparkdemo/lib/stax-api-1.0-2.jar /home/ldl/sparkdemo/lib/stream-2.7.0.jar /home/ldl/sparkdemo/lib/univocity-parsers-2.2.1.jar /home/ldl/sparkdemo/lib/unused-1.0.0.jar /home/ldl/sparkdemo/lib/validation-api-1.1.0.Final.jar /home/ldl/sparkdemo/lib/xbean-asm5-shaded-4.4.jar /home/ldl/sparkdemo/lib/xercesImpl-2.9.1.jar /home/ldl/sparkdemo/lib/xml-apis-1.3.04.jar /home/ldl/sparkdemo/lib/xmlenc-0.52.jar /home/ldl/sparkdemo/lib/xz-1.0.jar /home/ldl/sparkdemo/lib/zookeeper-3.4.6.jar
```
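One likely cause is visible in the command itself: spark-submit's `--jars` flag expects a comma-separated list but consumes only the single token that follows it (here the application jar), the next path becomes the application jar, and every remaining jar is passed as an argument to main() rather than being put on the executor classpath. Note also that the elasticsearch-spark connector jar never appears in the list at all. Below is a hedged sketch of a corrected submission; the connector jar path is an assumption derived from the pom's elasticsearch-spark-20_2.10:5.5.0 dependency, not from the post:

```bash
# --jars takes a COMMA-separated list of jars to ship to the driver and
# executor classpaths; the application jar comes last, as the positional
# argument. Shipping the ES-Spark connector is what makes
# org.elasticsearch.spark.rdd.EsSpark resolvable on the executors.
spark-submit \
  --class com.sydney.dream.elasticspark.ElasticSparkFirstDemo \
  --master yarn \
  --deploy-mode client \
  --executor-memory 5G \
  --num-executors 10 \
  --jars /home/ldl/sparkdemo/lib/elasticsearch-spark-20_2.10-5.5.0.jar \
  /home/ldl/sparkdemo/ElasticSpark-1.0.0.jar
```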
ES wildcard cannot match the special character "|"?
Elasticsearch • laoyang360 replied • 2 followers • 2 replies • 11139 views • 2017-09-11 17:33
Migrating data with ElasticDump fails with "trying to auto create mapping, but dynamic mapping is disabled"
Elasticsearch • laoyang360 replied • 3 followers • 2 replies • 4465 views • 2017-09-11 18:24
Feeding IBM HTTPServer ACC logs into Logstash
Logstash • Anonymous user asked • 1 follower • 0 replies • 3050 views • 2017-09-11 12:53
Can ES use G1 for garbage collection now?
Elasticsearch • davinciyxw replied • 5 followers • 3 replies • 6832 views • 2017-09-11 15:35
error=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Logstash • taowenrui replied • 2 followers • 7 replies • 20181 views • 2017-09-11 15:47
ES join queries across vertical (key-value) tables
Elasticsearch • Cheetah replied • 2 followers • 1 reply • 6184 views • 2017-09-11 11:18