
failed to find analyzer type [mmseg_maxword] or tokenizer

Elasticsearch | Author: lilongsy | Published: March 30, 2016 | Views: 9364

On Elasticsearch 2.2.0 and 2.2.1, after building and installing the mmseg and ik plugins manually with Maven and configuring the tokenizers and analyzers in elasticsearch.yml, startup fails with the following errors:

failed to find analyzer type [mmseg_maxword] or tokenizer
Unknown Analyzer type [ik] for [ik]

 
With the mmseg and ik plugins bundled in the elasticsearch-rtf distribution, this problem does not occur.
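(One way to isolate the failure, as a minimal sketch assuming only the mmseg plugin is installed: strip elasticsearch.yml down to a single tokenizer/analyzer pair. If even this fails at startup with the same error, the plugin itself is not being loaded, independent of the larger config below.)

index:
  analysis:
    tokenizer:
      mmseg_maxword:
        type: mmseg
        seg_type: max_word
    analyzer:
      mmseg_maxword:
        type: custom
        tokenizer: mmseg_maxword
        filter:
          - lowercase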
 
Configuration file:
index:
  analysis:
    tokenizer:
      my_pinyin:
        type: pinyin
        first_letter: prefix
        padding_char: ''
      pinyin_first_letter:
        type: pinyin
        first_letter: only
      mmseg_maxword:
        type: mmseg
        seg_type: max_word
      mmseg_complex:
        type: mmseg
        seg_type: complex
      mmseg_simple:
        type: mmseg
        seg_type: simple
      semicolon_spliter:
        type: pattern
        pattern: ";"
      pct_spliter:
        type: pattern
        pattern: "[%]+"
      ngram_1_to_2:
        type: nGram
        min_gram: 1
        max_gram: 2
      ngram_1_to_3:
        type: nGram
        min_gram: 1
        max_gram: 3
    filter:
      ngram_min_3:
        max_gram: 10
        min_gram: 3
        type: nGram
      ngram_min_2:
        max_gram: 10
        min_gram: 2
        type: nGram
      ngram_min_1:
        max_gram: 10
        min_gram: 1
        type: nGram
      min2_length:
        min: 2
        max: 4
        type: length
      min3_length:
        min: 3
        max: 4
        type: length
      pinyin_first_letter:
        type: pinyin
        first_letter: only
    analyzer:
      lowercase_keyword:
        type: custom
        filter:
          - lowercase
        tokenizer: standard
      lowercase_keyword_ngram_min_size1:
        type: custom
        filter:
          - lowercase
          - stop
          - trim
          - unique
        tokenizer: nGram
      lowercase_keyword_ngram_min_size2:
        type: custom
        filter:
          - lowercase
          - min2_length
          - stop
          - trim
          - unique
        tokenizer: nGram
      lowercase_keyword_ngram_min_size3:
        type: custom
        filter:
          - lowercase
          - min3_length
          - stop
          - trim
          - unique
        tokenizer: ngram_1_to_3
      lowercase_keyword_ngram:
        type: custom
        filter:
          - lowercase
          - stop
          - trim
          - unique
        tokenizer: ngram_1_to_3
      lowercase_keyword_without_standard:
        type: custom
        filter:
          - lowercase
        tokenizer: keyword
      lowercase_whitespace:
        type: custom
        filter:
          - lowercase
        tokenizer: whitespace
      ik:
        alias:
          - ik_analyzer
        type: ik
      ik_max_word:
        type: ik
        use_smart: false
      ik_smart:
        type: ik
        use_smart: true
      mmseg:
        alias:
          - mmseg_analyzer
        type: mmseg
      mmseg_maxword:
        type: custom
        filter:
          - lowercase
        tokenizer: mmseg_maxword
      mmseg_complex:
        type: custom
        filter:
          - lowercase
        tokenizer: mmseg_complex
      mmseg_simple:
        type: custom
        filter:
          - lowercase
        tokenizer: mmseg_simple
      comma_spliter:
        type: pattern
        pattern: "[,|\\s]+"
      pct_spliter:
        type: pattern
        pattern: "[%]+"
      custom_snowball_analyzer:
        type: snowball
        language: English
      simple_english_analyzer:
        type: custom
        tokenizer: whitespace
        filter:
          - standard
          - lowercase
          - snowball
      edge_ngram:
        type: custom
        tokenizer: edgeNGram
        filter:
          - lowercase
      pinyin_ngram_analyzer:
        type: custom
        tokenizer: my_pinyin
        filter:
          - lowercase
          - nGram
          - trim
          - unique
      pinyin_first_letter_analyzer:
        type: custom
        tokenizer: pinyin_first_letter
        filter:
          - standard
          - lowercase
      pinyin_first_letter_keyword_analyzer:
        alias:
          - pinyin_first_letter_analyzer_keyword
        type: custom
        tokenizer: keyword
        filter:
          - pinyin_first_letter
          - lowercase
      path_analyzer: # used to tokenize paths like /something/something/else
        type: custom
        tokenizer: path_hierarchy

#index.analysis.analyzer.default.type: mmseg
index.analysis.analyzer.default.type: keyword

# rtf.filter.redis.host: 127.0.0.1
# rtf.filter.redis.port: 6379

 
Could anyone help me figure this out? Thanks!
 

medcl:


Please post your complete configuration so we can take a look.

medcl:


Which version are you running? Try the latest release.

myhnuhai:


The simplest fix: check whether the word index in your config has whitespace in front of it, and delete it if so. In the yml file, keep all top-level parameters aligned consistently, with no stray leading spaces. Give that a try; I ran into exactly this problem at first, and after careful checking that turned out to be the cause. Hope it helps.
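To illustrate the indentation point (a hypothetical minimal snippet, not taken from the file above): YAML uses leading spaces to define nesting, so a single stray space before a top-level key shifts the whole tree and breaks parsing.

# Broken: a stray leading space before "index"
 index:
   analysis:
     ...

# Correct: top-level keys start in column 0, children indented consistently
index:
  analysis:
    ...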

zplzpl:


The problem may also be the URL you are testing against: you need to prepend the index_name.
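For example (hostname and index name here are placeholders), a custom analyzer can be exercised in 2.x through the per-index _analyze endpoint:

GET http://localhost:9200/index_name/_analyze?analyzer=mmseg_maxword&text=中文分词

Calling /_analyze without an index name only resolves analyzers visible at the node level, so an analyzer that exists only in the index settings will not be found there.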
