How do I set a space character as a stop word in the stop token filter?
Elasticsearch | by chencc | published 2021-08-10 | views: 1309

I added a space character (typed in directly) to the file referenced by stopwords_path, but it has no effect: every other entry in the file is applied correctly, only the space is ignored.
PUT jieba_test
{
  "settings": {
    "analysis": {
      "filter": {
        "jieba_stop": {
          "type": "stop",
          "stopwords_path": "stopwords.txt"
        }
      },
      "analyzer": {
        "my_ana": {
          "tokenizer": "jieba_index",
          "filter": [
            "lowercase",
            "jieba_stop"
          ]
        }
      }
    }
  }
}
However, the following inline variant does work, so I would like to ask how to configure a space in the stopwords file:
PUT jieba_test02
{
  "settings": {
    "analysis": {
      "filter": {
        "jieba_stop": {
          "type": "stop",
          "stopwords": [ " ", "is", "the" ]
        }
      },
      "analyzer": {
        "my_ana": {
          "tokenizer": "jieba_index",
          "filter": [
            "lowercase",
            "jieba_stop"
          ]
        }
      }
    }
  }
}
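A likely explanation (my assumption, not confirmed in this thread): Elasticsearch reads a stopwords_path file line by line, and a line consisting only of whitespace appears to be treated as blank and discarded, whereas the inline stopwords array preserves the literal " " entry. One way to verify which tokens are actually being removed is the _analyze API, shown here against the jieba_test02 index defined above (the sample text is illustrative):

POST jieba_test02/_analyze
{
  "analyzer": "my_ana",
  "text": "this is the test"
}

If the space is being stripped as a stop word, no standalone whitespace token should appear in the returned token list. Note also that most tokenizers already split on whitespace and never emit a space token, so depending on what jieba_index emits, a space stop word may be unnecessary in the first place.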
1 reply
caster_QL
Upvoted by: