Pinyin

How can partial pinyin prefix matching, e.g. "sous", be implemented?

Elasticsearch • rochy replied to the question • 2 followers • 1 reply • 2627 views • 2019-01-09 19:23 • from related topics

Does the elasticsearch-analysis-pinyin plugin have


Elasticsearch • weizhuang asked the question • 2 followers • 0 replies • 1854 views • 2018-06-19 12:05 • from related topics

For pinyin search, what is a reasonable design for a case like ours?

Elasticsearch • laoyang360 replied to the question • 5 followers • 1 reply • 2425 views • 2018-06-18 23:08 • from related topics

elasticsearch-analysis-pinyin updated for es 2.4.1 and 5.0.0-rc1

Elasticsearch • medcl published an article • 3 comments • 4422 views • 2016-10-13 21:49 • from related topics

The plugin now supports the latest es v2.4.1 and es v5.0.0-rc1. This release adds several new features and configuration options, including proper pinyin segmentation, which is more accurate than the previous approach of combining the tokenizer with ngram. For example: liudehuaalibaba13zhuanghan -> liu,de,hua,a,li,ba,ba,13,zhuang,han. For configuration details, see the documentation: https://github.com/medcl/elast ... inyin   Download: https://github.com/medcl/elast ... eases   Testing is welcome:
curl -XPUT http://localhost:9200/medcl/ -d'
{
    "index" : {
        "analysis" : {
            "analyzer" : {
                "pinyin_analyzer" : {
                    "tokenizer" : "my_pinyin"
                }
            },
            "tokenizer" : {
                "my_pinyin" : {
                    "type" : "pinyin",
                    "keep_separate_first_letter" : false,
                    "keep_full_pinyin" : true,
                    "keep_original" : false,
                    "limit_first_letter_length" : 16,
                    "lowercase" : true
                }
            }
        }
    }
}'

curl http://localhost:9200/medcl/_a ... lyzer
{
  "tokens" : [ {
    "token" : "liu",
    "start_offset" : 0,
    "end_offset" : 1,
    "type" : "word",
    "position" : 0
  }, {
    "token" : "de",
    "start_offset" : 1,
    "end_offset" : 2,
    "type" : "word",
    "position" : 1
  }, {
    "token" : "hua",
    "start_offset" : 2,
    "end_offset" : 3,
    "type" : "word",
    "position" : 2
  }, {
    "token" : "a",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 3
  }, {
    "token" : "b",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 4
  }, {
    "token" : "c",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 5
  }, {
    "token" : "d",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 6
  }, {
    "token" : "liu",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 7
  }, {
    "token" : "de",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 8
  }, {
    "token" : "hua",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 9
  }, {
    "token" : "wo",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 10
  }, {
    "token" : "bu",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 11
  }, {
    "token" : "zhi",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 12
  }, {
    "token" : "dao",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 13
  }, {
    "token" : "shi",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 14
  }, {
    "token" : "shui",
    "start_offset" : 2,
    "end_offset" : 31,
    "type" : "word",
    "position" : 15
  }, {
    "token" : "ldhabcdliudehuaw",
    "start_offset" : 0,
    "end_offset" : 16,
    "type" : "word",
    "position" : 16
  } ]
}
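 
The output above shows tokenization only; to actually search by pinyin, the analyzer still has to be wired into a field mapping. Below is a minimal sketch against the `medcl` index created above, using es 2.x syntax (`string` fields; es 5.x would use `text` instead). The type name `folks` and the field name `name` are illustrative, not from the article:

```shell
# Hypothetical mapping: index "name" as-is, plus a sub-field analyzed with pinyin_analyzer.
curl -XPOST http://localhost:9200/medcl/folks/_mapping -d'
{
    "folks": {
        "properties": {
            "name": {
                "type": "string",
                "fields": {
                    "pinyin": {
                        "type": "string",
                        "analyzer": "pinyin_analyzer"
                    }
                }
            }
        }
    }
}'

# Index a document, then match it by pinyin on the sub-field.
curl -XPOST http://localhost:9200/medcl/folks/1 -d'{"name": "刘德华"}'
curl -XPOST http://localhost:9200/medcl/folks/_search -d'
{
    "query": { "match": { "name.pinyin": "liudehua" } }
}'
```

Because a `match` query runs the query string through the same analyzer as the field, `liudehua` is segmented into liu,de,hua on the query side as well, so it matches the tokens indexed for 刘德华.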
 
