Getting Started with Elasticsearch

Elasticsearch 6.x: Inverted Index and Analysis

2018-08-16 · 小旋锋

Inverted Index

Example: build an inverted index for the following three documents after removing their stop words.


(figure: the three example documents and the inverted index built from them)

Inverted Index: Query Process

Query for documents that contain "搜索引擎" (search engine):

  1. Look up "搜索引擎" in the inverted index to get the ids of the matching documents: 1 and 3
  2. Use the forward index to fetch the full content of documents 1 and 3
  3. Return the final result (the sketch after this list shows what such a query looks like as an ES request)
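
A minimal sketch of such a query, assuming a hypothetical index article_index whose text field content holds the three example documents; the match query below is resolved by exactly the inverted-index lookup described in step 1:

GET article_index/_search
{
  "query": {
    "match": {
      "content": "搜索引擎"
    }
  }
}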

Inverted Index: Components

An inverted index has two main parts: the term dictionary and the posting list.

Term Dictionary

The term dictionary holds all of the terms that appear in the documents and maps each term to its posting list. It is usually implemented as a B+ tree; the construction of a B+ tree can be watched step by step at the B+ Tree Visualization site.

About B-trees and B+ trees

  1. Wikipedia: B-tree
  2. Wikipedia: B+ tree
  3. An illustrated guide to insertion and deletion in B-trees and B+ trees

Posting List

A posting list records which documents contain a given term, together with information such as the document id, term frequency (TF), term positions, and offsets.

In a B+ tree, internal nodes hold the index and leaf nodes hold the data. Here the term dictionary acts as the B+ tree index, and the posting lists are the data attached to its leaves.

Note:
How does the B+ tree index order Chinese terms against English ones: by Unicode code point or by pinyin?


ES stores documents as JSON. A document contains multiple fields, and each field has its own inverted index.
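
For example, indexing the document below (a minimal sketch using a hypothetical user_index; the field names are made up) indexes each field separately, so each text field gets its own inverted index:

# the text fields username and hobby each get their own inverted index; age is indexed as a numeric field
PUT user_index/doc/1
{
  "username": "whirly",
  "hobby": "coding and reading",
  "age": 22
}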

Analysis (Tokenization)

Tokenization is the process of turning text into a sequence of terms (also called tokens). It is also known as text analysis, and in ES it is called analysis.


Analyzers

An analyzer is the ES component dedicated to tokenization. It is made up of three parts:

  1. Character Filters: preprocess the raw text, for example stripping HTML markup
  2. Tokenizer: split the text into terms
  3. Token Filters: modify the terms produced by the tokenizer, for example lowercasing them or removing stop words

Analyzer invocation order

Character Filters → Tokenizer → Token Filters

Analyze API

ES provides an API for testing analysis so that tokenization results can easily be verified; the endpoint is _analyze. As the requests below show, you can test with the analyzer of a field in an existing index or with an ad-hoc combination of tokenizer and filters (a named analyzer can also be specified directly, as in the next section).

# Index a document first, so that the field-based test below has a field to refer to
POST test_index/doc
{
  "username": "whirly",
  "age":22
}

# Analyze text with the analyzer configured for a field of an existing index
POST test_index/_analyze
{
  "field": "username",
  "text": ["hello world"]
}

# Analyze text with an ad-hoc tokenizer and token filters
POST _analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": ["Hello World"]
}
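
All three analyzer stages can also be combined in a single _analyze request; a minimal sketch that runs them in their fixed order (char_filter, then tokenizer, then filter), all of which are built-in components:

POST _analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": ["<b>Hello</b> World"]
}
# html_strip removes the tags, standard splits the text into Hello and World,
# and lowercase turns them into hello and world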

Built-in Analyzers

ES ships with the following built-in analyzers:

  1. Standard Analyzer (the default)
  2. Simple Analyzer
  3. Whitespace Analyzer
  4. Stop Analyzer
  5. Keyword Analyzer
  6. Pattern Analyzer
  7. Language Analyzers (english, french, ...)
  8. Fingerprint Analyzer
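
For comparison with the stop analyzer example that follows, a minimal sketch of the default standard analyzer run on the same sentence:

POST _analyze
{
  "analyzer": "standard",
  "text": ["The 2 QUICK Brown Foxes jumped over the lazy dog's bone."]
}
# the standard analyzer lowercases the terms but keeps stop words and digits:
# the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone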

Example: the stop analyzer

POST _analyze
{
  "analyzer": "stop",
  "text": ["The 2 QUICK Brown Foxes jumped over the lazy dog's bone."]
}

Result:

{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 2
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 4
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 5
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 7
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 8
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 10
    }
  ]
}

Chinese Analysis

Install the IK Chinese analysis plugin

# Run this command from the Elasticsearch installation directory, then restart ES
bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.3.0/elasticsearch-analysis-ik-6.3.0.zip

# If the installation fails because of a slow network, download the zip archive first, change the path in the command below to its actual location, run it, and then restart ES
bin/elasticsearch-plugin install file:///path/to/elasticsearch-analysis-ik-6.3.0.zip

ik_smart test:

POST _analyze
{
  "analyzer": "ik_smart",
  "text": ["公安部:各地校车将享最高路权"]
}

# Result
{
  "tokens": [
    {
      "token": "公安部",
      "start_offset": 0,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "各地",
      "start_offset": 4,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "校车",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 2
    },
    {
      "token": "将",
      "start_offset": 8,
      "end_offset": 9,
      "type": "CN_CHAR",
      "position": 3
    },
    {
      "token": "享",
      "start_offset": 9,
      "end_offset": 10,
      "type": "CN_CHAR",
      "position": 4
    },
    {
      "token": "最高",
      "start_offset": 10,
      "end_offset": 12,
      "type": "CN_WORD",
      "position": 5
    },
    {
      "token": "路",
      "start_offset": 12,
      "end_offset": 13,
      "type": "CN_CHAR",
      "position": 6
    },
    {
      "token": "权",
      "start_offset": 13,
      "end_offset": 14,
      "type": "CN_CHAR",
      "position": 7
    }
  ]
}

ik_max_word test:

POST _analyze
{
  "analyzer": "ik_max_word",
  "text": ["公安部:各地校车将享最高路权"]
}

# Result
{
  "tokens": [
    {
      "token": "公安部",
      "start_offset": 0,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "公安",
      "start_offset": 0,
      "end_offset": 2,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "部",
      "start_offset": 2,
      "end_offset": 3,
      "type": "CN_CHAR",
      "position": 2
    },
    {
      "token": "各地",
      "start_offset": 4,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "校车",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 4
    },
    {
      "token": "将",
      "start_offset": 8,
      "end_offset": 9,
      "type": "CN_CHAR",
      "position": 5
    },
    {
      "token": "享",
      "start_offset": 9,
      "end_offset": 10,
      "type": "CN_CHAR",
      "position": 6
    },
    {
      "token": "最高",
      "start_offset": 10,
      "end_offset": 12,
      "type": "CN_WORD",
      "position": 7
    },
    {
      "token": "路",
      "start_offset": 12,
      "end_offset": 13,
      "type": "CN_CHAR",
      "position": 8
    },
    {
      "token": "权",
      "start_offset": 13,
      "end_offset": 14,
      "type": "CN_CHAR",
      "position": 9
    }
  ]
}
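
To use the IK analyzers on real data, set them on a text field in the index mapping. A minimal sketch, assuming a hypothetical news_index with a single content field (indexing with ik_max_word and searching with ik_smart is a common convention, not a requirement):

PUT news_index
{
  "mappings": {
    "doc": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}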

Custom Analysis

When the built-in analyzers do not meet your needs, you can customize analysis by defining your own combination of Character Filters, Tokenizer, and Token Filters.

Character Filters

Character Filters test

POST _analyze
{
  "tokenizer": "keyword",
  "char_filter": ["html_strip"],
  "text": ["<p>I&apos;m so <b>happy</b>!</p>"]
}

# Result
{
  "tokens": [
    {
      "token": """

I'm so happy!

""",
      "start_offset": 0,
      "end_offset": 32,
      "type": "word",
      "position": 0
    }
  ]
}
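
Besides html_strip, a char_filter can also be defined inline. A minimal sketch using the built-in mapping character filter to rewrite emoticons before tokenization (the mappings themselves are made up for illustration):

POST _analyze
{
  "tokenizer": "standard",
  "char_filter": [
    {
      "type": "mapping",
      "mappings": [":) => happy", ":( => sad"]
    }
  ],
  "text": ["I feel :) today"]
}
# the emoticon is rewritten to happy before the standard tokenizer runs,
# producing the tokens I, feel, happy, today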

Tokenizers

Tokenizers test

POST _analyze
{
  "tokenizer": "path_hierarchy",
  "text": ["/path/to/file"]
}

# Result
{
  "tokens": [
    {
      "token": "/path",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "/path/to",
      "start_offset": 0,
      "end_offset": 8,
      "type": "word",
      "position": 0
    },
    {
      "token": "/path/to/file",
      "start_offset": 0,
      "end_offset": 13,
      "type": "word",
      "position": 0
    }
  ]
}
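
A tokenizer can also be configured inline rather than referenced by name, just like the inline token filter in the next example. A minimal sketch of an ngram tokenizer with made-up gram sizes:

POST _analyze
{
  "tokenizer": {
    "type": "ngram",
    "min_gram": 3,
    "max_gram": 3,
    "token_chars": ["letter"]
  },
  "text": ["quick fox"]
}
# produces the 3-letter grams qui, uic, ick, fox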

Token Filters

Token Filters test

POST _analyze
{
  "text": [
    "a Hello World!"
  ],
  "tokenizer": "standard",
  "filter": [
    "stop",
    "lowercase",
    {
      "type": "ngram",
      "min_gram": 4,
      "max_gram": 4
    }
  ]
}

# Result
{
  "tokens": [
    {
      "token": "hell",
      "start_offset": 2,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "ello",
      "start_offset": 2,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "worl",
      "start_offset": 8,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "orld",
      "start_offset": 8,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 2
    }
  ]
}

Defining a custom analyzer

A custom analyzer is defined in the index settings by configuring char_filter, tokenizer, filter, analyzer, and so on.

A custom analyzer example:

PUT test_index_1
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type":      "custom",
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "uppercase",
            "asciifolding"
          ]
        }
      }
    }
  }
}

Testing the custom analyzer:

POST test_index_1/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": ["<p>I&apos;m so <b>happy</b>!</p>"]
}

# Result
{
  "tokens": [
    {
      "token": "I'M",
      "start_offset": 3,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "SO",
      "start_offset": 12,
      "end_offset": 14,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "HAPPY",
      "start_offset": 18,
      "end_offset": 27,
      "type": "<ALPHANUM>",
      "position": 2
    }
  ]
}
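
To apply the custom analyzer to data, reference it in a field mapping of the same index. A minimal sketch, assuming a hypothetical content field added to test_index_1:

PUT test_index_1/_mapping/doc
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_custom_analyzer"
    }
  }
}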

When Analysis Is Applied

Analysis is applied at two times:

  1. Index time: when a document is created or updated, each text field is analyzed with the analyzer configured for that field (the standard analyzer by default).
  2. Search time (query time): the query string is analyzed before it is matched against the index; by default the field's analyzer is reused unless a search_analyzer or a query-level analyzer is specified.
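
A minimal sketch of both, assuming a hypothetical blog_index with a title field: the mapping fixes the analyzer used at index time, while a match query can override the analyzer used at search time:

PUT blog_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title": { "type": "text", "analyzer": "standard" }
      }
    }
  }
}

GET blog_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Hello World",
        "analyzer": "simple"
      }
    }
  }
}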

Recommendations for using analysis

For more content, visit my personal website: http://laijianfeng.org
