
Writing an Elasticsearch Tokenizer Plugin by Hand

2018-11-18  豪1996

The business requirement was to write an Elasticsearch analysis plugin around a third-party tokenization framework that had no ES plugin version, so some wrapping was needed. The ES docs, however, contain no detailed explanation of how such a plugin is implemented.
This article uses the ik plugin as an example to show how an analysis plugin implements the ES interfaces.

Overview

An ES plugin needs the following files:
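For orientation, an installed plugin sits in its own directory under plugins/ in the ES home, roughly like this (the jar names are illustrative):

plugins/analysis-ik/
  elasticsearch-analysis-ik-6.4.0.jar   <-- the plugin's classes
  *.jar                                 <-- any dependency jars
  plugin-descriptor.properties          <-- mandatory metadata, covered next
  plugin-security.policy                <-- extra JVM permissions, covered below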

plugin-descriptor.properties

Below is the ik plugin's descriptor; for your own plugin, modify the ik example as needed:

# Elasticsearch plugin descriptor file
# This file must exist as 'plugin-descriptor.properties' at
# the root directory of all plugins.
#
# A plugin can be 'site', 'jvm', or both.
#
### example site plugin for "foo":
#
# foo.zip <-- zip file for the plugin, with this structure:
#   _site/ <-- the contents that will be served
#   plugin-descriptor.properties <-- example contents below:
#
# site=true
# description=My cool plugin
# version=1.0
#
### example jvm plugin for "foo"
#
# foo.zip <-- zip file for the plugin, with this structure:
#   <arbitrary name1>.jar <-- classes, resources, dependencies
#   <arbitrary nameN>.jar <-- any number of jars
#   plugin-descriptor.properties <-- example contents below:
#
# jvm=true
# classname=foo.bar.BazPlugin
# description=My cool plugin
# version=2.0.0-rc1
# elasticsearch.version=2.0
# java.version=1.7
#
### mandatory elements for all plugins:
#
# 'description': simple summary of the plugin
description=IK Analyzer for Elasticsearch
#
# 'version': plugin's version
version=6.4.0
#
# 'name': the plugin name
name=analysis-ik
#
# 'classname': the name of the class to load, fully-qualified.
classname=org.elasticsearch.plugin.analysis.ik.AnalysisIkPlugin
#
# 'java.version' version of java the code is built against
# use the system property java.specification.version
# version string must be a sequence of nonnegative decimal integers
# separated by "."'s and may have leading zeros
java.version=1.8
#
# 'elasticsearch.version' version of elasticsearch compiled against
# You will have to release a new version of the plugin for each new
# elasticsearch release. This version is checked when the plugin
# is loaded so Elasticsearch will refuse to start in the presence of
# plugins with the incorrect elasticsearch.version.
elasticsearch.version=6.4.0

plugin-security.policy

Java can apply permission management to code, so that when third-party code is invoked, malicious behavior can be contained; permissions cover things such as network access and I/O reads and writes.
The ik plugin's policy file:

grant {
  // needed because of the hot reload functionality
  permission java.net.SocketPermission "*", "connect,resolve";
};

If a required permission is missing, class loading fails with an error during ES startup and the node refuses to start; the exception message tells you which entry is missing.
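For example, if your tokenizer loads an external dictionary from disk, you would add a corresponding grant. This entry is hypothetical and the path is illustrative:

grant {
  // hypothetical: let the plugin read dictionary files from a fixed directory
  permission java.io.FilePermission "/path/to/dict/*", "read";
};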

Custom plugin

public class AnalysisIkPlugin extends Plugin implements AnalysisPlugin

These are the plugin interfaces: a plugin must extend Plugin, and an analyzer plugin must additionally implement the AnalysisPlugin interface.
The core is AnalysisPlugin, which is effectively a factory interface returning tokenizers, token filters, and so on.

@Override
public Map<String, AnalysisModule.AnalysisProvider<TokenizerFactory>> getTokenizers() {
    Map<String, AnalysisModule.AnalysisProvider<TokenizerFactory>> extra = new HashMap<>();

    extra.put("ik_smart", IkTokenizerFactory::getIkSmartTokenizerFactory);
    extra.put("ik_max_word", IkTokenizerFactory::getIkTokenizerFactory);

    return extra;
}

@Override
public Map<String, AnalysisModule.AnalysisProvider<AnalyzerProvider<? extends Analyzer>>> getAnalyzers() {
    Map<String, AnalysisModule.AnalysisProvider<AnalyzerProvider<? extends Analyzer>>> extra = new HashMap<>();

    extra.put("ik_smart", IkAnalyzerProvider::getIkSmartAnalyzerProvider);
    extra.put("ik_max_word", IkAnalyzerProvider::getIkAnalyzerProvider);

    return extra;
}
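The method references above compile because their signatures match the single method of AnalysisProvider, get(IndexSettings, Environment, String, Settings). For completeness, here is a sketch of the tokenizer factory, modeled on the ik source (Configuration and IKTokenizer are ik's own classes):

import org.apache.lucene.analysis.Tokenizer;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AbstractTokenizerFactory;
import org.wltea.analyzer.cfg.Configuration;
import org.wltea.analyzer.lucene.IKTokenizer;

public class IkTokenizerFactory extends AbstractTokenizerFactory {
    private final Configuration configuration;

    public IkTokenizerFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) {
        super(indexSettings, name, settings);
        this.configuration = new Configuration(env, settings);
    }

    // signature matches AnalysisProvider.get(IndexSettings, Environment, String, Settings)
    public static IkTokenizerFactory getIkTokenizerFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) {
        return new IkTokenizerFactory(indexSettings, env, name, settings).setSmart(false);
    }

    public static IkTokenizerFactory getIkSmartTokenizerFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) {
        return new IkTokenizerFactory(indexSettings, env, name, settings).setSmart(true);
    }

    public IkTokenizerFactory setSmart(boolean smart) {
        this.configuration.setUseSmart(smart);
        return this;
    }

    @Override
    public Tokenizer create() {
        // a fresh Tokenizer instance for each analysis chain
        return new IKTokenizer(configuration);
    }
}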

In your Tokenizer you need to implement the incrementToken, reset, and end methods.
The calling logic is roughly as follows:

t = new Tokenizer()   // pseudocode: in practice you instantiate a concrete subclass
t.reset();
while (t.incrementToken()) {
    // consume the current token's attributes
}
t.end();
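Here is a runnable version of that loop, using Lucene's WhitespaceTokenizer as a stand-in for a custom Tokenizer; note the setReader and close steps that the pseudocode omits:

import java.io.StringReader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class TokenStreamDemo {
    public static void main(String[] args) throws Exception {
        try (Tokenizer t = new WhitespaceTokenizer()) {
            // the input text is supplied through the Tokenizer's Reader
            t.setReader(new StringReader("hello elasticsearch plugin"));
            // the consumer sees the same shared attribute instances the producer fills in
            CharTermAttribute term = t.addAttribute(CharTermAttribute.class);
            OffsetAttribute offset = t.addAttribute(OffsetAttribute.class);
            t.reset();                    // must be called before incrementToken()
            while (t.incrementToken()) {  // advances to the next token
                System.out.println(term + " [" + offset.startOffset() + "," + offset.endOffset() + "]");
            }
            t.end();                      // records the final offset state
        }
    }
}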

Tokenizer has a Reader field; the input to be tokenized arrives through it:

/** The text source for this Tokenizer. */
protected Reader input = ILLEGAL_STATE_READER;

Producing tokens

Here is a test result obtained through ES's _analyze API:

{
 "tokens": [
   {
     "token": "测试",
     "start_offset": 0,
     "end_offset": 2,
     "type": "word",
     "position": 0
   },
   {
     "token": "文字",
     "start_offset": 2,
     "end_offset": 4,
     "type": "word",
     "position": 1
   }
 ]
}
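For reference, output of this shape comes from a request along these lines (assuming the ik plugin is installed; ik_smart is used here):

GET _analyze
{
  "analyzer": "ik_smart",
  "text": "测试文字"
}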

You can see several attributes: token, start_offset, end_offset, type, and position.
All of these have to be supplied by us, yet Tokenizer's methods return none of them.
Here is ik's constructor:

/**
 * Lucene 4.0 Tokenizer adapter class constructor
 */
public IKTokenizer(Configuration configuration){
    super();
    offsetAtt = addAttribute(OffsetAttribute.class);
    termAtt = addAttribute(CharTermAttribute.class);
    typeAtt = addAttribute(TypeAttribute.class);
    posIncrAtt = addAttribute(PositionIncrementAttribute.class);
    _IKImplement = new IKSegmenter(input, configuration);
}

The token attributes above are implemented as objects shared between ES and the Tokenizer: offsetAtt, termAtt, typeAtt, and posIncrAtt. Not all of them are mandatory; some attributes, such as type and position, have their own default construction.

@Override
public boolean incrementToken() throws IOException {
    // clear all token attributes
    clearAttributes();
    skippedPositions = 0;

    Lexeme nextLexeme = _IKImplement.next();
    if (nextLexeme != null) {
        posIncrAtt.setPositionIncrement(skippedPositions + 1);

        // convert the Lexeme into attributes
        // set the token text
        termAtt.append(nextLexeme.getLexemeText());
        // set the token length
        termAtt.setLength(nextLexeme.getLength());
        // set the token offsets
        offsetAtt.setOffset(correctOffset(nextLexeme.getBeginPosition()), correctOffset(nextLexeme.getEndPosition()));

        // record the end position of the last token
        endPosition = nextLexeme.getEndPosition();
        // record the token type
        typeAtt.setType(nextLexeme.getLexemeTypeString());
        // return true to signal that another token is available
        return true;
    }
    // return false to signal that the tokens are exhausted
    return false;
}

Note: don't forget the clearAttributes() call.
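The remaining two methods follow the same pattern. Based on the ik source, reset re-binds the segmenter to the current input Reader, and end records the offset just past the last token:

@Override
public void reset() throws IOException {
    super.reset();
    // point the segmenter at the (possibly new) input Reader
    _IKImplement.reset(input);
    skippedPositions = 0;
}

@Override
public final void end() throws IOException {
    super.end();
    // set the final offset to just past the last token
    int finalOffset = correctOffset(this.endPosition);
    offsetAtt.setOffset(finalOffset, finalOffset);
    posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + skippedPositions);
}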

QA

1. AccessController exception

As mentioned above, the plugin ships a plugin-security.policy file, but for reasons I could not determine, ES failed to load the file from the plugin directory automatically. If this happens, add -Djava.security.policy=/path/to/plugin-security.policy to config/jvm.options; the path can be absolute or relative to the ES root directory.
