
How Databend uses logos

2023-12-05  神奇的考拉


Databend tokenizes SQL using logos, wrapping it in its own `Tokenizer`:

pub struct Tokenizer<'a> {
    source: &'a str,               // the input text being tokenized
    lexer: Lexer<'a, TokenKind>,   // the logos lexer producing TokenKind values
    prev_token: Option<TokenKind>, // the token immediately before the current one
    eoi: bool,                     // whether the end of input (or an EOI in a hint) was reached
}

Note that `#[derive(Logos)]` is applied not to the `Token` struct but to the `TokenKind` enum:

#[derive(Clone, PartialEq, Eq)]
pub struct Token<'a> {
    pub source: &'a str,  // the text of this token
    pub kind: TokenKind,  // the kind of this token
    pub span: Range,      // the byte range of this token within the input SQL string
}

#[allow(non_camel_case_types)]
#[derive(Logos, EnumIter, Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub enum TokenKind {
    // variants omitted
}

About logos

logos is a lexer generator written in Rust; see the logos repository and documentation for details. Common ways to use it:

In addition, other logos attributes can be specified on the derived struct/enum:

#[derive(Logos, Debug, PartialEq)]
#[logos(skip r"[ \t\n\f]+")] // ignore this regex pattern between tokens
#[logos(error = LexingErr)]  // custom error type (must implement Default)
enum Token {
    // Tokens can be literal strings, of any length.
    #[token("fast")]
    Fast,

    #[token(".")]
    Period,

    #[token(",")]
    Comma,

    // Or regular expressions.
    #[regex("[a-zA-Z]+")]
    Text,

    // A number: the callback parses the slice into the payload,
    // and a failed parse is converted into LexingErr via the
    // From<ParseIntError> impl below.
    #[regex("[0-9]+", |lex| lex.slice().parse())]
    NUM(i64),
}

#[derive(Default, Debug, Clone, PartialEq)]
enum LexingErr {
    InvalidInteger(String),
    #[default]
    NonAsciiChar,
}

use std::num::IntErrorKind::{PosOverflow, NegOverflow};
use std::num::ParseIntError;


impl From<ParseIntError> for LexingErr {
    fn from(err: ParseIntError) -> Self {
        match err.kind() {
            PosOverflow | NegOverflow => LexingErr::InvalidInteger("overflow error".to_owned()),
            _ => LexingErr::InvalidInteger("other error".to_owned()),
        }
    }
}


for result in Token::lexer("select a, b from bar where c 1234") {
    match result {
        Ok(token) => println!("token = {:?}", token),
        Err(e) => panic!("lexing error: {:?}", e),
    }
}

A logos `Lexer` commonly exposes three accessors: `slice()` (the text of the current token), `span()` (its byte range in the input), and `remainder()` (the input not yet consumed).

How logos handles each item of a custom enum:

#[derive(Logos)]
enum Token {
    #[token(literal [, callback, priority = <integer>, ignore(<flag>, ...)])]
    #[regex(literal [, callback, priority = <integer>, ignore(<flag>, ...)])]
    SomeVariant,
}

In both `#[token]` and `#[regex]`, everything except `literal` is optional.

/// Update the line count and the char index.
fn newline_callback(lex: &mut Lexer<Token>) -> Skip {
    lex.extras.0 += 1;
    lex.extras.1 = lex.span().end;
    Skip
}

/// Compute the line and column position for the current word.
fn word_callback(lex: &mut Lexer<Token>) -> (usize, usize) {
    let line = lex.extras.0;
    let column = lex.span().start - lex.extras.1;

    (line, column)
}

/// Simple tokens to retrieve words and their location.
#[derive(Debug, Logos)]
#[logos(extras = (usize, usize))]
enum Token {
    #[regex(r"\n", newline_callback)]
    Newline,

    #[regex(r"\w+", word_callback)]
    Word((usize, usize)),
}

let mut lex = Token::lexer(r#"[package]
name = "rustdemos"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
spin = "0.9.8"
logos = "0.13.0"
regex = "1.10.2"
num = "0.4.1""#);

while let Some(token) = lex.next() {
    if let Ok(Token::Word((line, column))) = token {
        println!("Word '{}' found at ({}, {})", lex.slice(), line, column);
    }
}

When an item could be matched by multiple rules, logos resolves the conflict by priority: longer and more specific patterns (e.g. a fixed literal) compute a higher default priority than general regexes. If two rules end up with the same priority over the same input, the derive macro rejects the enum at compile time; setting an explicit `priority = <n>` on one of the rules resolves the ambiguity.
