
WSTA Review (1): Preprocessing

2018-05-03 · 阿漠不冷漠

1. Words and Corpora

e.g. They picnicked by the pool, then lay back on the grass and looked at the stars.

The sentence above has 16 tokens (ignoring punctuation) and 14 types: "the" occurs three times, so those three tokens share a single type.
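A quick sanity check of the counts, as a minimal sketch (punctuation is simply stripped here, matching the count above):

sentence = ("They picnicked by the pool, then lay back on the grass "
            "and looked at the stars.")
tokens = sentence.replace(',', '').replace('.', '').split()  # whitespace tokens, punctuation removed
types = set(tokens)                                          # distinct word forms
print(len(tokens), len(types))  # 16 14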

2. Text Normalization

Text normalization can involve the following procedures:

2.1 Segmenting/Tokenizing

Tokenization: the task of segmenting running text into words.
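As a sketch, NLTK's word_tokenize does this for English (the punkt model may need to be downloaded first; note that punctuation comes out as separate tokens, so the example sentence yields 18 tokens rather than 16):

import nltk
# nltk.download('punkt')  # tokenizer model, needed on first use

tokens = nltk.word_tokenize("They picnicked by the pool, then lay back "
                            "on the grass and looked at the stars.")
print(tokens)
# ['They', 'picnicked', 'by', 'the', 'pool', ',', 'then', 'lay', 'back',
#  'on', 'the', 'grass', 'and', 'looked', 'at', 'the', 'stars', '.']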

2.2 Normalizing

import nltk
# nltk.download('wordnet')  # the WordNet data is required on first use

lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()

def lemmatize(word):
    # Try the word as a verb first; if WordNet returns it unchanged,
    # fall back to treating it as a noun.
    lemma = lemmatizer.lemmatize(word, 'v')
    if lemma == word:
        lemma = lemmatizer.lemmatize(word, 'n')
    return lemma
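Example calls, as a small sanity check (the outputs assume the standard WordNet data):

print(lemmatize('looked'))  # 'look'  -- matched as a verb
print(lemmatize('stars'))   # 'star'
print(lemmatize('grass'))   # 'grass' -- already a base form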

2.3 Segmenting

The maximum matching algorithm starts by pointing at the beginning of a string. It chooses the longest word in the dictionary that matches the input at the current position. The pointer is then advanced to the end of that word in the string. If no word matches, the pointer is instead advanced one character. The algorithm is then iteratively applied again starting from the new pointer position.[1]

The code below is an example of the MaxMatch algorithm applied to English; however, the algorithm works much better for Chinese than for English, since Chinese text is written without spaces and Chinese words tend to be short.

def max_match(text_string, dictionary, word_list):
    '''
    Greedily segment an unsegmented string of alphabetic characters
    into words found in the dictionary.

    @param text_string: the unsegmented input string
    @param dictionary: a set (or other container) of known words
    @param word_list: accumulator for the matched words
    '''
    if not text_string:
        return word_list
    # Try the longest prefix first, shrinking one character at a time.
    for i in range(len(text_string), 0, -1):
        first_word = text_string[:i]
        remainder = text_string[i:]
        if lemmatize(first_word) in dictionary:  # match on the lemma, since the dictionary stores base forms
            break
    else:
        # No prefix matched: emit a single character and advance.
        first_word = text_string[0]
        remainder = text_string[1:]

    word_list.append(first_word)
    return max_match(remainder, dictionary, word_list)
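A hypothetical usage sketch (the tiny dictionary is invented for illustration; a set gives constant-time lookups, and lemmatize above needs the WordNet data). Note how 'stars' is matched through its lemma 'star':

dictionary = {'we', 'like', 'the', 'star'}  # base forms, as the lemma check expects
print(max_match('welikethestars', dictionary, []))
# ['we', 'like', 'the', 'stars']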

  1. J&M3 (Jurafsky & Martin, Speech and Language Processing, 3rd ed. draft), Chapter 2, p. 15.
