Sentence and Word Tokenization with NLTK

2020-01-08  sunney0

1. Split a paragraph into sentences (the Punkt sentence tokenizer)

import nltk
import nltk.data

def splitSentence(paragraph):
    # load the pre-trained English Punkt model shipped with the NLTK data
    tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    sentences = tokenizer.tokenize(paragraph)
    return sentences

if __name__ == '__main__':
    print(splitSentence("My name is Tom. I am a boy. I like soccer!"))
Output: ['My name is Tom.', 'I am a boy.', 'I like soccer!']
2. Split a sentence into words

from nltk.tokenize import WordPunctTokenizer

def wordtokenizer(sentence):
    # split the sentence into words and punctuation tokens
    words = WordPunctTokenizer().tokenize(sentence)
    return words

if __name__ == '__main__':
    print(wordtokenizer("My name is Tom."))
Output: ['My', 'name', 'is', 'Tom', '.']

Reposted from: https://my.oschina.net/u/3346994/blog/911733
