NLP: Building a Basic Spelling Corrector by Hand
2019-10-16
Jackpot_0213
1. Load the vocabulary
# Load the dictionary into a set for fast membership tests
vocab = set([line.rstrip() for line in open('vocab.txt')])
print(vocab)
Output:
{'Rousseau', 'capsules', 'penetrated', 'predicting', 'unmeshed', 'Epstein', 'Eduardo', 'timetables', 'mahogany', 'catalog', 'Sodium', 'distortion', 'gilded', 'urinals', 'gagwriters', 'Fires', 'against', 'banner', 'Summerspace', 'apartment', 'conjure', '72.6', 'morticians', 'Goodis', 'distraction', 'Vikulov', 'brains', 'kidnapping', 'depraved', 'shock', 'pumping', 'fluxes', 'fecundity', 'pirate', 'willy', ',', 'Danchin', 'superlunary', 'Thurber', 'reminds', 'flattery', 'image', 'kernel', 'synthesize', 'Clint', 'Browne', 'Mahayana', 'sad', 'Len', 'screenings', 'dispenser', 'sustained', 'geographers', 'plug', 'succeeded', 'Falls', 'unkind', 'Severe', 'exploded', 'chronicle', 'honorably', 'fruit', 'lapped', 'rotate', 'crises', 'gentlemanly', 'revival', 'Alsing', 'members', 'synergistic', 'nakedly', 'singular', 'Flood', 'enzymatic', 'eyewitness', 'Premier', 'unsure', 'swarming', 'depot', 'survivals', 'Grace', 'anthropologists', 'Heretic', 'pastes', 'modifier', 'Catholics', 'some', 'pitcher'
2. Generate all candidates
Candidate set: the possible misspellings of a correct word, or equivalently, the dictionary words a misspelled input could have been intended as.
Edit distance: the number of single-character insertions, deletions, and substitutions needed to turn a string (the misspelled input) into the corresponding correct word.
Note: the code below only generates candidates at edit distance 1, i.e. words reachable by a single edit operation.
# Generate all candidates for a given (misspelled) word
def generate_candidates(word):
    """
    word: the given (misspelled) input
    Returns all valid candidates, i.e. those found in the vocabulary.
    """
    # Generate every string at edit distance 1 using three operations:
    # 1. insert  2. delete  3. replace
    # e.g. for "appl":
    #   replace: bppl, cppl, aapl, abpl...
    #   insert:  bappl, cappl, abppl, acppl...
    #   delete:  ppl, apl, app
    # Assume the 26 lowercase letters as the alphabet
    letters = 'abcdefghijklmnopqrstuvwxyz'
    # splits enumerates every cut point; edits are then applied at each position
    # word[:i] is the prefix (characters 0..i-1), word[i:] is the suffix (characters i..end)
    splits = [(word[:i], word[i:]) for i in range(len(word)+1)]
    # insert: L is the prefix, c is the inserted letter (a-z), R is the suffix
    inserts = [L+c+R for L, R in splits for c in letters]
    # delete: prefix plus the suffix with its first character removed (only if the suffix is non-empty)
    deletes = [L+R[1:] for L, R in splits if R]
    # replace: prefix, plus a letter c, plus the suffix with its first character removed
    replaces = [L+c+R[1:] for L, R in splits if R for c in letters]
    candidates = set(inserts + deletes + replaces)
    # Filter out candidates that are not in the vocabulary
    return [word for word in candidates if word in vocab]
generate_candidates("apple")
Output:
['apple', 'ample', 'apply', 'apples']
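If the edit-distance-1 candidate set is empty, one common extension (the TODO in step 6 hints at this) is to also consider edit distance 2. A minimal sketch under that assumption; the helper names edits1 and generate_candidates_dist2 are introduced here for illustration, and edits1 deliberately skips the vocabulary filter because intermediate strings need not be real words:

def edits1(word):
    # Same single-edit generation as generate_candidates, but unfiltered
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word)+1)]
    inserts = [L+c+R for L, R in splits for c in letters]
    deletes = [L+R[1:] for L, R in splits if R]
    replaces = [L+c+R[1:] for L, R in splits if R for c in letters]
    return set(inserts + deletes + replaces)

def generate_candidates_dist2(word):
    # Apply the single-edit generator twice, then keep only dictionary words
    candidates = set()
    for e1 in edits1(word):
        candidates |= edits1(e1)
    return [w for w in candidates if w in vocab]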
3. Load the corpus
from nltk.corpus import reuters
# Load the corpus (a collection of tokenised sentences)
categories = reuters.categories()
corpus = reuters.sents(categories=categories)
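If the Reuters corpus has not been downloaded before, NLTK raises a LookupError when reading it; a one-time download fixes this (assuming the standard NLTK data setup):

import nltk
nltk.download('reuters')   # one-time download of the Reuters corpus data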
Format of spell-errors.txt (each line lists a correct word followed by observed misspellings; this file is used in step 5 below):
raining: rainning, raning
writings: writtings
disparagingly: disparingly
yellow: yello
4. Build the language model
Note: the language model below is a bigram model, i.e. it only considers the relationship between word i and word i+1.
# Build the language model: bigram
term_count = {}
bigram_count = {}
for doc in corpus:
    # Prepend a <s> token so the first word also has a preceding word and the
    # data stays uniform; for trigrams you would prepend two <s> tokens
    doc = ['<s>'] + doc
    for i in range(0, len(doc)-1):  # iterate over every word in the sentence
        # bigram: [i, i+1]
        term = doc[i]         # the i-th word
        bigram = doc[i:i+2]   # the i-th and (i+1)-th words
        # Update the counts
        if term in term_count:
            term_count[term] += 1
        else:
            term_count[term] = 1   # first occurrence
        bigram = ' '.join(bigram)
        if bigram in bigram_count:  # count of the two words appearing together
            bigram_count[bigram] += 1
        else:
            bigram_count[bigram] = 1
# sklearn also provides ready-made tools for this (see the sketch below)
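As the last comment notes, sklearn can do the bigram counting for you. A minimal sketch, assuming corpus is the list of tokenised sentences from step 3; note that CountVectorizer lowercases and re-tokenises by default, so its counts will not match the hand-rolled ones exactly:

from sklearn.feature_extraction.text import CountVectorizer

docs = [' '.join(sent) for sent in corpus]        # re-join tokens into plain strings
vectorizer = CountVectorizer(ngram_range=(2, 2))  # extract bigrams only
X = vectorizer.fit_transform(docs)                # sparse document-bigram count matrix
# total count of each bigram across the whole corpus
# (use get_feature_names() instead on older sklearn versions)
sklearn_bigram_count = dict(zip(vectorizer.get_feature_names_out(), X.sum(axis=0).A1))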
5. Estimate, from the data, the probability of each misspelling given its correct word
# Probability of the user making a given spelling mistake - channel probability
channel_prob = {}
# Read each correct word and its observed misspellings
for line in open('spell-errors.txt'):
    items = line.split(":")
    correct = items[0].strip()
    mistakes = [item.strip() for item in items[1].strip().split(",")]
    channel_prob[correct] = {}
    for mis in mistakes:
        # Spread the probability mass uniformly over the observed misspellings
        channel_prob[correct][mis] = 1.0/len(mistakes)
print(channel_prob)
Output:
{'raining': {'rainning': 0.5, 'raning': 0.5}, 'writings': {'writtings': 1.0}, 'disparagingly': {'disparingly': 1.0}, 'yellow': {'yello': 1.0},
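Using the entries shown above, the probability of typing "rainning" when "raining" was intended can be looked up directly:

channel_prob['raining']['rainning']   # 0.5 (two observed misspellings share the mass equally)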
6. Load the test data and run the spelling correction
Data format:
1 1 They told Reuter correspondents in Asian capitals a U.S. Move against Japan might boost protectionst sentiment in the U.S. And lead to curbs on American imports of their products.
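The code below scores each candidate c for a misspelled word w with a noisy-channel objective: score(c) = log p(w | c) + log p(c | previous word). The first term is the channel probability from step 5 (with a small constant 0.0001 for unseen pairs), and the second comes from the bigram counts of step 4 with add-one smoothing (falling back to 1/V when the bigram was never seen). The candidate with the highest score is chosen.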
Code:
import numpy as np

V = len(term_count.keys())

file = open("testdata.txt", 'r')
for line in file:
    # Each line has three tab-separated fields:
    # items[0]: sentence id, items[1]: number of misspelled words, items[2]: the sentence
    items = line.rstrip().split('\t')
    line = items[2].split()
    # line = ["I", "like", "playing"]
    for word in line:
        if word not in vocab:
            # word needs to be replaced by a correct word
            # Step 1: generate all (valid) candidates
            candidates = generate_candidates(word)
            # One option: if candidates == [], generate more candidates,
            # e.g. everything within edit distance 2
            # TODO: generate a larger candidate set when this happens
            if len(candidates) < 1:
                continue  # not recommended (simply skipping the word is not correct)
            probs = []
            # For each candidate, compute its score:
            # score = p(correct) * p(mistake|correct)
            #       = log p(correct) + log p(mistake|correct)
            # and return the candidate with the highest score
            for candi in candidates:
                prob = 0
                # a. channel probability
                if candi in channel_prob and word in channel_prob[candi]:
                    # probability estimated earlier from spell-errors.txt
                    prob += np.log(channel_prob[candi][word])
                else:
                    prob += np.log(0.0001)
                # b. language model probability
                idx = line.index(word)
                prev_word = line[idx - 1] if idx > 0 else '<s>'
                bigram = prev_word + ' ' + candi
                # If the bigram [prev_word, candi] was seen in the language model,
                # use its add-one smoothed probability
                if prev_word in term_count and bigram in bigram_count:
                    prob += np.log((bigram_count[bigram] + 1.0) /
                                   (term_count[prev_word] + V))
                    # TODO: also score the following bigram [candi, next_word]
                    # prob += np.log(that bigram's probability)
                # Otherwise assign a small but non-zero probability
                else:
                    prob += np.log(1.0 / V)
                probs.append(prob)
            max_idx = probs.index(max(probs))
            print(word, candidates[max_idx])
Output: since punctuation was not stripped from the sentences, tokens with trailing punctuation are also treated as misspellings.
protectionst protectionist
products. products
long-run, long-run
gain. gain
17, 17e
retaiation retaliation
cost. cost
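One easy improvement, suggested by the output above, is to strip surrounding punctuation from each token before the vocabulary lookup. A minimal sketch of such preprocessing (an assumption about one way to do it, not part of the original code):

import string

def normalise(token):
    # Remove leading/trailing punctuation such as ',' and '.' before checking the vocabulary
    return token.strip(string.punctuation)

# e.g. inside the correction loop:
# word = normalise(word)
# if word and word not in vocab: ...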