Chinese Sequence Labeling Tasks (Part 1)

2021-03-31  三方斜阳

Introduction:

These are notes on hands-on experiments with sequence labeling on a Chinese corpus, mainly using the tools HuggingFace provides: the datasets data-loading library, pretrained models, and so on.

1. Data preparation:

text1:中国#1实施 从明年起 将 实施|v 第九个 五年计划#2实施
text2:中国从明年起将实施第九个五年计划
label:B-S,I-S,O,O,O,O,O,B-V,I-V,O,O,O,B-O,I-O,I-O,I-O

This task is similar to named entity recognition: the goal is to identify the annotated subject, predicate, and object in each sentence, as marked in text1 above. The data is then processed into a CSV file.
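A minimal sketch of writing sentence/label pairs into that CSV, assuming the column names `id`, `text`, `ner_tags` (which match the dataset features shown later); the example row reuses the sentence above:

```python
import csv

# Each example: an id, the raw sentence, and comma-separated BIO tags,
# matching the features ['id', 'text', 'ner_tags'] used later.
rows = [
    {"id": 0,
     "text": "中国从明年起将实施第九个五年计划",
     "ner_tags": "B-S,I-S,O,O,O,O,O,B-V,I-V,O,O,O,B-O,I-O,I-O,I-O"},
]

with open("train.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "text", "ner_tags"])
    writer.writeheader()
    writer.writerows(rows)

# Sanity check: Chinese is labeled per character, so one tag per character.
for r in rows:
    assert len(r["ner_tags"].split(",")) == len(r["text"])
```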

2. The huggingface datasets library:

We use datasets to read the data. HuggingFace has open-sourced hundreds of public datasets, and the datasets library makes downloading and using them very convenient. Following the official tutorial, here is an example that downloads the conll2003 dataset:

from datasets import load_dataset, load_metric
datasets = load_dataset("conll2003")
print(datasets)
>>
DatasetDict({
    train: Dataset({
        features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
        num_rows: 14041
    })
    validation: Dataset({
        features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
        num_rows: 3250
    })
    test: Dataset({
        features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
        num_rows: 3453
    })
})

You can also specify how much of a split to load:

datasets = load_dataset("conll2003", split="train[:100]")
>>
Dataset({
    features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
    num_rows: 100
})
train_dataset = load_dataset('csv', encoding='utf-8', data_files=r'train.csv')
valid_dataset = load_dataset('csv', encoding='utf-8', data_files=r'valid.csv')
# Access the data as follows:
train_dataset
train_dataset['train']
train_dataset['train'][0]
train_dataset['train']['text']
>>
DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'ner_tags'],
        num_rows: 10000
    })
})
>>
Dataset({
    features: ['id', 'text', 'ner_tags'],
    num_rows: 10000
})
>>
{'id': 1, 'text': '本报讯记者周晓燕赴京参加中共十四届五中全会刚刚回到厦门的中共中央候补委员中共福建省委常委厦门市委书记石兆彬昨天在厦门市委召开的全市领导干部大会上传达了这次会议的主要精神', 'ner_tags': 'O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,B-V,I-V,O,O,O,O,O,O,O,O,B-O,I-O'}

3. Preparing the model

from transformers import AutoTokenizer   
import transformers
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)  # ensure the tokenizer is a fast tokenizer (word_ids() below requires one)

model_checkpoint = "bert-base-chinese"
# label_list is defined below; the official tutorial uses model_checkpoint = "distilbert-base-uncased"
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_list))

Next, the labels need to be converted to index ids and aligned with the tokens produced by the BERT tokenizer: some words get split into subwords, so the original label must also be mapped onto each subword. A few things in the function below deserve attention:

label_list=['O', 'B-V', 'I-V', 'B-O', 'I-O', 'B-S', 'I-S']
label_all_tokens = True
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["text"], truncation=True)
    labels = []
    for i, label in enumerate(examples[f"ner_tags"]):
        label=[label_list.index(item) for item in label.strip().split(',')]
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
train_tokenized_datasets = train_dataset.map(tokenize_and_align_labels, batched=True)
valid_tokenized_datasets = valid_dataset.map(tokenize_and_align_labels, batched=True)
print(train_tokenized_datasets)
print(valid_tokenized_datasets)
>>
DatasetDict({
    train: Dataset({
        features: ['attention_mask', 'id', 'input_ids', 'labels', 'ner_tags', 'text', 'token_type_ids'],
        num_rows: 10000
    })
})
DatasetDict({
    train: Dataset({
        features: ['attention_mask', 'id', 'input_ids', 'labels', 'ner_tags', 'text', 'token_type_ids'],
        num_rows: 3000
    })
})
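The alignment logic above can be illustrated without loading a tokenizer. Here is a self-contained sketch that reuses the same loop on a hand-made word_ids list (the word_ids values here are hypothetical; a fast tokenizer produces them via tokenized_inputs.word_ids()):

```python
label_list = ['O', 'B-V', 'I-V', 'B-O', 'I-O', 'B-S', 'I-S']
label_all_tokens = True

def align_labels(word_ids, label_ids_per_word):
    """Map per-word label ids onto subword tokens; special tokens get -100."""
    previous_word_idx = None
    aligned = []
    for word_idx in word_ids:
        if word_idx is None:                 # [CLS], [SEP], padding
            aligned.append(-100)
        elif word_idx != previous_word_idx:  # first subword of a word
            aligned.append(label_ids_per_word[word_idx])
        else:                                # later subwords of the same word
            aligned.append(label_ids_per_word[word_idx] if label_all_tokens else -100)
        previous_word_idx = word_idx
    return aligned

# Hypothetical example: 3 words, the second one split into two subwords.
labels = [label_list.index(t) for t in ["B-S", "B-V", "O"]]
word_ids = [None, 0, 1, 1, 2, None]        # [CLS] w0 w1a w1b w2 [SEP]
print(align_labels(word_ids, labels))      # -> [-100, 5, 1, 1, 0, -100]
```

With label_all_tokens = False, the repeated subword at position 3 would receive -100 instead and be ignored by the loss.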

4. Start training: define the TrainingArguments, Trainer, etc.:

task = "ner"       # not defined in the original snippet; used only in the run name
batch_size = 16    # not defined in the original snippet; the tutorial's default
args = TrainingArguments(
    f"test-{task}",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,  # batch size per device during training
    per_device_eval_batch_size=batch_size,   # batch size for evaluation
    num_train_epochs=3,
    weight_decay=0.01,
)
We also need a data collator that batches the processed examples together, padding each batch to the length of its longest example. The Transformers library provides a data collator for this task that pads not only the inputs but also the labels:

from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(tokenizer)
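What the collator does to the labels can be sketched in plain Python (a simplified illustration of the idea, not the library's implementation):

```python
def pad_label_batch(label_batch, pad_id=-100):
    """Pad every label sequence to the longest one in the batch with -100,
    so the padded positions are ignored by the loss (a simplified sketch of
    how DataCollatorForTokenClassification handles labels)."""
    max_len = max(len(seq) for seq in label_batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in label_batch]

batch = [[-100, 5, 1, -100], [-100, 0, -100]]
print(pad_label_batch(batch))  # -> [[-100, 5, 1, -100], [-100, 0, -100, -100]]
```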

metric = load_metric("seqeval")
labels = [ 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'O', 'O', 'O', 'B-PER', 'I-PER']
metric.compute(predictions=[labels], references=[labels])
>>
{'LOC': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2},
 'ORG': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
 'overall_precision': 1.0,
 'overall_recall': 1.0,
 'overall_f1': 1.0,
 'overall_accuracy': 1.0}
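seqeval scores at the entity level: a prediction only counts as correct if the whole BIO span matches. A minimal, stdlib-only sketch of the span extraction it relies on (simplified; stray I- tags without a preceding B- are ignored here, which differs from seqeval's lenient default):

```python
def extract_spans(tags):
    """Collect (entity_type, start, end) spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):   # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != etype):
            if etype is not None:            # close the span that just ended
                spans.append((etype, start, i))
            start, etype = ((i, tag[2:]) if tag.startswith("B-") else (None, None))
        # an I- tag continuing the current span needs no action
    return spans

tags = ['O', 'B-ORG', 'I-ORG', 'O', 'B-PER', 'I-PER']
print(extract_spans(tags))  # -> [('ORG', 1, 3), ('PER', 4, 6)]
```

Precision and recall are then computed over these spans: a predicted span is a true positive only if an identical (type, start, end) triple exists in the references.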
import numpy as np
def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)
    # Remove ignored index (special tokens)
    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
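To make the argmax and -100 filtering in compute_metrics concrete, here is a tiny example on hypothetical logits (the shapes mirror what the Trainer passes in: batch x tokens x labels):

```python
import numpy as np

label_list = ['O', 'B-V', 'I-V', 'B-O', 'I-O', 'B-S', 'I-S']

# Hypothetical logits for one sequence of 4 tokens over 7 labels:
# make 'O', 'B-S', 'B-V', 'O' the highest-scoring label per token.
logits = np.zeros((1, 4, len(label_list)))
logits[0, 0, 0] = logits[0, 1, 5] = logits[0, 2, 1] = logits[0, 3, 0] = 3.0
labels = np.array([[-100, 5, 1, -100]])   # special tokens carry -100

predictions = np.argmax(logits, axis=2)   # best label id per token
true_predictions = [
    [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
    for pred, lab in zip(predictions, labels)
]
print(true_predictions)  # -> [['B-S', 'B-V']]
```

Only the two tokens whose reference label is not -100 survive the filter, so special tokens never influence the score.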
print(train_tokenized_datasets)
print(valid_tokenized_datasets['train'])
trainer = Trainer(
    model,                                   # the instantiated 🤗 Transformers model to be trained
    args,                                    # training arguments, defined above
    train_dataset=train_tokenized_datasets['train'],
    eval_dataset=valid_tokenized_datasets['train'],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)

trainer.train()
trainer.evaluate()  

Below are some experiment notes from tweaking a few parameters:
Padding everything to length 512 (my longest example is around 503): slightly

tokenized_inputs = tokenizer(examples["text"], padding='max_length')