python - spaCy blank NER model underfits even when trained on a large dataset

Tags: python nlp stanford-nlp spacy named-entity-recognition

I'm trying to create a custom NER model to recognize cybersecurity-related entities (27 of them). I decided to use a blank model because I believed I had a large enough (not sure) training dataset: ~11k sentences extracted from Wikipedia.
To create the training data in the format spaCy requires, I used the PhraseMatcher utility. The idea is to match certain predefined words/phrases related to the entities I want to recognize, as shown below:

import spacy
from spacy.matcher import PhraseMatcher
nlp = spacy.load("en")

import pandas as pd
from tqdm import tqdm

from collections import defaultdict
Specify the matcher labels
users_pattern = [nlp(text) for text in ("user", "human", "person", "people", "end user")]
devices_pattern =  [nlp(text) for text in ("device", "peripheral", "appliance", "component", "accesory", "equipment", "machine")]
accounts_pattern = [nlp(text) for text in ("account", "user account", "username", "user name", "loginname", "login name", "screenname", "screen name", "account name")]
identifiers_pattern = [nlp(text) for text in ("attribute", "id", "ID", "code", "ID code")]
authentication_pattern = [nlp(text) for text in ("authentication", "authenticity", "certification", "verification", "attestation", "authenticator", "authenticators")]
time_pattern = [nlp(text) for text in ("time", "date", "moment", "present", "pace", "moment")]
unauthorized_pattern = [nlp(text) for text in ("unauthorized", "illegal", "illegitimate", "pirated", "unapproved", "unjustified", "unofficial")]
disclosure_pattern = [nlp(text) for text in ("disclosure", "acknowledgment", "admission", "exposure", "advertisement", "divulgation")]
network_pattern = [nlp(text) for text in ("network", "net", "networking", "internet", "Internet")]
wireless_pattern = [nlp(text) for text in ("wireless", "wifi", "Wi-Fi", "wireless networking")]
password_pattern = [nlp(text) for text in ("password", "passwords", "passcode", "passphrase")]
configuration_pattern = [nlp(text) for text in ("configuration", "composition")]
signatures_pattern = [nlp(text) for text in ("signature", "signatures", "digital signature", "electronic signature")]
certificates_pattern = [nlp(text) for text in ("certificate", "digital certificates", "authorization certificate", "public key certificates", "PKI", "X509", "X.509")]
revocation_pattern = [nlp(text) for text in ("revocation", "annulment", "cancellation")]
keys_pattern = [nlp(text) for text in ("key", "keys")]
algorithms_pattern = [nlp(text) for text in ("algorithm", "algorithms", "formula", "program")]
standard_pattern = [nlp(text) for text in ("standard", "standards", "specification", "specifications", "norm", "rule", "rules", "RFC")]
invalid_pattern = [nlp(text) for text in ("invalid", "false", "unreasonable", "inoperative")]
access_pattern = [nlp(text) for text in ("access", "connection", "entry", "entrance")]
blocking_pattern = [nlp(text) for text in ("blocking", "block", "blacklist", "blocklist", "close", "cut off", "deter", "prevent", "stop")]
notification_pattern = [nlp(text) for text in ("notification", "notifications", "notice", "warning")]
messages_pattern = [nlp(text) for text in ("message", "messages", "note", "news")]
untrusted_pattern = [nlp(text) for text in ("untrusted", "malicious", "unsafe")]
security_pattern = [nlp(text) for text in ("security", "secure", "securely", "protect", "defend", "guard")]
symmetric_pattern = [nlp(text) for text in ("symmetric", "symmetric crypto")]
asymmetric_pattern = [nlp(text) for text in ("asymmetric", "asymmetric crypto")]

matcher = PhraseMatcher(nlp.vocab)
matcher.add("USER", None, *users_pattern)
matcher.add("DEVICE", None, *devices_pattern)
matcher.add("ACCOUNT", None, *accounts_pattern)
matcher.add("IDENTIFIER", None, *identifiers_pattern)
matcher.add("AUTHENTICATION", None, *authentication_pattern)
matcher.add("TIME", None, *time_pattern)
matcher.add("UNAUTHORIZED", None, *unauthorized_pattern)
matcher.add("DISCLOSURE", None, *disclosure_pattern)
matcher.add("NETWORK", None, *network_pattern)
matcher.add("WIRELESS", None, *wireless_pattern)
matcher.add("PASSWORD", None, *password_pattern)
matcher.add("CONFIGURATION", None, *configuration_pattern)
matcher.add("SIGNATURE", None, *signatures_pattern)
matcher.add("CERTIFICATE", None, *certificates_pattern)
matcher.add("REVOCATION", None, *revocation_pattern)
matcher.add("KEY", None, *keys_pattern)
matcher.add("ALGORITHM", None, *algorithms_pattern)
matcher.add("STANDARD", None, *standard_pattern)
matcher.add("INVALID", None, *invalid_pattern)
matcher.add("ACCESS", None, *access_pattern)
matcher.add("BLOCKING", None, *blocking_pattern)
matcher.add("NOTIFICATION", None, *notification_pattern)
matcher.add("MESSAGE", None, *messages_pattern)
matcher.add("UNTRUSTED", None, *untrusted_pattern)
matcher.add("SECURITY", None, *security_pattern)
matcher.add("SYMMETRIC", None, *symmetric_pattern)
matcher.add("ASYMMETRIC", None, *asymmetric_pattern)
Prepare the training data
def offsetter(lbl, doc, matchitem):
    """
    Convert a PhraseMatcher result to the character-offset format
    (start, end, label) required by spaCy's training data.
    """
    o_one = len(str(doc[0:matchitem[1]]))    # characters before the matched tokens
    subdoc = doc[matchitem[1]:matchitem[2]]  # the matched tokens themselves
    o_two = o_one + len(str(subdoc))         # end offset of the match
    return (o_one, o_two, lbl)
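
Note that this string-length arithmetic can come up one character short: Span.text drops trailing whitespace, so for any match that does not start at token 0 the computed start offset misses the space before the match. A quick check of that claim (a sketch reusing the nlp and offsetter defined above; only indices 1 and 2 of the match tuple are used, so the match id can be None here):

doc = nlp("The user logged in")
print(offsetter("USER", doc, (None, 1, 2)))    # -> (3, 7, 'USER')
print(doc[1:2].start_char, doc[1:2].end_char)  # -> 4 8 ("user" really spans 4..8)

As far as I can tell, entity offsets that do not line up with token boundaries are treated as missing annotations during training, so misaligned spans contribute little supervision.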


to_train_ents = []
count_dic = defaultdict(int)

# Load the original sentences
df = pd.read_csv("sentences.csv", index_col=False)
phrases = df["sentence"].values

for line in tqdm(phrases):

    nlp_line = nlp(line)
    matches = matcher(nlp_line)
    
    if matches:
        
        for match in matches:

            match_id = match[0]
            start = match[1]
            end = match[2]

            label = nlp.vocab.strings[match_id]  # look up the string label, e.g. 'NETWORK'
            span = nlp_line[start:end]  # get the matched slice of the doc

            count_dic[label] += 1

            res = [offsetter(label, nlp_line, match)]
            to_train_ents.append((line, dict(entities=res)))
           
count_dic = dict(count_dic)
        
TRAIN_DATA = to_train_ents
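
For reference, the loop above emits one training example per match, so a sentence with several matches appears several times, each time annotated with only one of its entities. A sketch of an alternative that collects all non-overlapping matches of a sentence into a single example, using spaCy's exact character offsets (it reuses the nlp, matcher, and phrases defined above; grouped_train_data is a hypothetical name):

from spacy.tokens import Span
from spacy.util import filter_spans

grouped_train_data = []
for line in tqdm(phrases):
    doc = nlp(line)
    # Convert matches to Spans to get exact character offsets
    spans = [Span(doc, start, end, label=match_id)
             for match_id, start, end in matcher(doc)]
    spans = filter_spans(spans)  # drop overlapping matches, keeping the longest
    if spans:
        ents = [(s.start_char, s.end_char, s.label_) for s in spans]
        grouped_train_data.append((line, {"entities": ents}))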
Executing the preparation code gives me the training data in the format spaCy requires. The sentences contain the entities I'm interested in, distributed as follows:
print(sorted(count_dic.items(), key=lambda x:x[1], reverse=True), len(count_dic))
sum(count_dic.values())


[('NETWORK', 1962), ('TIME', 1489), ('USER', 1206), ('SECURITY', 981), ('DEVICE', 884), ('STANDARD', 796), ('ACCESS', 652), ('ALGORITHM', 651), ('MESSAGE', 605), ('KEY', 423), ('IDENTIFIER', 389), ('BLOCKING', 354), ('AUTHENTICATION', 141), ('WIRELESS', 109), ('UNAUTHORIZED', 99), ('CONFIGURATION', 89), ('ACCOUNT', 86), ('UNTRUSTED', 77), ('PASSWORD', 62), ('DISCLOSURE', 58), ('NOTIFICATION', 55), ('INVALID', 44), ('SIGNATURE', 41), ('SYMMETRIC', 23), ('ASYMMETRIC', 11), ('CERTIFICATE', 10), ('REVOCATION', 9)] 27
11306
I then train a blank NER model in spaCy using the standard training procedure, as shown below.
Train the blank model
import random

from spacy.util import minibatch, compounding

# define variables
model = None
n_iter = 100

if model is not None:
    nlp_new = spacy.load(model)  # load existing spaCy model
    print("Loaded model '%s'" % model)
else:
    nlp_new = spacy.blank("en")  # create blank Language class
    print("Created blank 'en' model")

# Add entity recognizer to model if it's not in the pipeline
# nlp.create_pipe works for built-ins that are registered with spaCy
if "ner" not in nlp_new.pipe_names:
    ner = nlp_new.create_pipe("ner")
    nlp_new.add_pipe(ner)
# otherwise, get it, so we can add labels to it
else:
    ner = nlp_new.get_pipe("ner")


# add labels
for _, annotations in TRAIN_DATA:
    for ent in annotations.get("entities"):
        ner.add_label(ent[2])
            
# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in nlp_new.pipe_names if pipe != "ner"]

with nlp_new.disable_pipes(*other_pipes):  # only train NER
    
    if model is None:
        optimizer = nlp_new.begin_training()
    else:
        optimizer = nlp_new.resume_training()
    
    
    # Batch sizes follow a compounding schedule (see: "spacy compounding batch size")
    sizes = compounding(1, 16, 1.001)
    
    # batch up the examples using spaCy's minibatch
    for itn in tqdm(range(n_iter)):
        losses = {}
        random.shuffle(TRAIN_DATA)
        batches = minibatch(TRAIN_DATA, size=sizes)
        for batch in batches:
            texts, annotations = zip(*batch)
            nlp_new.update(texts, annotations, sgd=optimizer, drop=0.2, losses=losses)
        print("Losses", losses)
The final loss after this is around 500.
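To keep the trained pipeline around between sessions, it can be serialized to disk and reloaded; a minimal sketch, with a hypothetical directory name:

# Persist the trained pipeline, then reload it for inference
nlp_new.to_disk("cybersec_ner_model")
nlp_loaded = spacy.load("cybersec_ner_model")
print(nlp_loaded("An unauthorized user accessed the network").ents)
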
Finally, I tested how the new model performs on the training data itself. I expected to recover close to all of the entities originally labeled in the training dataset. However, after running the code below, I only get about 600 instances in total, out of ~11k.
Test the trained model
count_dic = defaultdict(int)

for text, _ in TRAIN_DATA:
    
    doc = nlp_new(text)
    
    for ent in doc.ents:
        count_dic[ent.label_] += 1
        
print(sorted(count_dic.items(), key=lambda x:x[1], reverse=True), len(count_dic))
sum(count_dic.values())

[('TIME', 369), ('NETWORK', 47), ('IDENTIFIER', 41), ('BLOCKING', 28), ('USER', 22), ('STANDARD', 22), ('SECURITY', 15), ('MESSAGE', 15), ('ACCESS', 7), ('CONFIGURATION', 7), ('DEVICE', 7), ('KEY', 4), ('ALGORITHM', 3), ('SYMMETRIC', 2), ('UNAUTHORIZED', 2), ('SIGNATURE', 2), ('WIRELESS', 1), ('DISCLOSURE', 1), ('INVALID', 1), ('PASSWORD', 1), ('NOTIFICATION', 1)] 21
598
I'd like to know why this process produces a model that underfits like this. I'm aware of the comments in these posts: NER training using Spacy and SPACY custom NER is not returning any entity, but they don't solve my problem.
I'd appreciate any feedback on what I've done and on how I can improve entity detection on the training set. I think 11k sentences should be enough, unless I'm doing something wrong. I'm using Python 3.6.9 and spaCy 2.2.4.
Thanks a lot for your help.
Update
I decided to train the model with both positive and negative samples, so the training data now contains more than 40k sentences. This change did improve the classification results on the training set. Any other suggestions?
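For reference, a negative sample in spaCy's training format is simply a sentence with an empty entity list, which tells the model that none of its tokens belongs to an entity. A minimal sketch with made-up sentences:

TRAIN_DATA_EXTRA = [
    # Positive sample: entity spans given as character offsets
    ("Change your password regularly", {"entities": [(12, 20, "PASSWORD")]}),
    # Negative sample: the empty list marks every token as a non-entity
    ("The weather was pleasant yesterday", {"entities": []}),
]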
Training dataset
The full training dataset can be downloaded from here.

Best answer

The underfitting is probably because the blank spaCy model is too small to perform well in your case. In my experience, a blank spaCy model is about 5 MB, which is tiny (especially compared to spaCy's pretrained models, which are around 500 MB).
And you do have 27 distinct labels and a large amount of data.
I don't know whether it's possible to create a larger spaCy model from scratch. Answers welcome.
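
One experiment along these lines: start from a pretrained English model rather than a blank one, so the new NER component trains on top of pretrained word vectors. A sketch, assuming en_core_web_lg is installed:

import spacy

# Load a pretrained model (with word vectors) instead of spacy.blank("en")
nlp_new = spacy.load("en_core_web_lg")
if "ner" in nlp_new.pipe_names:
    nlp_new.remove_pipe("ner")  # drop the stock NER and its OntoNotes labels
ner = nlp_new.create_pipe("ner")
nlp_new.add_pipe(ner, last=True)
# ...then add the 27 custom labels and run the same training loop as in the question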

Original question on Stack Overflow: https://stackoverflow.com/questions/62272190/
