I'm having trouble fine-tuning the decomposable-attention-elmo model. I was able to download the model with: wget https://s3-us-west-2.amazonaws.com/allennlp/models/decomposable-attention-elmo-2018.02.19.tar.gz
I'm trying to load the model and then fine-tune it on my own data using the AllenNLP train command-line command.
I also created a custom dataset reader, similar to SNLIDatasetReader
, and it seems to work fine.
I created a .jsonnet
file similar to the one here, but I can't get it to work.
When I use this version:
// Configuration for a textual entailment model based on:
// Parikh, Ankur P. et al. “A Decomposable Attention Model for Natural Language Inference.” EMNLP (2016).
{
"dataset_reader": {
"type": "custom_reader",
"token_indexers": {
"elmo": {
"type": "elmo_characters"
}
},
"tokenizer": {
"end_tokens": ["@@NULL@@"]
}
},
"train_data_path": "examples_train_",
"validation_data_path": "examples_val_",
"model": {
"type": "from_archive",
"archive_file": "decomposable-attention-elmo-2018.02.19.tar.gz",
"text_field_embedder": {
"token_embedders": {
"elmo": {
"type": "elmo_token_embedder",
"do_layer_norm": false,
"dropout": 0.2
}
}
},
},
"data_loader": {
"batch_sampler": {
"type": "bucket",
"batch_size": 64
}
},
"trainer": {
"num_epochs": 140,
"patience": 20,
"grad_clipping": 5.0,
"validation_metric": "+accuracy",
"optimizer": {
"type": "adagrad"
}
}
}
I get this error:
File "lib/python3.6/site-packages/allennlp/common/params.py", line 423, in assert_empty
"Extra parameters passed to {}: {}".format(class_name, self.params)
allennlp.common.checks.ConfigurationError: Extra parameters passed to Model: {'text_field_embedder': {'token_embedders': {'elmo': {'do_layer_norm': False, 'dropout': 0.2, 'type': 'elmo_token_embedder'}}}}
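This first error is what AllenNLP's `Params.assert_empty` raises when a config block still contains keys after the constructor has consumed everything it knows about; the `from_archive` model type takes only `archive_file`, so the extra `text_field_embedder` block is rejected. A minimal plain-Python sketch of that check (not AllenNLP's actual code, just an illustration of the mechanism):

```python
# Sketch of AllenNLP's "consume keys, then assert_empty" pattern.
# Names here are illustrative, not AllenNLP's real classes.
class ConfigurationError(Exception):
    pass

def build_from_archive(params: dict) -> str:
    params = dict(params)  # work on a copy, like Params does
    params.pop("type", None)
    archive_file = params.pop("archive_file")  # the only key from_archive accepts
    if params:  # assert_empty: any leftover keys are an error
        raise ConfigurationError(
            "Extra parameters passed to Model: {}".format(params)
        )
    return archive_file
```

So any sibling key next to `archive_file` in the `model` block, including `text_field_embedder`, will trigger this error.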
Then, when I remove the text_field_embedder
section and use this version:
// Configuration for a textual entailment model based on:
// Parikh, Ankur P. et al. “A Decomposable Attention Model for Natural Language Inference.” EMNLP (2016).
{
"dataset_reader": {
"type": "fake_news",
"token_indexers": {
"elmo": {
"type": "elmo_characters"
}
},
"tokenizer": {
"end_tokens": ["@@NULL@@"]
}
},
"train_data_path": "examples_train_",
"validation_data_path": "examples_val_",
"model": {
"type": "from_archive",
"archive_file": "decomposable-attention-elmo-2018.02.19.tar.gz",
},
"data_loader": {
"batch_sampler": {
"type": "bucket",
"batch_size": 64
}
},
"trainer": {
"num_epochs": 140,
"patience": 20,
"grad_clipping": 5.0,
"validation_metric": "+accuracy",
"optimizer": {
"type": "adagrad"
}
}
}
I get this error:
raise ConfigurationError(msg)
allennlp.common.checks.ConfigurationError: key "token_embedders" is required at location "model.text_field_embedder."
These two errors seem contradictory, and I don't know how to get this fine-tuning to work.
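One way to see what the archived model actually expects is to read the config stored inside the archive itself; the second error comes from that archived config, not from your .jsonnet file. A stdlib-only sketch (assuming the standard AllenNLP archive layout, with a config.json at the top level of the tar.gz):

```python
import io
import json
import tarfile

def read_archive_config(path: str) -> dict:
    # AllenNLP model archives are tar.gz files containing, among other
    # things, the config.json the model was trained with; reading it
    # shows which keys the archived model expects to find.
    with tarfile.open(path, "r:gz") as tar:
        member = tar.extractfile("config.json")
        if member is None:
            raise FileNotFoundError("config.json not found in archive")
        return json.load(io.TextIOWrapper(member, encoding="utf-8"))
```

Running this on the 2018 archive should show whether its stored `model.text_field_embedder` block uses the old flat format rather than the newer `token_embedders` wrapper that current AllenNLP requires.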
Best Answer
We found out on GitHub that the problem was the old version of the model that @hockeybro was loading. The current latest version is https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz.
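With the newer archive, the question's second config shape should be the right one: `from_archive` takes only the archive path, and the embedder settings are read from the config stored inside the archive. A hedged sketch of the model section (file name taken from the link above, assuming it has been downloaded into the working directory):

```jsonnet
// Sketch: point "from_archive" at the 2020.04.09 archive and pass
// nothing else; adding sibling keys here triggers the assert_empty error.
"model": {
    "type": "from_archive",
    "archive_file": "decomposable-attention-elmo-2020.04.09.tar.gz"
}
```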
Regarding "python - Fine-tuning problems with the decomposable attention model in AllenNLP", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66844202/