python - Huggingface error: AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'

Tags: python pytorch tokenize huggingface-transformers huggingface-tokenizers

I am trying to use a WordLevel/BPE tokenizer to tokenize some numeric strings, create a data collator, and eventually use it in a PyTorch DataLoader to train a new model from scratch.

However, I get the error

AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'

when running the following code:

from transformers import DataCollatorForLanguageModeling
from tokenizers import ByteLevelBPETokenizer
from tokenizers.pre_tokenizers import Whitespace
import torch
from torch.utils.data import DataLoader, TensorDataset

data = ['4814 4832 4761 4523 4999 4860 4699 5024 4788 <unk>']

# Tokenizer
tokenizer = ByteLevelBPETokenizer()
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train_from_iterator(data, vocab_size=1000, min_frequency=1, 
    special_tokens=[
        "<s>",
        "</s>",
        "<unk>",
        "<mask>",
    ])

# Data Collator
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False
)

train_dataset = TensorDataset(torch.tensor(tokenizer(data, ......)))

# DataLoader
train_dataloader = DataLoader(
    train_dataset, 
    collate_fn=data_collator
)

Is this error caused by the tokenizer not having a pad_token_id configured? If so, how do I configure it?

Thanks!

Error traceback:

AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/transformers/data/data_collator.py", line 351, in __call__
    if self.tokenizer.pad_token_id is not None:
AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'

Conda packages

pytorch                   1.7.0           py3.8_cuda10.2.89_cudnn7.6.5_0    pytorch
pytorch-lightning         1.2.5              pyhd8ed1ab_0    conda-forge
tokenizers                0.10.1                   pypi_0    pypi
transformers              4.4.2                    pypi_0    pypi

Best answer

The error is telling you that the tokenizer needs an attribute called pad_token_id. You can either wrap the ByteLevelBPETokenizer in a class that has such an attribute (... and run into other missing attributes along the way), or use the wrapper class from the transformers library:

from transformers import PreTrainedTokenizerFast

# your tokenizer-training code from above
tokenizer.save(tokenizer_path)  # serialize the trained tokenizer to a JSON file
tokenizer = PreTrainedTokenizerFast(tokenizer_file=tokenizer_path)
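
For completeness, below is a minimal sketch of how the wrapped tokenizer can be plugged back into the rest of the pipeline from the question. The <pad> special token, the local file name tokenizer.json, and the list-of-dicts dataset are illustrative assumptions, not part of the original question or answer; the key point is that PreTrainedTokenizerFast needs its pad_token set explicitly before DataCollatorForLanguageModeling can pad batches.

from tokenizers import ByteLevelBPETokenizer
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast, DataCollatorForLanguageModeling
from torch.utils.data import DataLoader

data = ['4814 4832 4761 4523 4999 4860 4699 5024 4788 <unk>']

# Train the raw tokenizer as in the question, but also add a <pad> token (assumption)
tokenizer = ByteLevelBPETokenizer()
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train_from_iterator(
    data, vocab_size=1000, min_frequency=1,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Serialize to a JSON file and re-load it through the transformers wrapper,
# which exposes pad_token_id and the other attributes the collator expects
tokenizer.save("tokenizer.json")  # hypothetical local path
wrapped_tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    bos_token="<s>",
    eos_token="</s>",
    unk_token="<unk>",
    pad_token="<pad>",
    mask_token="<mask>",
)

# The collator can now pad, because wrapped_tokenizer.pad_token_id is defined
data_collator = DataCollatorForLanguageModeling(tokenizer=wrapped_tokenizer, mlm=False)

# Encode the strings and build a simple list-of-dicts dataset for the DataLoader
# (replaces the TensorDataset from the question, since the collator pads
# variable-length input_ids itself)
encodings = wrapped_tokenizer(data)
train_dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

train_dataloader = DataLoader(train_dataset, batch_size=1, collate_fn=data_collator)
batch = next(iter(train_dataloader))
print(batch["input_ids"].shape, batch["labels"].shape)

Note that wrapping the tokenizer only tells transformers which string is the pad token; the token itself still has to exist in the trained vocabulary, which is why <pad> is included in special_tokens above.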

Regarding python - Huggingface error: AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id', a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/66824985/
