python - HuggingFace save and load a model (Colab) for predictions

Tags: python google-colaboratory huggingface-transformers

I am training a Transformer model with HuggingFace to predict a target variable (e.g., movie ratings). I am new to Python, so this may be a simple question, but I cannot figure out how to save the trained classifier model (in Colab) and then reload it to make target-variable predictions on new data. For example, I trained a model to predict imdb ratings using the example from the HuggingFace resources, shown below. I have tried several approaches (save_model, save_pretrained) and either could not save the model at all, or, after loading it, did not know what to call to get predictions. Any help with the steps for saving, loading, and then creating new prediction scores from the model on test data would be appreciated.

#example mainly from here: https://huggingface.co/transformers/training.html
!pip install transformers
!pip install datasets

from datasets import load_dataset
raw_datasets = load_dataset("imdb")

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], max_length = 128, padding="max_length", truncation=True) 

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

#choosing small datasets for example#
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(500))

### TRAINING classification ###
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

from transformers import TrainingArguments
from transformers import Trainer

training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch", num_train_epochs=2, weight_decay=.0001, learning_rate=0.00001, per_device_train_batch_size=32) 

trainer = Trainer(model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset)
trainer.train()

y_test_predicted_original = trainer.predict(small_eval_dataset) #predictions on the eval set right after training

#### Saving ###
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My\ Drive/FOLDER

trainer.save_pretrained ("Trained model") #assumed this would save but did not
model.save_pretrained ("Trained model") #did save

### Loading Model and Creating Predicted Scores ###

#perhaps this....#
from transformers import BertConfig, BertModel
conf = BertConfig.from_pretrained("Trained model", num_labels=2)
model_loaded = AutoModelForSequenceClassification.from_pretrained("Trained model", config=conf)

#or...#
model_loaded = AutoModelForSequenceClassification.from_pretrained("Trained model", local_files_only=True)
model_loaded 

#with ultimate goal of getting predicted scores (not sure what to call here)...
y_test_predicted_loaded = model_loaded.predict(small_eval_dataset)

Best answer

Save the model

trainer.save_model("Trained model")

Load the model

model_loaded = AutoModelForSequenceClassification.from_pretrained("Trained model")

Predict

trainer = Trainer(model=model_loaded)
test_results = trainer.predict(small_eval_dataset)  # any tokenized dataset, e.g. the eval split from above
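
trainer.predict returns a PredictionOutput whose predictions field holds the raw logits (one row per example, one column per label). A small sketch of turning those logits into predicted classes and probability-like scores, assuming the test_results object from above:

import numpy as np

logits = test_results.predictions                         # shape: (num_examples, num_labels)
y_pred = np.argmax(logits, axis=-1)                       # predicted label per example (0 or 1 here)
shifted = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = shifted / shifted.sum(axis=-1, keepdims=True)     # softmax scores per class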

Regarding python - HuggingFace save and load a model (Colab) for predictions, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67949960/
