My neural network currently trains with batch_size = 1. To run it on multiple GPUs I need to increase the batch size to be larger than the number of GPUs, so I want batch_size = 16, but given the way my data is set up I don't know how to change it.
The data is read from a CSV file:
raw_data = pd.read_csv("final.csv")
train_data = raw_data[:750]
test_data = raw_data[750:]
The data is then normalized and converted to tensors:
# normalize features
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled_train = scaler.fit_transform(train_data)
scaled_test = scaler.transform(test_data)
# Turn into PyTorch tensors
train_data_normalized = torch.FloatTensor(scaled_train).view(-1)
test_data_normalized = torch.FloatTensor(scaled_test).view(-1)
The data is then turned into tensor tuples of the form ([input list], output), e.g. (tensor([1, 3, 56, 63, 3]), tensor([34])):
# Convert to tensor tuples
def input_series_sequence(input_data, tw):
    inout_seq = []
    L = len(input_data)
    i = 0
    for index in range(L - tw):
        train_seq = input_data[i:i + tw]
        train_label = input_data[i + tw:i + tw + 1]
        inout_seq.append((train_seq, train_label))
        i = i + tw
    return inout_seq
train_inout_seq = input_series_sequence(train_data_normalized, train_window)
test_input_seq = input_series_sequence(test_data_normalized, train_window)
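For concreteness, a list of (sequence, label) tuples like the one produced above can be batched by stacking it into a `TensorDataset` and wrapping that in a `DataLoader`. This is a minimal sketch with made-up random data standing in for `train_inout_seq` (window length 5 is illustrative, not taken from the question):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for train_inout_seq: 100 windows of length 5
train_inout_seq = [(torch.randn(5), torch.randn(1)) for _ in range(100)]

# Stack the tuples into two tensors, then pair them in a TensorDataset
seqs = torch.stack([s for s, _ in train_inout_seq])    # shape (100, 5)
labels = torch.stack([l for _, l in train_inout_seq])  # shape (100, 1)
dataset = TensorDataset(seqs, labels)

# drop_last=True avoids a smaller final batch, which matters if the
# model's hidden state is sized for a fixed batch
loader = DataLoader(dataset, batch_size=16, shuffle=True, drop_last=True)

for seq_batch, label_batch in loader:
    print(seq_batch.shape)   # torch.Size([16, 5])
    print(label_batch.shape) # torch.Size([16, 1])
    break
```

Each iteration now yields a batch of 16 sequences instead of a single (seq, label) pair.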
The model is then trained like this:
for i in range(epochs):
    for seq, labels in train_inout_seq:
        optimizer.zero_grad()
        model.module.hidden_cell = model.module.init_hidden()
        seq = seq.to(device)
        labels = labels.to(device)
        y_pred = model(seq)
        single_loss = loss_function(y_pred, labels)
        single_loss.backward()
        optimizer.step()
So I would like to know how exactly to change batch_size from 1 to 16. Do I need to use Dataset and DataLoader, and if so, how exactly do they fit into my current code? Thanks!
Edit: the model is defined like this; the forward function probably needs to change?
class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=100, output_size=1):
        super().__init__()
        self.hidden_layer_size = hidden_layer_size
        self.lstm = nn.LSTM(input_size, hidden_layer_size)
        self.linear = nn.Linear(hidden_layer_size, output_size)
        self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size),
                            torch.zeros(1, 1, self.hidden_layer_size))

    def init_hidden(self):
        return (torch.zeros(1, 1, self.hidden_layer_size),
                torch.zeros(1, 1, self.hidden_layer_size))

    def forward(self, input_seq):
        lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
        predictions = self.linear(lstm_out.view(len(input_seq), -1))
        return predictions[-1]
Accepted answer
You can do this by wrapping the model in the nn.DataParallel class:
model = nn.DataParallel(model)
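nn.DataParallel splits each input batch along dimension 0 across the available GPUs, so the model has to actually receive batches. Here is a minimal sketch of a batched training loop; the tiny feed-forward model and random data are stand-ins, not your exact LSTM setup, and on a CPU-only machine DataParallel simply falls through to the wrapped module:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-in model (window length 5 -> one prediction)
model = nn.Sequential(nn.Linear(5, 100), nn.ReLU(), nn.Linear(100, 1))
model = nn.DataParallel(model)  # scatters each batch across visible GPUs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Random data in place of the real (sequence, label) tensors
seqs, labels = torch.randn(100, 5), torch.randn(100, 1)
loader = DataLoader(TensorDataset(seqs, labels),
                    batch_size=16, shuffle=True, drop_last=True)

loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2):
    for seq_batch, label_batch in loader:
        seq_batch = seq_batch.to(device)
        label_batch = label_batch.to(device)
        optimizer.zero_grad()
        y_pred = model(seq_batch)     # batch dim 0 is split across GPUs
        loss = loss_function(y_pred, label_batch)
        loss.backward()
        optimizer.step()
```

The key change from the original loop is that each step consumes a (16, ...) batch from the DataLoader instead of one (seq, label) tuple.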
Since I don't currently have access to multiple GPUs or your data to test with, I'll direct you here
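Regarding the edit: yes, the forward function needs to handle a batch dimension. One way, shown here as a sketch rather than a drop-in replacement for your class, is to build the LSTM with batch_first=True and let it create a fresh zero hidden state for each batch instead of storing one sized for batch 1:

```python
import torch
import torch.nn as nn

class BatchLSTM(nn.Module):
    """Sketch of a batch-aware variant; the name is illustrative."""
    def __init__(self, input_size=1, hidden_layer_size=100, output_size=1):
        super().__init__()
        self.hidden_layer_size = hidden_layer_size
        # batch_first=True makes the input shape (batch, seq_len, features)
        self.lstm = nn.LSTM(input_size, hidden_layer_size, batch_first=True)
        self.linear = nn.Linear(hidden_layer_size, output_size)

    def forward(self, input_seq):
        # input_seq: (batch, seq_len); add a feature dimension of size 1
        x = input_seq.unsqueeze(-1)        # (batch, seq_len, 1)
        # Omitting the hidden state gives a zero state sized to this batch
        lstm_out, _ = self.lstm(x)         # (batch, seq_len, hidden)
        # Predict from the last time step of each sequence in the batch
        return self.linear(lstm_out[:, -1, :])  # (batch, output_size)

model = BatchLSTM()
out = model(torch.randn(16, 5))
print(out.shape)  # torch.Size([16, 1])
```

With this shape convention, model.module.init_hidden() and the stored hidden_cell are no longer needed in the training loop.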
On python - increasing the batch size of a PyTorch neural network dataset, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/58681854/