machine-learning - RNN is not training when batch size > 1 with variable length data

Tags: machine-learning nlp pytorch recurrent-neural-network

I am implementing a simple RNN network that predicts 1/0 for some variable-length time-series data. The network first feeds the training data into an LSTM cell and then uses a linear layer for classification.

Normally, we would train the network with mini-batches. The problem, however, is that this simple RNN network does not train at all when I use batch_size > 1.

I managed to create a minimal code sample that reproduces the problem. If you set batch_size=1 in main(), the network trains successfully, but if you set batch_size=2 it does not train at all and the loss just bounces back and forth. (Requires Python 3 and PyTorch >= 0.4.0.)

import numpy as np
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence


class ToyDataLoader(object):

    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.index = 0
        self.dataset_size = 10

        # generate 10 random variable length training samples,
        # each time step has 1 feature dimension
        self.X = [
            [[1], [1], [1], [1], [0], [0], [1], [1], [1]],
            [[1], [1], [1], [1]],
            [[0], [0], [1], [1]],
            [[1], [1], [1], [1], [1], [1], [1]],
            [[1], [1]],
            [[0]],
            [[0], [0], [0], [0], [0], [0], [0]],
            [[1]],
            [[0], [1]],
            [[1], [0]]
        ]

        # assign labels for the toy training set
        self.y = torch.LongTensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    def __len__(self):
        return self.dataset_size // self.batch_size

    def __iter__(self):
        return self

    def __next__(self):
        if self.index + self.batch_size > self.dataset_size:
            self.index = 0
            raise StopIteration()
        if self.index == 0:  # shuffle the dataset
            tmp = list(zip(self.X, self.y))
            random.shuffle(tmp)
            self.X, self.y = zip(*tmp)
            self.y = torch.LongTensor(self.y)
        X = self.X[self.index: self.index + self.batch_size]
        y = self.y[self.index: self.index + self.batch_size]
        self.index += self.batch_size
        return X, y


class NaiveRNN(nn.Module):
    def __init__(self):
        super(NaiveRNN, self).__init__()
        self.lstm = nn.LSTM(1, 128)
        self.linear = nn.Linear(128, 2)

    def forward(self, X):
        '''
        Parameter:
            X: list containing variable length training data
        '''

        # get the length of each seq in the batch
        seq_lengths = [len(x) for x in X]

        # convert to torch.Tensor
        seq_tensor = [torch.Tensor(seq) for seq in X]

        # sort seq_lengths and seq_tensor by length in descending order, as required by torch.nn.utils.rnn.pack_padded_sequence
        pairs = sorted(zip(seq_lengths, seq_tensor),
                       key=lambda pair: pair[0], reverse=True)
        seq_lengths = torch.LongTensor([pair[0] for pair in pairs])
        seq_tensor = [pair[1] for pair in pairs]

        # padded_seq shape: (seq_len, batch_size, feature_size)
        padded_seq = pad_sequence(seq_tensor)

        # pack them up
        packed_seq = pack_padded_sequence(padded_seq, seq_lengths.numpy())

        # feed to rnn
        packed_output, (ht, ct) = self.lstm(packed_seq)

        # linear classification layer
        y_pred = self.linear(ht[-1])

        return y_pred


def main():
    trainloader = ToyDataLoader(batch_size=2)  # not training at all!!!
    # trainloader = ToyDataLoader(batch_size=1)  # it converges!!!

    model = NaiveRNN()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adadelta(model.parameters(), lr=1.0)

    for epoch in range(30):
        # switch to train mode
        model.train()

        for i, (X, labels) in enumerate(trainloader):

            # compute output
            outputs = model(X)
            loss = criterion(outputs, labels)

            # measure accuracy and record loss
            _, predicted = torch.max(outputs, 1)
            accu = (predicted == labels).sum().item() / labels.shape[0]

            # compute gradient and do SGD step
            optimizer.zero_grad()
            loss.backward()

            optimizer.step()

            print('Epoch: [{}][{}/{}]\tLoss {:.4f}\tAccu {:.3f}'.format(
                epoch, i, len(trainloader), loss.item(), accu))


if __name__ == '__main__':
    main()
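
To make the shapes produced by the padding and packing steps in forward() concrete, here is a minimal standalone sketch with two made-up toy sequences (shapes as documented for pad_sequence and pack_padded_sequence):

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# two toy sequences of lengths 3 and 1, one feature per time step
seqs = [torch.Tensor([[1], [0], [1]]), torch.Tensor([[0]])]
lengths = [3, 1]  # already sorted in descending order

padded = pad_sequence(seqs)
print(padded.shape)        # torch.Size([3, 2, 1]) = (max_seq_len, batch_size, feature_size)

packed = pack_padded_sequence(padded, lengths)
print(packed.data.shape)   # torch.Size([4, 1]): only the 4 valid time steps are stored
print(packed.batch_sizes)  # tensor([2, 1, 1]): number of active sequences at each time step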

Sample output with batch_size=1:

...
Epoch: [28][7/10]       Loss 0.1582     Accu 1.000
Epoch: [28][8/10]       Loss 0.2718     Accu 1.000
Epoch: [28][9/10]       Loss 0.0000     Accu 1.000
Epoch: [29][0/10]       Loss 0.2808     Accu 1.000
Epoch: [29][1/10]       Loss 0.0000     Accu 1.000
Epoch: [29][2/10]       Loss 0.0001     Accu 1.000
Epoch: [29][3/10]       Loss 0.0149     Accu 1.000
Epoch: [29][4/10]       Loss 0.1445     Accu 1.000
Epoch: [29][5/10]       Loss 0.2866     Accu 1.000
Epoch: [29][6/10]       Loss 0.0170     Accu 1.000
Epoch: [29][7/10]       Loss 0.0869     Accu 1.000
Epoch: [29][8/10]       Loss 0.0000     Accu 1.000
Epoch: [29][9/10]       Loss 0.0498     Accu 1.000

Sample output with batch_size=2:

...
Epoch: [27][2/5]        Loss 0.8051     Accu 0.000
Epoch: [27][3/5]        Loss 1.2835     Accu 0.000
Epoch: [27][4/5]        Loss 1.0782     Accu 0.000
Epoch: [28][0/5]        Loss 0.5201     Accu 1.000
Epoch: [28][1/5]        Loss 0.6587     Accu 0.500
Epoch: [28][2/5]        Loss 0.3488     Accu 1.000
Epoch: [28][3/5]        Loss 0.5413     Accu 0.500
Epoch: [28][4/5]        Loss 0.6769     Accu 0.500
Epoch: [29][0/5]        Loss 1.0434     Accu 0.000
Epoch: [29][1/5]        Loss 0.4460     Accu 1.000
Epoch: [29][2/5]        Loss 0.9879     Accu 0.000
Epoch: [29][3/5]        Loss 1.0784     Accu 0.500
Epoch: [29][4/5]        Loss 0.6051     Accu 1.000

I have looked through a lot of material and still cannot figure out why this happens.

Best Answer

I think one major issue is that you are passing ht[-1] as the input to the linear layer. ht[-1] contains the state from the last time step, which is only valid for the inputs of maximal length.

To fix this, you need to unpack the output and take the output corresponding to each input's last valid time step. Here are the changes we need to make:

# feed to rnn
packed_output, (ht, ct) = self.lstm(packed_seq)

# unpack the output; this also needs the import:
# from torch.nn.utils.rnn import pad_packed_sequence
# lstm_out shape: (max_seq_len, batch_size, hidden_size)
lstm_out, seq_len = pad_packed_sequence(packed_output)

# vector containing the index of the last valid time step of each sequence
last_input = seq_len - 1

# batch indices 0, 1, ..., batch_size - 1
indices = torch.linspace(0, (seq_len.size(0)-1), steps=seq_len.size(0)).long()

# linear classification layer on each sequence's last valid output
y_pred = self.linear(lstm_out[last_input, indices, :])
return y_pred
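
The final line relies on advanced indexing to pick one time step per batch element; a tiny standalone sketch of the same trick, with made-up toy values purely for illustration:

import torch

out = torch.arange(24).long().view(4, 2, 3)  # (max_seq_len=4, batch_size=2, hidden_size=3)
last = torch.LongTensor([3, 1])              # last valid time step of each sequence
idx = torch.arange(2).long()                 # batch indices 0, 1

# picks out[3, 0, :] for the first sequence and out[1, 1, :] for the second
print(out[last, idx, :])
# tensor([[18, 19, 20],
#         [ 9, 10, 11]])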

I still could not get it to converge with the rest of the parameters unchanged, but this should help.

Regarding machine-learning - RNN is not training when batch size > 1 with variable length data, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50669405/
