python - DQN PyTorch loss keeps increasing

Tags: python machine-learning pytorch reinforcement-learning q-learning

I am implementing a simple DQN algorithm with pytorch to solve the CartPole environment from gym. I have been debugging for a while now and I cannot figure out why the model is not learning.
Observations:

  • Using SmoothL1Loss performs worse than MSELoss, but the loss increases with both
  • Smaller learning rates with Adam do not help; I have tested 0.0001, 0.00025, 0.0005, and the default

  Notes:
  • I have debugged each part of the algorithm separately and can say with fair confidence that the problem is in the learn function. I wonder whether the bug comes from my misunderstanding of detach in PyTorch or some other framework mistake (a small illustration of detach follows these notes).
  • I tried to stay as close as possible to the original paper (linked above)
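
As a quick sanity check on the detach question in the first note: here is a tiny standalone illustration (not from the original post) of what detach does in a TD-style target. Gradients flow back through the prediction but not through the detached bootstrap term, which matches how the learn method below uses it.

    # Minimal illustration of detach in a TD-style target (illustrative only).
    import torch as T

    net = T.nn.Linear(4, 2)
    s, s_ = T.randn(1, 4), T.randn(1, 4)

    prediction = net(s)[0, 0]
    # detach() stops gradient tracking through the second forward pass,
    # so the bootstrap term is treated as a constant target.
    target = 1.0 + 0.99 * net(s_).detach().max()

    loss = (prediction - target) ** 2
    loss.backward()   # only the prediction path receives gradients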

  References:
  • example: GitHub gist
  • example: PyTorch official

    import torch as T
    import torch.nn as nn
    import torch.nn.functional as F
    
    import gym
    import numpy as np
    
    
    class ReplayBuffer:
        def __init__(self, mem_size, input_shape, output_shape):
            self.mem_counter = 0
            self.mem_size = mem_size
            self.input_shape = input_shape
    
            self.actions = np.zeros(mem_size)
            self.states = np.zeros((mem_size, *input_shape))
            self.states_ = np.zeros((mem_size, *input_shape))
            self.rewards = np.zeros(mem_size)
            self.terminals = np.zeros(mem_size)
    
        def sample(self, batch_size):
            indices = np.random.choice(self.mem_size, batch_size)
            return self.actions[indices], self.states[indices], \
                self.states_[indices], self.rewards[indices], \
                self.terminals[indices]
    
        def store(self, action, state, state_, reward, terminal):
            index = self.mem_counter % self.mem_size
    
            self.actions[index] = action
            self.states[index] = state
            self.states_[index] = state_
            self.rewards[index] = reward
            self.terminals[index] = terminal
            self.mem_counter += 1
    
    
    class DeepQN(nn.Module):
        def __init__(self, input_shape, output_shape, hidden_layer_dims):
            super(DeepQN, self).__init__()
    
            self.input_shape = input_shape
            self.output_shape = output_shape
    
            layers = []
            layers.append(nn.Linear(*input_shape, hidden_layer_dims[0]))
            for index, dim in enumerate(hidden_layer_dims[1:]):
                layers.append(nn.Linear(hidden_layer_dims[index], dim))
            layers.append(nn.Linear(hidden_layer_dims[-1], *output_shape))
    
            self.layers = nn.ModuleList(layers)
    
            self.loss = nn.MSELoss()
            self.optimizer = T.optim.Adam(self.parameters())
    
        def forward(self, states):
            for layer in self.layers[:-1]:
                states = F.relu(layer(states))
            return self.layers[-1](states)
    
        def learn(self, predictions, targets):
            self.optimizer.zero_grad()
            loss = self.loss(input=predictions, target=targets)
            loss.backward()
            self.optimizer.step()
    
            return loss
    
    
    class Agent:
        def __init__(self, epsilon, gamma, input_shape, output_shape):
            self.input_shape = input_shape
            self.output_shape = output_shape
            self.epsilon = epsilon
            self.gamma = gamma
    
            self.q_eval = DeepQN(input_shape, output_shape, [64])
            self.memory = ReplayBuffer(10000, input_shape, output_shape)
    
            self.batch_size = 32
            self.learn_step = 0
    
        def move(self, state):
            if np.random.random() < self.epsilon:
                return np.random.choice(*self.output_shape)
            else:
                self.q_eval.eval()
                state = T.tensor([state]).float()
                action = self.q_eval(state).max(axis=1)[1]
                return action.item()
    
        def sample(self):
            actions, states, states_, rewards, terminals = \
                self.memory.sample(self.batch_size)
    
            actions = T.tensor(actions).long()
            states = T.tensor(states).float()
            states_ = T.tensor(states_).float()
            rewards = T.tensor(rewards).view(self.batch_size).float()
            terminals = T.tensor(terminals).view(self.batch_size).long()
    
            return actions, states, states_, rewards, terminals
    
        def learn(self, state, action, state_, reward, done):
            self.memory.store(action, state, state_, reward, done)
    
            if self.memory.mem_counter < self.batch_size:
                return
    
            self.q_eval.train()
            self.learn_step += 1
            actions, states, states_, rewards, terminals = self.sample()
            indices = np.arange(self.batch_size)
            q_eval = self.q_eval(states)[indices, actions]
            q_next = self.q_eval(states_).detach()
            q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)
    
            loss = self.q_eval.learn(q_eval, q_target)
            self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0
    
            return loss.item()
    
    
    def learn(env, agent, episodes=500):
        print('Episode: Mean Reward: Last Loss: Mean Step')
    
        rewards = []
        losses = [0]
        steps = []
        num_episodes = episodes
        for episode in range(num_episodes):
            done = False
            state = env.reset()
            total_reward = 0
            n_steps = 0
    
            while not done:
                action = agent.move(state)
                state_, reward, done, _ = env.step(action)
                loss = agent.learn(state, action, state_, reward, done)
    
                state = state_
                total_reward += reward
                n_steps += 1
    
                if loss:
                    losses.append(loss)
    
            rewards.append(total_reward)
            steps.append(n_steps)
    
            if episode % (episodes // 10) == 0 and episode != 0:
                print(f'{episode:5d} : {np.mean(rewards):5.2f} '
                      f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
                rewards = []
                losses = [0]
                steps = []
    
        print(f'{episode:5d} : {np.mean(rewards):5.2f} '
              f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
        return losses, rewards
    
    
    if __name__ == '__main__':
        env = gym.make('CartPole-v1')
        agent = Agent(1.0, 1.0,
                      env.observation_space.shape,
                      [env.action_space.n])
    
        learn(env, agent, 500)
    

Best Answer

I think the main problem is the discount factor, gamma. You set it to 1.0, which means you give future rewards the same weight as the current one. Usually in reinforcement learning we care more about immediate rewards than about the future, so gamma should always be less than 1.
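Concretely, that amounts to a one-argument change in the __main__ block of the code above; everything else stays the same:

    if __name__ == '__main__':
        env = gym.make('CartPole-v1')
        # discount future rewards instead of weighting them equally with immediate ones
        agent = Agent(1.0, 0.99,
                      env.observation_space.shape,
                      [env.action_space.n])

        learn(env, agent, 500)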
Just to try it out, I set gamma = 0.99 and ran your code:

    Episode: Mean Reward: Last Loss: Mean Step
      100 : 34.80 :  0.34: 34.80
      200 : 40.42 :  0.63: 40.42
      300 : 65.58 :  1.78: 65.58
      400 : 212.06 :  9.84: 212.06
      500 : 407.79 : 19.49: 407.79
    
As you can see, the loss still increases (though not as much as before), but so does the reward. You should keep in mind that the loss here is not a good performance metric, because you have a moving target. You can reduce the instability of the target by using a target network (a minimal sketch is included below). With additional parameter tuning and a target network, the loss would probably become more stable.
Also note that in reinforcement learning the loss value is not as important as it is in supervised learning; a decrease in loss does not always mean an improvement in performance, and vice versa.
The problem is that the Q target is moving while the training steps happen; as the agent plays, predicting the correct sum of rewards becomes very hard (for example, exploring more states and rewards means a higher variance in the returns), so the loss increases. This is even clearer in more complex environments (more states, varied rewards, and so on).
At the same time, the Q network is getting better at approximating the Q values of each action, which is why the reward (probably) increases.
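
The answer recommends a target network but does not show one, so here is a minimal sketch of what that could look like on top of the Agent class from the question. The subclass name AgentWithTargetNet and the sync_every value are illustrative choices, not part of the original post:

    import copy
    import numpy as np

    # Sketch only: assumes the Agent class defined in the question is in scope.
    class AgentWithTargetNet(Agent):
        def __init__(self, epsilon, gamma, input_shape, output_shape, sync_every=200):
            super().__init__(epsilon, gamma, input_shape, output_shape)
            # Frozen copy of the online network, used only to compute TD targets.
            self.q_next_net = copy.deepcopy(self.q_eval)
            self.sync_every = sync_every

        def learn(self, state, action, state_, reward, done):
            self.memory.store(action, state, state_, reward, done)
            if self.memory.mem_counter < self.batch_size:
                return

            self.q_eval.train()
            self.learn_step += 1

            # Periodically copy the online weights into the target network.
            if self.learn_step % self.sync_every == 0:
                self.q_next_net.load_state_dict(self.q_eval.state_dict())

            actions, states, states_, rewards, terminals = self.sample()
            indices = np.arange(self.batch_size)
            q_eval = self.q_eval(states)[indices, actions]
            # Bootstrap from the slowly moving target network instead of the online one.
            q_next = self.q_next_net(states_).detach()
            q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)

            loss = self.q_eval.learn(q_eval, q_target)
            self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0
            return loss.item()

The sync interval of 200 learn steps is only a placeholder; how often to copy the weights (and the other hyperparameters) would still need tuning.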

Original question on Stack Overflow: https://stackoverflow.com/questions/67789148/
