I'm trying to implement Dueling DQN, but with the network architecture built this way it doesn't seem to learn:
X_input = Input(shape=(self.state_size,))
X = X_input
X = Dense(512, input_shape= (self.state_size,), activation="relu")(X_input)
X = Dense(260, activation="relu")(X)
X = Dense(100, activation="relu")(X)
state_value = Dense(1)(X)
state_value = Lambda(lambda v: v, output_shape=(self.action_size,))(state_value)
action_advantage = Dense(self.action_size)(X)
action_advantage = Lambda(lambda a: a[:, :] - K.mean(a[:, :], keepdims=True), output_shape=(self.action_size,))(action_advantage)
X = Add()([state_value, action_advantage])
model = Model(inputs = X_input, outputs = X)
model.compile(loss="mean_squared_error", optimizer=Adam(lr=self.learning_rate))
return model
I searched online and found some code (better than mine); the only difference is:
state_value = Lambda(lambda s: K.expand_dims(s[:, 0],-1), output_shape=(self.action_size,))(state_value)
Link to the code: https://github.com/pythonlessons/Reinforcement_Learning/blob/master/03_CartPole-reinforcement-learning_Dueling_DDQN/Cartpole_Double_DDQN.py#L31 My code runs, so I don't understand why it doesn't learn. I also don't understand why he takes only the first value of each row of the tensor.
Accepted answer
Expanding the dimensions of the state value ensures that, when Add() is applied, it is broadcast and added to every advantage value.
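A minimal NumPy sketch of that broadcasting (the batch size and action count here are illustrative, not from the original code): a (batch, 1) state value broadcasts across a (batch, action_size) advantage tensor, so V(s) is added to every advantage in the row.

```python
import numpy as np

batch, action_size = 2, 4

# V(s): one scalar per sample, shape (batch, 1)
state_value = np.array([[1.0], [2.0]])

# A(s, a): one value per action, shape (batch, action_size)
advantage = np.arange(batch * action_size, dtype=float).reshape(batch, action_size)

# (batch, 1) + (batch, action_size) broadcasts to (batch, action_size):
# each row's V(s) is added to every mean-centered advantage in that row
q = state_value + (advantage - advantage.mean(axis=1, keepdims=True))
print(q.shape)  # (2, 4)
```

Without the expand_dims, the state-value branch does not have this (batch, 1) shape, so the Add() does not combine the two streams the way the dueling architecture intends.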
You could also write it this way: drop the Lambda functions and spell out the actual Q-value computation directly:
X = (state_value + (action_advantage - tf.math.reduce_mean(action_advantage, axis=1, keepdims=True)))
The result is the same, but the code is arguably more readable.
So, all in all, your code would look like this:
X_input = Input(shape=(self.state_size,))
X = X_input
X = Dense(512, input_shape= (self.state_size,), activation="relu")(X_input)
X = Dense(260, activation="relu")(X)
X = Dense(100, activation="relu")(X)
state_value = Dense(1)(X)
action_advantage = Dense(self.action_size)(X)
X = (state_value + (action_advantage - tf.math.reduce_mean(action_advantage, axis=1, keepdims=True)))
model = Model(inputs = X_input, outputs = X)
model.compile(loss="mean_squared_error", optimizer=Adam(lr=self.learning_rate))
return model
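As a self-contained sketch of the above (the state_size/action_size values and the learning rate are placeholders I picked, and recent Keras versions spell Adam's argument `learning_rate` rather than `lr`):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

state_size, action_size = 4, 2  # placeholder CartPole-like dimensions

X_input = Input(shape=(state_size,))
X = Dense(512, activation="relu")(X_input)
X = Dense(260, activation="relu")(X)
X = Dense(100, activation="relu")(X)

state_value = Dense(1)(X)                 # V(s), shape (None, 1)
action_advantage = Dense(action_size)(X)  # A(s, a), shape (None, action_size)

# Dueling combination: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
Q = state_value + (action_advantage
                   - tf.math.reduce_mean(action_advantage, axis=1, keepdims=True))

model = Model(inputs=X_input, outputs=Q)
model.compile(loss="mean_squared_error", optimizer=Adam(learning_rate=0.001))
print(model.output_shape)  # (None, 2)
```

Because `state_value` has shape (None, 1) and the centered advantages have shape (None, action_size), the `+` broadcasts exactly as the expand_dims version does, with no Lambda layers needed.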
Regarding python - Dueling DQN with Keras, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/62336594/