I built a Deep Q-Network with TensorFlow. When I try to create two of them (I want the network to play against itself), I get:
ValueError: Trying to share variable dense/kernel, but specified shape (100, 160) and found shape (9, 100).
Here is my network:
import tensorflow as tf


class QNetwork:
    """
    A Q-Network implementation
    """
    def __init__(self, input_size, output_size, hidden_layers_size, gamma, maximize_entropy, reuse):
        self.q_target = tf.placeholder(shape=(None, output_size), dtype=tf.float32)
        self.r = tf.placeholder(shape=None, dtype=tf.float32)
        self.states = tf.placeholder(shape=(None, input_size), dtype=tf.float32)
        self.enumerated_actions = tf.placeholder(shape=(None, 2), dtype=tf.int32)
        self.learning_rate = tf.placeholder(shape=[], dtype=tf.float32)
        layer = self.states
        for l in hidden_layers_size:
            layer = tf.layers.dense(inputs=layer, units=l, activation=tf.nn.relu,
                                    kernel_initializer=tf.contrib.layers.xavier_initializer(),
                                    reuse=reuse)
        self.output = tf.layers.dense(inputs=layer, units=output_size,
                                      kernel_initializer=tf.contrib.layers.xavier_initializer(),
                                      reuse=reuse)
        self.predictions = tf.gather_nd(self.output, indices=self.enumerated_actions)
        if maximize_entropy:
            self.future_q = tf.log(tf.reduce_sum(tf.exp(self.q_target), axis=1))
        else:
            self.future_q = tf.reduce_max(self.q_target, axis=1)
        self.labels = self.r + (gamma * self.future_q)
        self.cost = tf.reduce_mean(tf.losses.mean_squared_error(labels=self.labels, predictions=self.predictions))
        self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.cost)
This code fails:
q1 = QNetwork(9, 9, [100, 160, 160, 100], gamma=0.99, maximize_entropy=False, reuse=tf.AUTO_REUSE)
q2 = QNetwork(9, 9, [100, 160, 160, 100], gamma=0.99, maximize_entropy=False, reuse=tf.AUTO_REUSE)
Any idea how to solve this? (Running TF 1.10.1, Python 3.6.5)
Best answer
Solved. I needed to:
- give each layer a unique name
- use reuse=tf.AUTO_REUSE
- put everything inside a variable_scope (because of the Adam optimizer)
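The steps above can be sketched as a rewritten constructor. This is a minimal illustration, not the answerer's exact code: each instance takes a hypothetical `name` argument that opens its own variable_scope, each dense layer gets an explicit name, and the Adam minimize op is created inside the scope so its slot variables land there too. It is written against the `tf.compat.v1` API so it also runs on modern TensorFlow installs; under TF 1.10 you would drop the `compat.v1` prefix, and `tf.glorot_uniform_initializer` stands in for `tf.contrib.layers.xavier_initializer` (they implement the same initializer).

```python
import tensorflow.compat.v1 as tf  # under TF 1.x: import tensorflow as tf
tf.disable_v2_behavior()


class QNetwork:
    """A Q-Network whose variables live in a uniquely named scope."""
    def __init__(self, name, input_size, output_size, hidden_layers_size,
                 gamma, maximize_entropy):
        # A unique variable_scope per instance keeps the two networks'
        # weights separate; reuse=tf.AUTO_REUSE allows reuse only within
        # this scope instead of colliding across instances.
        with tf.variable_scope(name, reuse=tf.AUTO_REUSE):
            self.q_target = tf.placeholder(shape=(None, output_size), dtype=tf.float32)
            self.r = tf.placeholder(shape=None, dtype=tf.float32)
            self.states = tf.placeholder(shape=(None, input_size), dtype=tf.float32)
            self.enumerated_actions = tf.placeholder(shape=(None, 2), dtype=tf.int32)
            self.learning_rate = tf.placeholder(shape=[], dtype=tf.float32)
            layer = self.states
            for i, l in enumerate(hidden_layers_size):
                # An explicit, unique name per layer stops tf.layers from
                # reusing the default 'dense' name (the source of the error).
                layer = tf.layers.dense(
                    inputs=layer, units=l, activation=tf.nn.relu,
                    kernel_initializer=tf.glorot_uniform_initializer(),
                    name='dense_%d' % i)
            self.output = tf.layers.dense(
                inputs=layer, units=output_size,
                kernel_initializer=tf.glorot_uniform_initializer(),
                name='output')
            self.predictions = tf.gather_nd(self.output, indices=self.enumerated_actions)
            if maximize_entropy:
                self.future_q = tf.log(tf.reduce_sum(tf.exp(self.q_target), axis=1))
            else:
                self.future_q = tf.reduce_max(self.q_target, axis=1)
            self.labels = self.r + (gamma * self.future_q)
            self.cost = tf.reduce_mean(tf.losses.mean_squared_error(
                labels=self.labels, predictions=self.predictions))
            # Adam's slot variables are also created inside the scope.
            self.optimizer = tf.train.AdamOptimizer(
                learning_rate=self.learning_rate).minimize(self.cost)


# Two independent networks now coexist in one graph:
q1 = QNetwork('q1', 9, 9, [100, 160, 160, 100], gamma=0.99, maximize_entropy=False)
q2 = QNetwork('q2', 9, 9, [100, 160, 160, 100], gamma=0.99, maximize_entropy=False)
```

With the scope names `q1/...` and `q2/...`, the two sets of weights no longer share variables, so the shape mismatch disappears.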
Regarding "python - Tensorflow: cannot share dense/kernel - ValueError: Trying to share variable dense/kernel, but specified shape (100, 160) and found shape (9, 100)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55600309/