python-3.x - Tensorflow 2.1 full memory and tf.function called twice

Tags: python-3.x tensorflow tensorflow2.0 tf.keras

I am developing a convolutional autoencoder with TensorFlow 2.1.

Here is the code:

# Imports inferred from the names used below (not shown in the original post)
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose, Dense,
                                     Flatten, Reshape)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tqdm import tqdm


class ConvAutoencoder:

    def __init__(self, input_shape, latent_dim):
        self.input_shape = input_shape
        self.latent_dim = latent_dim
        self.__create_model()

    def __create_model(self):
        # Define Encoder
        encoder_input = Input(shape=self.input_shape, name='encoder_input')
        x = Conv2D(filters=16, kernel_size=5, activation='relu', padding='same')(encoder_input)
        x = Conv2D(filters=32, kernel_size=3, strides=2, activation='relu', padding='same')(x)
        x = Conv2D(filters=64, kernel_size=3, strides=2, activation='relu', padding='same')(x)
        x = Conv2D(filters=128, kernel_size=2, strides=2, activation='relu', padding='same')(x)
        last_conv_shape = x.shape
        x = Flatten()(x)
        x = Dense(256, activation='relu')(x)
        x = Dense(units=self.latent_dim, name='encoded_rep')(x)
        self.encoder = Model(encoder_input, x, name='encoder_model')
        self.encoder.summary()

        # Define Decoder
        decoder_input = Input(shape=self.latent_dim, name='decoder_input')
        x = Dense(units=256)(decoder_input)
        x = Dense(units=(last_conv_shape[1] * last_conv_shape[2] * last_conv_shape[3]), activation='relu')(x)
        x = Reshape(target_shape=(last_conv_shape[1], last_conv_shape[2], last_conv_shape[3]))(x)
        x = Conv2DTranspose(filters=128, kernel_size=2, activation='relu', padding='same')(x)
        x = Conv2DTranspose(filters=64, kernel_size=3, strides=2, activation='relu', padding='same')(x)
        x = Conv2DTranspose(filters=32, kernel_size=3, strides=2, activation='relu', padding='same')(x)
        x = Conv2DTranspose(filters=16, kernel_size=5, strides=2, activation='relu', padding='same')(x)
        x = Conv2DTranspose(filters=self.input_shape[2], kernel_size=5, activation='sigmoid', padding='same')(x)
        self.decoder = Model(decoder_input, x, name='decoder_model')
        self.decoder.summary()

        # Define Autoencoder from encoder input to decoder output
        self.autoencoder = Model(encoder_input, self.decoder(self.encoder(encoder_input)))
        self.optimizer = Adam()
        self.autoencoder.summary()


@tf.function
def compute_loss(model, batch):
    decoded = model.autoencoder(batch)
    return tf.reduce_mean(tf.reduce_sum(tf.square(batch - decoded), axis=[1, 2, 3]))


@tf.function
def train(train_data, model, epochs=2, batch_size=32):
    for epoch in range(epochs):
        for i in tqdm(range(0, len(train_data), batch_size)):
            batch = train_data[i: i + batch_size]
            with tf.GradientTape() as tape:
                loss = compute_loss(model, batch)
            gradients = tape.gradient(loss, model.autoencoder.trainable_variables)
            model.optimizer.apply_gradients(zip(gradients, model.autoencoder.trainable_variables))


if __name__ == "__main__":
    img_dim = 64
    channels = 1

    (x_train, _), (x_test, _) = mnist.load_data()
    # Resize images to (img_dim x img_dim)
    x_train = np.array([cv2.resize(img, (img_dim, img_dim)) for img in x_train])
    x_test = np.array([cv2.resize(img, (img_dim, img_dim)) for img in x_test])

    # Normalize images
    x_train = x_train.astype('float32') / 255.
    x_test = x_test.astype('float32') / 255.

    # Reshape datasets for tensorflow
    x_train = x_train.reshape((-1, img_dim, img_dim, channels))
    x_test = x_test.reshape((-1, img_dim, img_dim, channels))

    # Create autoencoder and fit the model
    autoenc = ConvAutoencoder(input_shape=(img_dim, img_dim, channels), latent_dim=4)

    # Train autoencoder
    train(train_data=x_train, model=autoenc, epochs=2, batch_size=32)

Now, the problem is twofold:

  • The train() function decorated with @tf.function is called twice. This does not happen without the @tf.function decorator.
  • Each training epoch increases memory consumption by roughly 3 GB.

What am I doing wrong?

Additional information:

  • TensorFlow version: 2.1.0
  • Python version: 3.7.5
  • TensorFlow is not using the GPU, because I still have driver issues.

Other than that, there is not much more to say, but Stack Overflow forces me to write something.

Best Answer

Regarding your first problem: when you use @tf.function, the function is executed once and traced.
Eager execution is disabled in this context, so each tf.* method only defines a tf.Operation node that produces a tf.Tensor output.
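As a minimal standalone sketch of this behavior (separate from the question's code; it assumes only plain TensorFlow 2.x), a Python print() runs only while the function is being traced, while tf.print() becomes a node in the graph and runs on every call:

import tensorflow as tf

@tf.function
def square(x):
    print("tracing")       # runs only while the Python function is traced
    tf.print("executing")  # becomes a graph op, runs on every call
    return x * x

square(tf.constant(2))  # prints "tracing" then "executing"
square(tf.constant(3))  # prints only "executing" (the traced graph is reused)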

Code Debug 1:

# Train autoencoder
train(train_data=x_train, model=autoenc, epochs=5, batch_size=32)

Note: a shorter dataset was used and the number of epochs was increased to 5 to make debugging easier.

Training function:

@tf.function
def train(train_data, model, epochs=2, batch_size=32):
    for epoch in range(epochs):
        print("Python execution: ", epoch)    # This line only prints during Python (tracing) execution
        tf.print("Graph execution: ", epoch)  # This line only prints during graph execution

        # for i in tqdm(range(0, len(train_data), batch_size)):  ## RAISES ERROR
        for i in range(0, len(train_data), batch_size):
            batch = train_data[i: i + batch_size]
            with tf.GradientTape() as tape:
                loss = compute_loss(model, batch)
            gradients = tape.gradient(loss, model.autoencoder.trainable_variables)
            model.optimizer.apply_gradients(zip(gradients, model.autoencoder.trainable_variables))

[Output screenshot: tf_function_exec_5epoch]

This is the output of the original code when debugged with the Python print() and TensorFlow tf.print() functions.
You can see that the function appears to be "executed" twice, but that first pass only traces the function and builds the Graph; every subsequent call to the function already runs the graph generated by AutoGraph.

Observing this, when optimizing with @tf.function it is better to keep the epoch loop outside the decorated training function.

Code Debug 2:

# Train autoencoder
epochs = 5
print('Loop Training using Dataset (Epochs : {})'.format(epochs))
for epoch in range(epochs):
    train(train_data=x_train, model=autoenc, batch_size=32)

Training function:

@tf.function
def train(train_data, model, batch_size=32):
    print("Python execution")    # This line only prints during Python (tracing) execution
    tf.print("Graph execution")  # This line only prints during graph execution

    # for i in tqdm(range(0, len(train_data), batch_size)):
    for i in range(0, len(train_data), batch_size):
        batch = train_data[i: i + batch_size]
        with tf.GradientTape() as tape:
            loss = compute_loss(model, batch)
        gradients = tape.gradient(loss, model.autoencoder.trainable_variables)
        model.optimizer.apply_gradients(zip(gradients, model.autoencoder.trainable_variables))
    print("#################")  # For debugging purposes

[Output screenshot: tf_outsideloop_epoch5]

This is the output of the modified training flow. You can see that the function still appears to be "executed" twice, and the training for all 5 epochs runs using the AutoGraph that was built. Here, every subsequent call to the train function already executes inside the Graph, which results in shorter execution times thanks to TensorFlow's optimizations.


Regarding your second question, about running out of memory: you can try using a TensorFlow Dataset generator instead of loading your entire dataset into memory.

You can read more about this at the link.
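As a hedged sketch of that suggestion (the pipeline below is an assumption, not code from the answer; x_train, autoenc and compute_loss are the names from the question), the NumPy array can be wrapped in a tf.data.Dataset that shuffles, batches and prefetches, and the batches are then iterated inside the epoch loop:

import tensorflow as tf

batch_size = 32
epochs = 5

# Stream batches instead of slicing the NumPy array by hand
train_ds = (tf.data.Dataset.from_tensor_slices(x_train)
            .shuffle(buffer_size=1024)
            .batch(batch_size)
            .prefetch(tf.data.experimental.AUTOTUNE))

@tf.function
def train_step(model, batch):
    # One optimization step on a single batch
    with tf.GradientTape() as tape:
        loss = compute_loss(model, batch)
    gradients = tape.gradient(loss, model.autoencoder.trainable_variables)
    model.optimizer.apply_gradients(zip(gradients, model.autoencoder.trainable_variables))
    return loss

for epoch in range(epochs):
    for batch in train_ds:
        loss = train_step(autoenc, batch)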

Regarding "python-3.x - Tensorflow 2.1 full memory and tf.function called twice", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/60683254/
