Keras - Inverse of K.eval()

Tags: keras, keras-layer

I am trying to write a lambda layer that converts an input tensor into a NumPy array and performs a set of affine transformations on slices of that array. To get the underlying NumPy array of the tensor I call K.eval(). Once I have finished all of the processing on the NumPy array, I need to convert it back into a Keras tensor so that it can be returned. Is there an operation in the Keras backend that I can use to do this? Or should I be updating the original input tensor using a different backend function?

from keras import backend as K
from keras.layers import Lambda
from skimage.transform import AffineTransform, warp


def apply_affine(x, y):
    # Get dimensions of main tensor
    dimens = K.int_shape(x)
    # Get numpy array behind main tensor
    filter_arr = K.eval(x)
    if dimens[0] is not None:
        # Go through batch...
        for i in range(0, dimens[0]):
            # Get the corresponding affine transformation in the form of a numpy array
            affine = K.eval(y)[i, :, :]
            # Create a skimage affine transform from the numpy array
            transform = AffineTransform(matrix=affine)
            # Loop through each filter output from the previous layer of the CNN
            for j in range(0, dimens[1]):
                # Warp each filter output according to the corresponding affine transform
                filter_arr[i, j, :, :] = warp(filter_arr[i, j, :, :], transform)
    # Need to convert filter array back to a keras tensor HERE before return
    return None

transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
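For reference, the nominal inverse of K.eval() is K.variable() (or K.constant()), which wraps a NumPy array back into a backend tensor. A minimal round-trip sketch outside of any layer, where the tensor actually holds a concrete value, might look like the following; note that this does not help inside a Lambda layer, because the layer input is a symbolic tensor that has no value at graph-construction time (which is what the traceback further down shows):

import numpy as np
from keras import backend as K

arr = np.random.rand(4, 3, 3).astype('float32')
tensor = K.variable(arr)   # NumPy array -> backend tensor
back = K.eval(tensor)      # backend tensor -> NumPy array
assert np.allclose(arr, back)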

Edit: Added some context...

AffineTransform: https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_geometric.py#L715

warp: https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_warps.py#L601

I am trying to re-implement the CNN from "Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings". filter_arr is the output of a convolutional layer containing 10 filters. I want to apply the same affine transformation to all of the filter outputs. Each data input has an associated affine transformation. The affine transformation for each data input is passed into the neural network as a tensor and is fed to the lambda layer as its second input, transformInput. The current structure of the network is included below.

from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import (Input, Conv2D, MaxPooling2D, BatchNormalization,
                          Activation, Reshape, Lambda)
from skimage.transform import AffineTransform, warp

# no_filters is defined elsewhere in the full script
twin = Sequential()
twin.add(Conv2D(20, (3, 3), activation=None, input_shape=(28, 28, 1)))

# print(twin.output_shape)
# twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
twin.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# print(twin.output_shape)

twin.add(Conv2D(48, (3, 3), activation=None))

# print(twin.output_shape)
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))

twin.add(Conv2D(64, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)

twin.add(Conv2D(80, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)

twin.add(Conv2D(256, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)

twin.add(Conv2D(no_filters, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)


# Reshape the image outputs to a 1D list so softmax can be used on them
finalDims = twin.layers[-1].output_shape

twin.add(Reshape((finalDims[1], finalDims[2]*finalDims[3])))
twin.add(Activation('softmax'))
twin.add(Reshape(finalDims[1:]))

originalInput = Input(shape=(28, 28, 1))
warpedInput = Input(shape=(28, 28, 1))
transformInput = Input(shape=(3, 3))

twin1 = twin(originalInput)


def apply_affine(x, y):
    # Get dimensions of main tensor
    dimens = K.int_shape(x)
    # Get numpy array behind main tensor
    filter_arr = K.eval(x)
    if dimens[0] is not None:
        # Go through batch...
        for i in range(0, dimens[0]):
            # Get the corresponding affine transformation in the form of a numpy array
            affine = K.eval(y)[i, :, :]
            # Create a skimage affine transform from the numpy array
            transform = AffineTransform(matrix=affine)
            # Loop through each filter output from the previous layer of the CNN
            for j in range(0, dimens[1]):
                # Warp each filter output according to the corresponding affine transform
                filter_arr[i, j, :, :] = warp(filter_arr[i, j, :, :], transform)
    # Need to convert filter array back to a keras tensor
    return None

transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])

twin2 = twin(warpedInput)


siamese = Model([originalInput, warpedInput, transformInput], [transformed_twin, twin2])

Edit: Traceback when using K.variable()

Traceback (most recent call last):
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1039, in _do_call
    return fn(*args)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _run_fn
    status, run_metadata)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
     [[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

 Traceback (most recent call last):
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1039, in _do_call
    return fn(*args)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _run_fn
    status, run_metadata)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
     [[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 96, in <module>
    transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 585, in __call__
    output = self.call(inputs, **kwargs)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\layers\core.py", line 659, in call
    return self.function(inputs, **arguments)
  File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 96, in <lambda>
    transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
  File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 81, in apply_affine
    filter_arr = K.eval(x)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 533, in eval
    return to_dense(x).eval(session=get_session())
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 569, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 3741, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 778, in run
    run_metadata_ptr)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
     [[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'batch_normalization_1/keras_learning_phase', defined at:
  File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 36, in <module>
    twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\models.py", line 466, in add
    output_tensor = layer(self.outputs[0])
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 585, in __call__
    output = self.call(inputs, **kwargs)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\layers\normalization.py", line 190, in call
    training=training)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 2559, in in_train_phase
    training = learning_phase()
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 112, in learning_phase
    name='keras_learning_phase')
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1507, in placeholder
    name=name)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1997, in _placeholder
    name=name)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
     [[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Exception ignored in: <bound method BaseSession.__del__ of <tensorflow.python.client.session.Session object at 0x0000023AB66D9C88>>
Traceback (most recent call last):
  File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 587, in __del__
AttributeError: 'NoneType' object has no attribute 'TF_NewStatus'

Process finished with exit code 1

Best answer

As mentioned in the comments above, it is best to implement the lambda layer function using the Keras backend. Since there are currently no functions in the Keras backend that perform affine transformations, I decided to use a TensorFlow function in my Lambda layer rather than implementing an affine transform function from scratch using the existing Keras backend functions:

def apply_affine(x):
    import tensorflow as tf
    return tf.contrib.image.transform(x[0], x[1])

def apply_affine_output_shape(input_shapes):
    return input_shapes[0]
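Wiring this into the model from the question would then look roughly as follows (a sketch using the tensor names defined above; the output_shape argument of Lambda takes the helper defined in this answer):

transformed_twin = Lambda(apply_affine,
                          output_shape=apply_affine_output_shape)([twin1, transformInput])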

The downside of this approach is that my lambda layer will only work when TensorFlow is used as the backend (not Theano or CNTK). If you wanted an implementation that is compatible with any backend, you could check which backend Keras is currently using and then call the transform function from that backend.
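A minimal sketch of such a dispatch, assuming only the TensorFlow branch is actually implemented (the other backends would need an affine warp built from their own ops):

from keras import backend as K

def apply_affine_any_backend(x):
    # Check which backend Keras is running on and dispatch accordingly.
    if K.backend() == 'tensorflow':
        import tensorflow as tf
        return tf.contrib.image.transform(x[0], x[1])
    # Theano / CNTK would need their own affine-warp implementation here.
    raise NotImplementedError('Affine transform not implemented for backend: %s'
                              % K.backend())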

This question, "Keras - Inverse of K.eval()", is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/44504251/
