I am trying to add a simple TensorFlow math function to the end of a Keras model, but it doesn't work. Here is my contrived but minimal working code using the native Keras Add() layer:
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as ss
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, Add
import tensorflow as tf
kernel_size = 64
epochs = 1000
## Data generation for training
x_train = np.random.randn(1024, 512)
t = np.linspace(0, x_train.shape[1], x_train.shape[1], endpoint=False)
sine = np.sin(2*np.pi*t/32)
cosine = np.cos(2*np.pi*t/32)
x_I = np.multiply(x_train, cosine)
x_Q = np.multiply(x_train, sine)
b_I = ss.tukey(kernel_size)
b_Q = ss.tukey(kernel_size)
x_I_filt = np.array([np.convolve(b_I, x_I_i, mode='valid') for x_I_i in x_I])
x_Q_filt = np.array([np.convolve(b_Q, x_Q_i, mode='valid') for x_Q_i in x_Q])
y_train = x_Q_filt + x_I_filt
x_I = np.expand_dims(x_I, axis=2)
x_Q = np.expand_dims(x_Q, axis=2)
y_train = np.expand_dims(y_train, axis=2)
## Keras model
input_I = Input(shape=(x_I.shape[1], 1))
input_Q = Input(shape=(x_Q.shape[1], 1))
conv_I_1D = Conv1D(filters=1, kernel_size=kernel_size, activation=None, padding='valid', use_bias=False)(input_I)
conv_Q_1D = Conv1D(filters=1, kernel_size=kernel_size, activation=None, padding='valid', use_bias=False)(input_Q)
out_I_Q = Add()([conv_I_1D, conv_Q_1D])
# out_I_Q = tf.math.add(conv_I_1D, conv_I_1D)
model_1D = Model([input_I, input_Q], out_I_Q)
model_1D.compile(optimizer='sgd', loss='mean_squared_error')
history_1D = model_1D.fit([x_I, x_Q], y_train, epochs=epochs, verbose=0)
After about 40 epochs I recover an almost perfect copy of the original filter kernel:
plt.semilogy(history_1D.history['loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.show()
But if I replace the Keras Add() layer with the equivalent TensorFlow function:
out_I_Q = tf.math.add(conv_I_1D, conv_I_1D)
I get this disappointing loss curve instead:
I suspect that the TensorFlow math function does not become part of the Keras model in this configuration. Changing the optimizer type did not help at all. I am using TensorFlow 2.0 and Keras 2.2.5.
Best Answer
You should use a tf.keras.layers.Lambda layer in combination with tf.math.add(), like this:
def add_func(inputs):
    return tf.math.add(inputs[0], inputs[1])

out_I_Q = Lambda(add_func)([conv_I_1D, conv_Q_1D])
or
out_I_Q = Lambda(lambda x: tf.math.add(x[0], x[1]))([conv_I_1D, conv_Q_1D])
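As an aside (not part of the original answer): if the Lambda closure is awkward to serialize, the same wrapping can be done with a small custom layer subclass. A minimal sketch, with the hypothetical name AddPair:

```python
import tensorflow as tf

class AddPair(tf.keras.layers.Layer):
    """Hypothetical custom layer: element-wise sum of two input tensors."""
    def call(self, inputs):
        # inputs is a list/tuple of two tensors with broadcast-compatible shapes
        return tf.math.add(inputs[0], inputs[1])
```

Usage would then be out_I_Q = AddPair()([conv_I_1D, conv_Q_1D]), and the layer participates in the model graph just like Add() does.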
From the documentation:
The Lambda layer exists so that arbitrary TensorFlow functions can be used when constructing Sequential and Functional API models.
Full example:
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as ss
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, Add, Lambda
import tensorflow as tf
kernel_size = 64
epochs = 100
## Data generation for training
x_train = np.random.randn(1024, 512)
t = np.linspace(0, x_train.shape[1], x_train.shape[1], endpoint=False)
sine = np.sin(2*np.pi*t/32)
cosine = np.cos(2*np.pi*t/32)
x_I = np.multiply(x_train, cosine)
x_Q = np.multiply(x_train, sine)
b_I = ss.tukey(kernel_size)
b_Q = ss.tukey(kernel_size)
x_I_filt = np.array([np.convolve(b_I, x_I_i, mode='valid') for x_I_i in x_I])
x_Q_filt = np.array([np.convolve(b_Q, x_Q_i, mode='valid') for x_Q_i in x_Q])
y_train = x_Q_filt + x_I_filt
x_I = np.expand_dims(x_I, axis=2)
x_Q = np.expand_dims(x_Q, axis=2)
y_train = np.expand_dims(y_train, axis=2)
## Keras model
input_I = Input(shape=(x_I.shape[1], 1))
input_Q = Input(shape=(x_Q.shape[1], 1))
conv_I_1D = Conv1D(filters=1, kernel_size=kernel_size, activation=None, padding='valid', use_bias=False)(input_I)
conv_Q_1D = Conv1D(filters=1, kernel_size=kernel_size, activation=None, padding='valid', use_bias=False)(input_Q)
out_I_Q = Lambda(lambda x: tf.math.add(x[0], x[1]))([conv_I_1D, conv_Q_1D])
model_1D = Model([input_I, input_Q], out_I_Q)
model_1D.compile(optimizer='sgd', loss='mean_squared_error')
history_1D = model_1D.fit([x_I, x_Q], y_train, epochs=epochs, verbose=0)
plt.semilogy(history_1D.history['loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.show()
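As a quick sanity check (my addition, not from the original answer), the Lambda-wrapped tf.math.add produces exactly the same outputs as the built-in Add() layer on a toy model:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Add, Lambda

a = Input(shape=(8,))
b = Input(shape=(8,))

# the same graph built two ways: built-in Add() vs Lambda + tf.math.add
m_add = Model([a, b], Add()([a, b]))
m_lam = Model([a, b], Lambda(lambda x: tf.math.add(x[0], x[1]))([a, b]))

x = np.random.randn(4, 8).astype(np.float32)
y = np.random.randn(4, 8).astype(np.float32)

# both models compute the element-wise sum x + y
same = np.allclose(m_add.predict([x, y]), m_lam.predict([x, y]))
```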
Output:
Original question on Stack Overflow: "python - How to add a tensorflow math function to the end of a Keras model?", https://stackoverflow.com/questions/59469404/