I am trying to feed the KTH Action dataset to a CNN, and I am having trouble reshaping the data. I created an array of shape (99, 75, 120, 160), dtype=uint8, i.e. 99 videos belonging to one class, each with 75 frames, and each frame of size 120x160.
model = Sequential()
model.add(TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same'),
                          input_shape=()))
###need to reshape data in input_shape
Should I add a Dense layer first?
Here is my code:
import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, UpSampling2D, Flatten,
                          Reshape, LSTM, TimeDistributed)

model = Sequential()
model.add(TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same'),
                          input_shape=(75, 120, 160)))
###need to reshape data in input_shape
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu', padding='same')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu', padding='same')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(units=64, return_sequences=True))
model.add(TimeDistributed(Reshape((8, 8, 1))))
model.add(TimeDistributed(UpSampling2D((2,2))))
model.add(TimeDistributed(Conv2D(16, (3,3), activation='relu', padding='same')))
model.add(TimeDistributed(UpSampling2D((2,2))))
model.add(TimeDistributed(Conv2D(32, (3,3), activation='relu', padding='same')))
model.add(TimeDistributed(UpSampling2D((2,2))))
model.add(TimeDistributed(Conv2D(64, (3,3), activation='relu', padding='same')))
model.add(TimeDistributed(UpSampling2D((2,2))))
model.add(TimeDistributed(Conv2D(1, (3,3), padding='same')))
model.compile(optimizer='adam', loss='mse')
data = np.load(r"C:\Users\shj_k\Desktop\Project\handclapping.npy")
print (data.shape)
(x_train,x_test) = train_test_split(data)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print (x_train.shape)
print (x_test.shape)
model.fit(x_train, x_train,
          epochs=100,
          batch_size=1,
          shuffle=False,
          validation_data=(x_test, x_test))
The variables are: x_test (25, 75, 120, 160), dtype=float32; x_train (74, 75, 120, 160), dtype=float32.
The full error from the comments is:
runfile('C:/Users/shj_k/Desktop/Project/cnn_lstm.py', wdir='C:/Users/shj_k/Desktop/Project')
(99, 75, 120, 160)
(74, 75, 120, 160)
(25, 75, 120, 160)
Traceback (most recent call last):
  File "", line 1, in
    runfile('C:/Users/shj_k/Desktop/Project/cnn_lstm.py', wdir='C:/Users/shj_k/Desktop/Project')
  File "C:\Users\shj_k\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)
  File "C:\Users\shj_k\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "C:/Users/shj_k/Desktop/Project/cnn_lstm.py", line 63, in
    validation_data=(x_test, x_test))
  File "C:\Users\shj_k\Anaconda3\lib\site-packages\keras\engine\training.py", line 952, in fit
    batch_size=batch_size)
  File "C:\Users\shj_k\Anaconda3\lib\site-packages\keras\engine\training.py", line 751, in _standardize_user_data
    exception_prefix='input')
  File "C:\Users\shj_k\Anaconda3\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected time_distributed_403_input to have 5 dimensions, but got array with shape (74, 75, 120, 160)
Thank you for any replies.
Best Answer
A few things:
The TimeDistributed layer in Keras needs a time dimension, so for video image processing this could be 75 here (the frames).
It also expects images to be sent in shape (120, 160, 3). So the TimeDistributed layer's input_shape should be (75, 120, 160, 3). The 3 stands for the RGB channels; if you have grayscale images, the last dimension should be 1 instead.
input_shape always ignores the "rows" dimension of your examples, which in your case is 99.
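Given the (99, 75, 120, 160) array described above, a minimal sketch of adding the missing channel dimension (a zero array stands in for the real data here):

```python
import numpy as np

# Stand-in for the loaded dataset: 99 videos, 75 frames, 120x160 grayscale pixels.
data = np.zeros((99, 75, 120, 160), dtype=np.uint8)

# TimeDistributed(Conv2D(...)) expects 5D input:
# (samples, frames, height, width, channels).
# For grayscale frames, append a channel axis of size 1.
data = np.expand_dims(data, axis=-1)
print(data.shape)  # (99, 75, 120, 160, 1)
```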
To check the output shape produced by each layer of the model, add model.summary() after compiling.
See: https://www.tensorflow.org/api_docs/python/tf/keras/layers/TimeDistributed
You can use keras.preprocessing.image to convert an image into a numpy array of shape (X, Y, 3).
from keras.preprocessing import image
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_file_path, target_size=(120, 160))
# convert PIL.Image.Image type to 3D tensor with shape (120, 160, 3)
x = image.img_to_array(img)
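If the frames are already loaded as RGB arrays and you want the single grayscale channel the model expects, a small numpy sketch of a luminance conversion (the all-ones array is just a placeholder frame):

```python
import numpy as np

# Placeholder RGB frame of shape (120, 160, 3); in practice this would come
# from image.img_to_array(img) as shown above.
rgb = np.ones((120, 160, 3), dtype=np.float32)

# ITU-R BT.601 luma weights collapse RGB to one grayscale value per pixel.
weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
gray = rgb @ weights               # shape (120, 160)
gray = gray[..., np.newaxis]       # shape (120, 160, 1), ready for a grayscale model
print(gray.shape)
```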
Update: The reason you have to make all images square, (128, 128, 1), seems to be that in model.fit() the training examples (x_train) and the labels (normally y_train) are the same set. If you look at the model summary below, after the Flatten layer everything becomes square, so the labels are expected to be square as well. That makes sense: a prediction with this model turns a (120, 160, 1) image into a (128, 128, 1) shape. So changing the model training to the following code should work:
import numpy as np

x_train = np.random.random((90, 5, 120, 160, 1))  # training data
y_train = np.random.random((90, 5, 128, 128, 1))  # labels
model.fit(x_train, y_train)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
time_distributed_1 (TimeDist (None, 5, 120, 160, 64) 320
_________________________________________________________________
time_distributed_2 (TimeDist (None, 5, 60, 80, 64) 0
_________________________________________________________________
time_distributed_3 (TimeDist (None, 5, 60, 80, 32) 18464
_________________________________________________________________
time_distributed_4 (TimeDist (None, 5, 30, 40, 32) 0
_________________________________________________________________
time_distributed_5 (TimeDist (None, 5, 30, 40, 16) 4624
_________________________________________________________________
time_distributed_6 (TimeDist (None, 5, 15, 20, 16) 0
_________________________________________________________________
time_distributed_7 (TimeDist (None, 5, 4800) 0
_________________________________________________________________
lstm_1 (LSTM) (None, 5, 64) 1245440
_________________________________________________________________
time_distributed_8 (TimeDist (None, 5, 8, 8, 1) 0
_________________________________________________________________
time_distributed_9 (TimeDist (None, 5, 16, 16, 1) 0
_________________________________________________________________
time_distributed_10 (TimeDis (None, 5, 16, 16, 16) 160
_________________________________________________________________
time_distributed_11 (TimeDis (None, 5, 32, 32, 16) 0
_________________________________________________________________
time_distributed_12 (TimeDis (None, 5, 32, 32, 32) 4640
_________________________________________________________________
time_distributed_13 (TimeDis (None, 5, 64, 64, 32) 0
_________________________________________________________________
time_distributed_14 (TimeDis (None, 5, 64, 64, 64) 18496
_________________________________________________________________
time_distributed_15 (TimeDis (None, 5, 128, 128, 64) 0
_________________________________________________________________
time_distributed_16 (TimeDis (None, 5, 128, 128, 1) 577
=================================================================
Total params: 1,292,721
Trainable params: 1,292,721
Non-trainable params: 0
Update 2: To make it work with non-square images without changing y, set LSTM(300) and Reshape((15, 20, 1)), and remove one of the Conv2D + UpSampling2D layer pairs. Then you can use pictures of shape (120, 160) even in the autoencoder. The trick is to look at the model summary and make sure that after the LSTM you start from the right shape, so that after adding all the other layers the final result has shape (120, 160).
model = Sequential()
model.add(TimeDistributed(Conv2D(64, (2, 2), activation="relu", padding="same"),
                          input_shape=(5, 120, 160, 1)))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu', padding='same')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu', padding='same')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(units=300, return_sequences=True))
model.add(TimeDistributed(Reshape((15, 20, 1))))
model.add(TimeDistributed(UpSampling2D((2, 2))))
model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu', padding='same')))
model.add(TimeDistributed(UpSampling2D((2, 2))))
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu', padding='same')))
model.add(TimeDistributed(UpSampling2D((2, 2))))
model.add(TimeDistributed(Conv2D(1, (3, 3), padding='same')))
model.compile(optimizer='adam', loss='mse')
model.summary()
x_train = np.random.random((90, 5, 120, 160, 1))
y_train = np.random.random((90, 5, 120, 160, 1))
model.fit(x_train, y_train)
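A quick sanity check of the shape arithmetic behind this trick, in plain Python (assuming the three pooling stages and three upsampling stages in the model above):

```python
# Encoder: three MaxPooling2D((2, 2)) layers halve each spatial dimension.
h, w = 120, 160
for _ in range(3):
    h, w = h // 2, w // 2
assert (h, w) == (15, 20)      # matches Reshape((15, 20, 1)) after the LSTM
assert h * w * 1 == 300        # so the LSTM must output exactly 300 units

# Decoder: three UpSampling2D((2, 2)) layers double each dimension back.
for _ in range(3):
    h, w = h * 2, w * 2
print((h, w))                  # (120, 160), matching the input frames
```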
Regarding python - how to reshape a 3-channel dataset for input to a neural network, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55312701/