python - keras for adding two dense layers

Tags: python tensorflow keras

There are two inputs, x and u, which produce an output y. The relationship between x, u, and y is linear, i.e. y = x wx + u wu. I am trying to compute wx and wu from the data. Here is the code for building/fitting the model.

    n_train = 400
    n_val = 100
    train_u = u[:(n_train+n_val)]
    train_x = x[:(n_train+n_val)]
    train_y = y[:(n_train+n_val)]
    test_u = u[(n_train+n_val):]
    test_x = x[(n_train+n_val):]
    test_y = y[(n_train+n_val):]
    val_u = train_u[-n_val:]
    val_x = train_x[-n_val:]
    val_y = train_y[-n_val:]
    train_u = train_u[:-n_val]
    train_x = train_x[:-n_val]
    train_y = train_y[:-n_val]

    # RNN derived classes want a shape of (batch_size, timesteps, input_dim)
    # batch_size. One sequence is one sample. A batch is comprised of one or more samples.
    # timesteps. One time step is one point of observation in the sample.
    # input_dim. number of observations at a time step.
    # I believe n_train = one_epoch = batch_size * time_steps, features = nx_lags or nu_lags
    # I also think an epoch is one pass through the training data

    n_batches_per_epoch = 8
    n_iterations_per_batch = round(n_train / n_batches_per_epoch)
    batch_size = n_batches_per_epoch
    time_steps = n_iterations_per_batch
    features_x = train_x.shape[1]
    features_u = train_u.shape[1]
    features_y = train_y.shape[1]

    keras_train_u = train_u.values.reshape((batch_size, time_steps, features_u))
    keras_train_x = train_x.values.reshape((batch_size, time_steps, features_x))
    keras_train_y = train_y.reshape((batch_size, time_steps, features_y))
    keras_val_u = val_u.values.reshape((2, time_steps, features_u))
    keras_val_x = val_x.values.reshape((2, time_steps, features_x))
    keras_val_y = val_y.reshape((2, time_steps, features_y))
    keras_test_u = test_u.values.reshape((1, test_u.shape[0], features_u))
    keras_test_x = test_x.values.reshape((1, test_u.shape[0], features_x))
    keras_test_y = test_y.reshape((1, test_u.shape[0], features_y))

    print('u.values.shape: ', u.values.shape)
    # Now try a tensorflow model
    # x_input = keras.Input(shape=(batch_size, time_steps, features_x), name='x_input')
    # u_input = keras.Input(shape=(batch_size, time_steps, features_u), name='u_input')
    x_input = keras.Input(shape=(time_steps, features_x), name='x_input')
    u_input = keras.Input(shape=(time_steps, features_u), name='u_input')
    da = layers.Dense(ny, name='dense_a', use_bias=False)(x_input)
    db = layers.Dense(ny, name='dense_b', use_bias=False)(u_input)
    output = layers.Add()([da, db])

    model = keras.Model(inputs=[x_input, u_input], outputs=output)

    model.compile(optimizer=keras.optimizers.RMSprop(),  # Optimizer
                  # Loss function to minimize
                  loss=keras.losses.SparseCategoricalCrossentropy(),
                  # List of metrics to monitor
                  metrics=[keras.metrics.SparseCategoricalAccuracy()])
    print(model.summary())
    print('keras_train_x.shape: ', keras_train_x.shape)
    print('keras_train_u.shape: ', keras_train_u.shape)
    print('keras_train_y.shape: ', keras_train_y.shape)
    print('keras_val_x.shape: ', keras_val_x.shape)
    print('keras_val_u.shape: ', keras_val_u.shape)
    print('keras_val_y.shape: ', keras_val_y.shape)
    history = model.fit([keras_train_x, keras_train_u], keras_train_y,
                        batch_size=64,
                        epochs=3,
                        # We pass some validation for
                        # monitoring validation loss and metrics
                        # at the end of each epoch
                        validation_data=([keras_val_x, keras_val_u], keras_val_y))

And here is the output, including the error.

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
x_input (InputLayer)            [(None, 50, 7)]      0                                            
__________________________________________________________________________________________________
u_input (InputLayer)            [(None, 50, 7)]      0                                            
__________________________________________________________________________________________________
dense_a (Dense)                 (None, 50, 2)        14          x_input[0][0]                    
__________________________________________________________________________________________________
dense_b (Dense)                 (None, 50, 2)        14          u_input[0][0]                    
__________________________________________________________________________________________________
add (Add)                       (None, 50, 2)        0           dense_a[0][0]                    
                                                                 dense_b[0][0]                    
==================================================================================================
Total params: 28
Trainable params: 28
Non-trainable params: 0
__________________________________________________________________________________________________
None
keras_train_x.shape:  (8, 50, 7)
keras_train_u.shape:  (8, 50, 7)
keras_train_y.shape:  (8, 50, 2)
keras_val_x.shape:  (2, 50, 7)
keras_val_u.shape:  (2, 50, 7)
keras_val_y.shape:  (2, 50, 2)
Train on 8 samples, validate on 2 samples

Epoch 1/3
Traceback (most recent call last):
  File "arx_rnn.py", line 487, in <module>
    main()
  File "/arx_rnn.py", line 481, in main
    rnn_prediction = x.rnn_n_steps(y_measured, u_control, n_to_predict)
  File "arx_rnn.py", line 387, in rnn_n_steps
    validation_data=([keras_val_x, keras_val_u], keras_val_y))
  File "venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 780, in fit
    steps_name='steps_per_epoch')
  File "venv\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 363, in model_iteration
    batch_outs = f(ins_batch)
  File "venv\lib\site-packages\tensorflow\python\keras\backend.py", line 3292, in __call__
    run_metadata=self.run_metadata)
  File "venv\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[2], expected a dimension of 1, got 2
     [[{{node metrics/sparse_categorical_accuracy/Squeeze}}]]

Process finished with exit code 1

What is the error message telling me, and how do I correct it?

Best Answer

Keras classification accuracy metrics expect the outputs and labels to have shape (batch_size, num_classes). The dim[2] in the error message indicates that the output shape is 3D: (None, 50, 2).
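
As a small illustration (hypothetical data, not from the question), SparseCategoricalAccuracy is built around 1D integer labels and 2D predictions:

    # Minimal sketch of the shapes SparseCategoricalAccuracy expects (values are made up).
    import numpy as np
    import tensorflow as tf

    metric = tf.keras.metrics.SparseCategoricalAccuracy()
    y_true = np.array([0, 1, 1])              # integer class ids, shape (batch_size,)
    y_pred = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.4, 0.6]])           # probabilities, shape (batch_size, num_classes)
    metric.update_state(y_true, y_pred)
    print(float(metric.result()))             # 1.0 -- all three predictions are correct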

The simple fix is to ensure, by whatever means, that the output layer yields one prediction per class for each batch sample, i.e. has shape (batch_size, num_classes); this can be done with Reshape or Flatten.
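
For example, a rough sketch of that option applied to the model above (it reuses x_input, u_input, and output from the question's code; the extra Dense head and the num_classes value are assumptions added for illustration):

    # Sketch only: collapse the (None, 50, 2) output to 2-D so it matches what
    # the classification metric expects; the Dense head is illustrative.
    num_classes = 2
    flat = layers.Flatten()(output)                                         # (None, 100)
    class_output = layers.Dense(num_classes, activation='softmax')(flat)    # (None, num_classes)
    model_fixed = keras.Model(inputs=[x_input, u_input], outputs=class_output)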

The better fix is to change the input/output topology according to your design needs, i.e. what exactly are you classifying? Your data dimensions suggest you are trying to classify individual timesteps; in that case, feed the data one timestep at a time: (batch_size, features). Alternatively, feed the timesteps along the batch axis, one batch at a time, so that 1000 timesteps would correspond to (1000, features); but be careful if the model has any stateful layers, since such layers treat each batch-axis entry as an independent sequence.
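
A minimal sketch of the per-timestep option (it reuses variable names from the question's code; the details are assumptions, not a prescribed implementation):

    # Sketch: treat each timestep as its own 2-D sample, so the model output
    # is already (batch_size, ny) and matches a classification metric.
    x_input_2d = keras.Input(shape=(features_x,), name='x_input_2d')
    u_input_2d = keras.Input(shape=(features_u,), name='u_input_2d')
    da = layers.Dense(ny, name='dense_a_2d', use_bias=False)(x_input_2d)
    db = layers.Dense(ny, name='dense_b_2d', use_bias=False)(u_input_2d)
    output_2d = layers.Add()([da, db])               # shape (None, ny) -- 2-D
    model_2d = keras.Model([x_input_2d, u_input_2d], output_2d)

    # merge the batch and timestep axes so every timestep becomes one sample
    flat_train_x = keras_train_x.reshape(-1, features_x)
    flat_train_u = keras_train_u.reshape(-1, features_u)
    flat_train_y = keras_train_y.reshape(-1, features_y)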

To classify sequences with timesteps > 1, again make sure the layer dataflow ultimately produces a 2D output.
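
For instance, one way to do this (the layer choices here are illustrative, not something the answer prescribes) is to collapse the timestep axis with a pooling layer before the final classification layer:

    # Sketch: classify a whole sequence of timesteps; GlobalAveragePooling1D
    # collapses the timestep axis so the final output is (batch_size, num_classes).
    num_classes = 2
    seq_in = keras.Input(shape=(time_steps, features_x))
    h = layers.Dense(8, activation='relu')(seq_in)        # (None, time_steps, 8)
    h = layers.GlobalAveragePooling1D()(h)                # (None, 8)
    seq_out = layers.Dense(num_classes, activation='softmax')(h)
    seq_model = keras.Model(seq_in, seq_out)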

Regarding "python - keras for adding two dense layers", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57494484/
