My question
I am building a convolutional neural network with Keras, and I came across the following code:
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(10*10*256, use_bias=False, input_shape=(100,)))
I am curious - what exactly is happening here mathematically?
My best guess
My guess is that for an input of size [100, N], the network will be evaluated N times, once for each training example. The Dense layer created by
layers.Dense
contains (10*10*256) * (100)
parameters that will be updated during backpropagation.
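That parameter count can be checked with simple arithmetic (a sketch; since use_bias=False there is no bias vector, so only the kernel weights matrix contributes):

```python
# Parameter count for Dense(10*10*256, use_bias=False, input_shape=(100,))
units = 10 * 10 * 256       # 25,600 output units
input_dim = 100             # from input_shape=(100,)
use_bias = False            # as in the question's code

kernel_params = input_dim * units            # kernel has shape (100, 25600)
bias_params = units if use_bias else 0       # no bias vector here
total = kernel_params + bias_params
print(total)  # 2560000
```

With TensorFlow available, `model.count_params()` should report the same number.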
Best answer
Dense implements the operation: output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
Note: If the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel.
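As a sanity check, the operation above can be sketched in plain NumPy (toy shapes chosen purely for illustration; passing no activation corresponds to the "linear" case a(x) = x):

```python
import numpy as np

def dense(x, kernel, bias=None, activation=None):
    """Minimal sketch of Dense: output = activation(dot(input, kernel) + bias)."""
    out = x @ kernel              # dot(input, kernel): (batch, input_dim) @ (input_dim, units)
    if bias is not None:          # bias is only added when use_bias=True
        out = out + bias
    if activation is not None:    # omitted activation means "linear": a(x) = x
        out = activation(out)
    return out

# Toy shapes: batch=2, input_dim=3, units=4 (illustrative, not from the question)
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))
kernel = rng.standard_normal((3, 4))
y = dense(x, kernel)
print(y.shape)  # (2, 4)
```

Each of the 4 output units is a weighted sum over the 3 input features, computed independently for every row of the batch.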
Example:
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
Arguments:
> units: Positive integer, dimensionality of the output space.
> activation: Activation function to use. If you don't specify anything,
> no activation is applied (ie. "linear" activation: a(x) = x).
> use_bias: Boolean, whether the layer uses a bias vector.
> kernel_initializer: Initializer for the kernel weights matrix.
> bias_initializer: Initializer for the bias vector.
> kernel_regularizer: Regularizer function applied to the kernel weights matrix.
> bias_regularizer: Regularizer function applied to the bias vector.
> activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
> kernel_constraint: Constraint function applied to the kernel weights matrix.
> bias_constraint: Constraint function applied to the bias vector.
Input shape:
N-D tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
Output shape:
N-D tensor with shape: (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
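For the N-D case described here (the layer acting along the last axis), the shape bookkeeping can be sketched with NumPy's matmul, which broadcasts over the leading axes in the same way (toy shapes are illustrative):

```python
import numpy as np

# Rank-3 case: (batch_size, steps, input_dim) -> (batch_size, steps, units)
batch_size, steps, input_dim, units = 2, 5, 16, 32
x = np.ones((batch_size, steps, input_dim))
kernel = np.ones((input_dim, units))

y = x @ kernel  # matmul over the last axis, broadcast over (batch_size, steps)
print(y.shape)  # (2, 5, 32)
```

The same kernel of shape (input_dim, units) is applied at every position along the leading axes, which is why only the last dimension changes.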
Regarding "keras - what exactly does tf.keras.layers.Dense do?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60783216/