python - TensorFlow: 2-layer feed-forward neural network

Tags: python machine-learning neural-network tensorflow

I am trying to implement a simple fully connected feed-forward neural network in TensorFlow (Python 3). The network has 2 inputs and 1 output, and I am trying to train it to output the XOR of the two inputs. My code is below:

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

inputs = tf.placeholder(tf.float32, shape = [None, 2])
desired_outputs = tf.placeholder(tf.float32, shape = [None, 1])

weights_1 = tf.Variable(tf.zeros([2, 3]))
biases_1 = tf.Variable(tf.zeros([1, 3]))
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)

weights_2 = tf.Variable(tf.zeros([3, 1]))
biases_2 = tf.Variable(tf.zeros([1, 1]))
layer_2_outputs = tf.nn.sigmoid(tf.matmul(layer_1_outputs, weights_2) + biases_2)

error_function = -tf.reduce_sum(desired_outputs * tf.log(layer_2_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

sess.run(tf.initialize_all_variables())

training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
training_outputs = [[0.0], [1.0], [1.0], [0.0]]

for i in range(10000):
    train_step.run(feed_dict = {inputs: np.array(training_inputs), desired_outputs: np.array(training_outputs)})

print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 1.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 1.0]])}))

It seems simple enough, but the print statements at the end show that the network is nowhere near the desired outputs, regardless of the number of training iterations or the learning rate. Can anyone see what I am doing wrong?

Thanks.

Edit: I have also tried the following alternative error function:

error_function = 0.5 * tf.reduce_sum(tf.sub(layer_2_outputs, desired_outputs) * tf.sub(layer_2_outputs, desired_outputs))

This error function is the sum of squared errors. It always causes the network to output values of exactly 0.5, which is another sign of a mistake somewhere in my code.
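For reference, reproducing the first forward pass in plain NumPy (a quick sketch of the same computation, with a hand-rolled sigmoid) already gives exactly 0.5:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# same all-zero parameters as in the TensorFlow code above
x = np.array([[0.0, 1.0]])
hidden = sigmoid(x @ np.zeros((2, 3)) + np.zeros((1, 3)))  # [[0.5 0.5 0.5]]
output = sigmoid(hidden @ np.zeros((3, 1)) + np.zeros((1, 1)))
print(output)  # [[0.5]] before any training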

Edit 2: I have found that my code works fine for AND and OR, but not for XOR. I am very puzzled now.

Best Answer

There are multiple problems in your code. Below I have commented on each line to walk you through the solution.

Note: XOR is not linearly separable, so a network without a hidden layer cannot learn it; you need at least one hidden layer (this answer uses two).
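To see why: a single linear unit decides by the sign of w1*x1 + w2*x2 + b. XOR would require w1 + b > 0 and w2 + b > 0 (for inputs (1,0) and (0,1)), together with b <= 0 and w1 + w2 + b <= 0 (for (0,0) and (1,1)). Adding the first two inequalities gives w1 + w2 + 2b > 0, i.e. w1 + w2 + b > -b >= 0, which contradicts the last one. No single hyperplane separates the four points.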

Note: the lines starting with # [!] are the ones where you went wrong.

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# a batch of inputs with 2 values each
inputs = tf.placeholder(tf.float32, shape=[None, 2])

# a batch of outputs with 1 value each
desired_outputs = tf.placeholder(tf.float32, shape=[None, 1])

# [!] define the number of hidden units in the first layer
HIDDEN_UNITS = 4 

# connect the 2 inputs to HIDDEN_UNITS hidden units
# [!] Initialize weights with random numbers to make the network learn
weights_1 = tf.Variable(tf.truncated_normal([2, HIDDEN_UNITS]))
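# (With all-zero weights, every hidden unit computes the same output and
# receives the same gradient update, so the units stay identical forever
# and the layer can never break symmetry.)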

# [!] The biases are a single value per hidden unit
biases_1 = tf.Variable(tf.zeros([HIDDEN_UNITS]))

# connect 2 inputs to every hidden unit. Add bias
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)

# [!] The XOR problem is not linearly separable
# [!] An MLP (multi-layer perceptron) can learn to separate non-linearly
# separable points (you can think of it as learning hypercurves, not only
# hyperplanes)
# [!] Let's add a new layer and change layer 2 to output more than 1 value

# connect the first hidden layer to 2 hidden units in the second hidden layer
weights_2 = tf.Variable(tf.truncated_normal([HIDDEN_UNITS, 2]))
# [!] Same as above
biases_2 = tf.Variable(tf.zeros([2]))

# connect the hidden units to the second hidden layer
layer_2_outputs = tf.nn.sigmoid(
    tf.matmul(layer_1_outputs, weights_2) + biases_2)

# [!] create the new layer
weights_3 = tf.Variable(tf.truncated_normal([2, 1]))
biases_3 = tf.Variable(tf.zeros([1]))

logits = tf.nn.sigmoid(tf.matmul(layer_2_outputs, weights_3) + biases_3)

# [!] The error function you chose is suited to a multiclass classification task, not to XOR.
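# (The original -tf.reduce_sum(desired_outputs * tf.log(layer_2_outputs))
# contributes nothing, and no gradient, for examples whose target is 0,
# so nothing ever pushes the outputs for (0,0) and (1,1) towards 0.)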
error_function = 0.5 * tf.reduce_sum(tf.sub(logits, desired_outputs) * tf.sub(logits, desired_outputs))

train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

sess.run(tf.initialize_all_variables())

training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]

training_outputs = [[0.0], [1.0], [1.0], [0.0]]

for i in range(20000):
    _, loss = sess.run([train_step, error_function],
                       feed_dict={inputs: np.array(training_inputs),
                                  desired_outputs: np.array(training_outputs)})
    print(loss)

print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 0.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 1.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[1.0, 0.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[1.0, 1.0]])}))

I increased the number of training iterations to make sure that the network converges no matter what the random initialization values are.

The output after 20000 training iterations is:

[[ 0.01759939]]
[[ 0.97418505]]
[[ 0.97734243]]
[[ 0.0310041]]

Looks pretty good.
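A note for readers on a current TensorFlow install: tf.sub, tf.initialize_all_variables and InteractiveSession are TF 0.x-era APIs. A minimal sketch of the same network in the TF 2.x Keras API (assuming TensorFlow 2 is installed; the layer sizes and learning rate here are illustrative choices, not part of the original answer):

import numpy as np
import tensorflow as tf

# same 4-2-1 sigmoid architecture as above, randomly initialized by default
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='sigmoid', input_shape=(2,)),
    tf.keras.layers.Dense(2, activation='sigmoid'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
              loss='mse')

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)
model.fit(X, y, epochs=5000, verbose=0)
print(model.predict(X, verbose=0))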

Regarding python - TensorFlow: 2-layer feed-forward neural network, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38575578/
