python - tensorflow: output layer with a single neuron, expected float output in [0.0, 1.0]

Tags: python tensorflow

I am trying to build a neural network whose output layer consists of a single neuron. My input data consist of 500 floats per sample, assigned to a "0" or a "1". The final network should output a "probability" value in [0.0, 1.0]. Since I am new to TensorFlow, I took the MNIST example from Aurélien Géron's excellent book and adapted it to my needs. However, I am stuck on a few points. For one, he uses a "softmax" function at one point, which cannot be right for my example. His evaluation function ("tf.nn.in_top_k") cannot be correct either. Finally, I wonder whether the output layer needs an activation function ("sigmoid"?). Thank you very much for your feedback!

Here is my code:

import tensorflow as tf
import numpy as np

n_inputs = 500

n_hidden1 = 400
n_hidden2 = 300

n_outputs = 1


# import training, test and validation data...
X_train,y_train = <import my training data as "np.array" objects>
X_valid,y_valid = <import my validation data as "np.array" objects>
X_test,y_test   = <import my testing data as "np.array" objects>


seed    = 42
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.float32, shape=(None), name="y")

def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.get_shape()[1])
        stddev = 2 / np.sqrt(n_inputs)
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
        W = tf.Variable(init, name="kernel")
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        if activation is not None:
            return activation(Z)
        else:
            return Z

with tf.name_scope("dnn"):
    hidden1 = neuron_layer(X, n_hidden1, name="hidden1", activation=tf.nn.relu)
    hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2", activation=tf.nn.relu)
    # do I need an activation function here?
    logits = neuron_layer(hidden2, n_outputs, name="outputs")


with tf.name_scope("loss"):
    # this is probably not correct - I should most likely use something like "sigmoid"... but how exactly do I do that?
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

learning_rate = 0.01

with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)


with tf.name_scope("eval"):
    # same thing here. what is the right function to be used here?
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))


init = tf.global_variables_initializer()
saver = tf.train.Saver()

n_epochs = 100
batch_size = 50

def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val)

    save_path = saver.save(sess, "./my_model_final.ckpt")

Additional information:

Thank you very much for the reply. Introducing the "sigmoid" function was a step in the right direction. However, a few problems remain:

1.) When training the neural network, the accuracy is not very good:

(95, 'Batch accuracy:', 0.54, 'Val accuracy:', 0.558)
(96, 'Batch accuracy:', 0.52, 'Val accuracy:', 0.558)
(97, 'Batch accuracy:', 0.56, 'Val accuracy:', 0.558)
(98, 'Batch accuracy:', 0.58, 'Val accuracy:', 0.558)
(99, 'Batch accuracy:', 0.52, 'Val accuracy:', 0.558)

2.) The values returned when testing the trained model seem too low. They all lie in [0.0, 0.3]:

('Predicted classes:', array([[0.2000685 ],[0.17176622],[0.14039296],[0.15600625],[0.15928227],[0.15543781],[0.1348885 ],[0.17185831],[0.170376],[0.17732298],[0.17864114],[0.16391528],[0.18579942],[0.12997991],[0.13886571],[0.24408364],[0.17308617],[0.16365634],[0.1782803 ],[0.11332873]], dtype=float32))
('Actual classes:   ', array([0., 0., 0., 1., 0., 0., 1., 1., 1., 1., 1., 1., 0., 0., 1., 1., 1., 1., 0., 0.]))

I suspect my evaluation function is still not correct:

with tf.name_scope("eval"):
    predicted = tf.nn.sigmoid(logits)
    correct_pred = tf.equal(tf.round(predicted), y)
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

What should the correct evaluation function look like?

Once again, thank you very much for your help!

Best Answer

  1. The logits should not be activated (no activation function on the output layer).
  2. The loss should be a sigmoid cross-entropy that can handle logits; tf.nn.sigmoid_cross_entropy_with_logits is exactly that.
  3. You can compute accuracy by checking whether the final logit is below or above zero: in the first case classify as 0, in the second as 1. I am not sure whether tf has a built-in function for this. A sketch of all three points follows after this list.
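A minimal sketch of these three points, assuming it replaces the question's "loss" and "eval" blocks and reuses the existing y and logits tensors. The tf.squeeze call is an assumption about shapes: logits comes out of the network as (batch_size, 1) while the y placeholder is fed (batch_size,) labels, so the extra dimension is removed before comparing:

# Flatten (batch_size, 1) logits to (batch_size,) so they line up with y.
logits_flat = tf.squeeze(logits, axis=1)

with tf.name_scope("loss"):
    # Point 2: sigmoid cross-entropy that operates directly on the logits.
    xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits_flat)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("eval"):
    # Point 3: logit > 0 is equivalent to sigmoid(logit) > 0.5, i.e. class 1.
    predicted_class = tf.cast(logits_flat > 0, tf.float32)
    correct = tf.equal(predicted_class, y)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

# The [0.0, 1.0] "probability" the question asks for, at inference time:
predicted_proba = tf.nn.sigmoid(logits_flat)

Note that the eval block from the question's edit thresholds at the same point (tf.round of a sigmoid flips at a logit of 0); a likely reason it still reports odd accuracies is that the unsqueezed (batch_size, 1) tensor broadcasts against the (batch_size,) labels inside tf.equal, producing a (batch_size, batch_size) comparison matrix instead of an element-wise one.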

Regarding python - tensorflow: output layer with a single neuron, expected float output in [0.0, 1.0], we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54146761/
