java - Neural network returns the same output for every input

Tags: java neural-network artificial-intelligence classification

As part of a project, I have written a simple artificial neural network in Java. When I start training on the data (using a training set I gathered), the error count in each epoch quickly levels off (at around 30% accuracy) and then stops improving. When I test the ANN, the outputs for every input are exactly identical.

I am trying to output a number between 0 and 1 (0 classifies a stock as a faller, 1 classifies it as a riser; 0.4-0.6 should indicate a stable stock).
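For illustration, that encoding could be read back from the network's single output like this (a hypothetical helper, not part of the code shown below):

//Hypothetical helper illustrating the intended 0.4 / 0.6 thresholds
static String classify(double output) {
    if (output < 0.4) return "FALLING";
    if (output > 0.6) return "RISING";
    return "STABLE";
}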

When the same training data is loaded into RapidMiner Studios, it builds a proper ANN with much higher (70%+) accuracy, so I know the data set is fine. There must be something wrong with my ANN logic.

Below is the code for running the network and tuning the weights. Any and all help is appreciated!

public double[] Run(double[] inputs) {
    //INPUTS
    for (int i = 0; i < inputNeurons.length; i++) {
        inputNeurons[i] = inputs[i];
    }

    for (int i = 0; i < hiddenNeurons.length; i++) {
        hiddenNeurons[i] = 0;
    } //RESET THE HIDDEN NEURONS

    for (int e = 0; e < inputNeurons.length; e++) {
        for (int i = 0; i < hiddenNeurons.length; i++) {
            //Looping through each input neuron connected to each hidden neuron

            hiddenNeurons[i] += inputNeurons[e] * inputWeights[(e * hiddenNeurons.length) + i];
            //Weighted summation - accumulate each (input * connection weight) into the hidden neuron
            //The higher a connection's weight, the more "important" that input is in the decision
        }
    }

    for (int j = 0; j < hiddenNeurons.length; j++) {
        hiddenNeurons[j] = 1 / (1 + Math.exp(-hiddenNeurons[j]));
        //sigmoid function transforms the output into a real number between 0 and 1
    }

    //HIDDEN
    for (int i = 0; i < outputNeurons.length; i++) {
        outputNeurons[i] = 0;
    } //RESET THE OUTPUT NEURONS

    for (int e = 0; e < hiddenNeurons.length; e++) {
        for (int i = 0; i < outputNeurons.length; i++) {
            //Looping through each hidden neuron connected to each output neuron

            outputNeurons[i] += hiddenNeurons[e] * hiddenWeights[(e * outputNeurons.length) + i];
            //Weighted summation as above
        }
    }

    for (int j = 0; j < outputNeurons.length; j++) {
        outputNeurons[j] = 1 / (1 + Math.exp(-outputNeurons[j])); //sigmoid function as above
    }

    double[] outputs = new double[outputNeurons.length];
    for (int j = 0; j < outputNeurons.length; j++) {
        //Places all output neuron values into an array
        outputs[j] = outputNeurons[j];
    }
    return outputs;
}

public double[] CalculateErrors(double[] targetValues) {
    //Compares the given values to the actual values
    for (int k = 0; k < outputErrors.length; k++) {
        outputErrors[k] = targetValues[k] - outputNeurons[k];
    }
    return outputErrors;
}

public void tuneWeights() //Back Propagation
{
    // Start from the end - From output to hidden
    for (int p = 0; p < this.hiddenNeurons.length; p++)     //For all Hidden Neurons
    {
        for (int q = 0; q < this.outputNeurons.length; q++)  //For all Output Neurons
        {
            double delta = this.outputNeurons[q] * (1 - this.outputNeurons[q]) * this.outputErrors[q];
            //DELTA is the error for the output neuron q
            this.hiddenWeights[(p * outputNeurons.length) + q] += this.learningRate * delta * this.hiddenNeurons[p];
            /*Adjust this particular weight in proportion to the error:
             *the larger the error, the larger the adjustment made to the weight
             */
        }
    }

    // From hidden to inps -- Same as above
    for (int i = 0; i < this.inputNeurons.length; i++)       //For all Input Neurons
    {
        for (int j = 0; j < this.hiddenNeurons.length; j++)  //For all Hidden Neurons
        {
            double delta = this.hiddenNeurons[j] * (1 - this.hiddenNeurons[j]);
            double x = 0;       //We do not have output errors here so we must use extra data from Output Neurons
            for (int k = 0; k < this.outputNeurons.length; k++) {
                double outputDelta = this.outputNeurons[k] * (1 - this.outputNeurons[k]) * this.outputErrors[k];
                //We calculate the output delta again
                x = x + outputDelta * this.hiddenWeights[(j * outputNeurons.length) + k];
                //We then calculate the error based on the hidden weights (x is used to add the error values of all weights)
                delta = delta * x;
            }
            this.inputWeights[(i * hiddenNeurons.length) + j] += this.learningRate * delta * this.inputNeurons[i];
            //Adjust weight like above
        }
    }
}

Best Answer

After our long conversation, I think you can find the answer to your problem in the following points:

  1. Bias really does matter. In fact, one of the most popular questions about neural networks is about bias :) : Role of Bias in Neural Networks. A minimal sketch of a forward pass with bias terms follows this list.
  2. You should take care of your learning process. It is best to track accuracy against a validation set during training and to use an appropriate learning rate. I also suggest using a simpler data set for which you know a true solution can easily be found (e.g. a triangle or a square, with 4-5 hidden units) while you debug. A sketch of such a monitoring loop follows the link to the playground below, which I also recommend experimenting with:

http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.36368&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification
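Regarding point 1: the Run method above starts every neuron's sum at 0 and has no bias term. A minimal sketch of what the forward pass could look like with one bias weight per neuron is below. It assumes the same flattened weight layout as the question (inputWeights[(e * hiddenCount) + i]); the hiddenBiases and outputBiases arrays are hypothetical additions, not part of the original code.

//Minimal sketch, assuming the question's flattened weight layout.
//The bias arrays are hypothetical: one entry per hidden/output neuron.
public class BiasForwardPassSketch {

    static double[] run(double[] inputs, int hiddenCount, int outputCount,
                        double[] inputWeights, double[] hiddenWeights,
                        double[] hiddenBiases, double[] outputBiases) {
        double[] hidden = new double[hiddenCount];
        for (int i = 0; i < hiddenCount; i++) {
            double sum = hiddenBiases[i];               //start from the bias instead of 0
            for (int e = 0; e < inputs.length; e++) {
                sum += inputs[e] * inputWeights[(e * hiddenCount) + i];
            }
            hidden[i] = sigmoid(sum);
        }

        double[] outputs = new double[outputCount];
        for (int i = 0; i < outputCount; i++) {
            double sum = outputBiases[i];               //bias for each output neuron
            for (int e = 0; e < hiddenCount; e++) {
                sum += hidden[e] * hiddenWeights[(e * outputCount) + i];
            }
            outputs[i] = sigmoid(sum);
        }
        return outputs;
    }

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
}

During back propagation each bias is updated like an ordinary weight whose input is fixed at 1, e.g. outputBiases[q] += learningRate * delta with the same delta already computed for that neuron.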
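Regarding point 2: a sketch of an epoch loop that tracks accuracy on a held-out validation set, so you can see whether the network is actually learning and whether the learning rate is reasonable. "NeuralNetwork" stands for the class in the question (Run, CalculateErrors, tuneWeights); the data arrays and the label() helper are illustrative.

//Sketch only: monitors training and validation accuracy each epoch.
public class TrainingLoopSketch {

    static void train(NeuralNetwork net,
                      double[][] trainX, double[][] trainY,
                      double[][] valX, double[][] valY, int maxEpochs) {
        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            int trainHits = 0;
            for (int n = 0; n < trainX.length; n++) {
                double[] out = net.Run(trainX[n]);
                net.CalculateErrors(trainY[n]);
                net.tuneWeights();
                if (label(out[0]) == label(trainY[n][0])) trainHits++;
            }

            int valHits = 0;
            for (int n = 0; n < valX.length; n++) {
                double[] out = net.Run(valX[n]); //no weight update on validation data
                if (label(out[0]) == label(valY[n][0])) valHits++;
            }

            System.out.printf("epoch %d: train acc %.2f, val acc %.2f%n",
                    epoch,
                    (double) trainHits / trainX.length,
                    (double) valHits / valX.length);
        }
    }

    //Applies the 0.4 / 0.6 thresholds from the question: -1 = falling, 0 = stable, 1 = rising
    static int label(double v) {
        if (v < 0.4) return -1;
        if (v > 0.6) return 1;
        return 0;
    }
}

If both curves stay flat at around 30%, that points back to point 1 (the missing bias) or to a learning rate that is too high or too low.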

Regarding "java - Neural network returns the same output for every input", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36747388/
