I have a training set containing 89 images of 6 different dominoes plus a "control" group of a baby, divided into 7 classes in total, so the output y has 7 classes. Each image is 100x100 and black and white, giving an input X of 10,000 features per image.
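For context, each 100x100 grayscale image presumably gets flattened into one row of X. The loader is not shown in the question, so the following is only a hypothetical sketch (the filename and index are made up):

% Hypothetical sketch of how one image could become a row of X;
% 'domino_01.png' is a made-up filename, i a running sample index.
img = double(imread('domino_01.png')); % assumed to already be grayscale
X(i, :) = reshape(img, 1, 100*100);    % 100x100 pixels -> one 1x10000 row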
I am using the one-hidden-layer neural network code from Andrew Ng's Coursera class, which uses Octave, with slight modifications.
I first tried it with 3 different classes (two dominoes, one baby) and got close to 100% accuracy. I have now increased it to 7 different image classes. Accuracy has dropped sharply; almost nothing is classified correctly except the baby photos (which look very different from the dominoes).
I have tried 10 different lambda values, 10 different neuron counts between 5 and 20, and various numbers of iterations, comparing each against cost and accuracy to find the best fit.
I also tried feature normalization (commented out in the code below), but it did not help.
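For reference, the featureNormalize function from the course z-scores each feature column; a minimal sketch looks roughly like this (the zero-sigma guard is my addition):

function [X_norm, mu, sigma] = featureNormalize(X)
  % Z-score every column: subtract its mean, divide by its std. deviation
  mu = mean(X);
  sigma = std(X);
  sigma(sigma == 0) = 1;        % my addition: avoid dividing by zero on constant pixels
  X_norm = (X - mu) ./ sigma;   % automatic broadcasting (Octave >= 3.6)
end

Note that mu and sigma must be kept so the exact same transform can be applied to the prediction images later.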
Here is the code I am using:
% Initialization
clear ; close all; clc; more off;
pkg load image;
fprintf('Running Domino Identifier ... \n');
%iteration_vector = [100, 300, 1000, 3000, 10000, 30000];
%accuracies = [];
%costs = [];
%for iterations_i = 1:length(iteration_vector)
% INPUTS
input_layer_size = 10000; % 100x100 input images, flattened
hidden_layer_size = 50; % Hidden units
num_labels = 7; % Number of different output classes
iterations = 100000; % Number of iterations during training
lambda = 0.13; % Regularization strength
%hidden_layer_size = hidden_layers(hidden_layers_i);
%lambda = lambdas(lambda_i)
%iterations = %iteration_vector(iterations_i)
[X,y] = loadTrainingData(num_labels);
%[X_norm, mu, sigma] = featureNormalize(X_unnormed);
%X = X_norm;
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
[J, grad] = nnCostFunction(initial_nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda); % sanity check with the initial weights
fprintf('\nTraining Neural Network... \n')
% Increase MaxIter to see how more training helps.
options = optimset('MaxIter', iterations);
% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, num_labels, X, y, lambda);
% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Visualize the learned hidden-layer weights (drop the bias column)
displayData(Theta1(:, 2:end));
% Re-load the training set to measure accuracy on the training data
[trainingData, images] = loadTrainingData(num_labels);
[h2_training, pred_training] = predict(Theta1, Theta2, trainingData);
fprintf('\nTraining Accuracy: %f\n', mean(double(pred_training' == y)) * 100);
%if length(accuracies) > 0
% accuracies = [accuracies; mean(double(pred_training' == y))];
%else
% accuracies = [mean(double(pred_training' == y))];
%end
%last_cost = cost(length(cost));
%if length(costs) > 0
% costs = [costs; last_cost];
%else
% costs = [last_cost];
%end
%endfor % Testing samples
fprintf('Loading prediction images\n');
[predictionData, images] = loadPredictionData();
[h2, pred] = predict(Theta1, Theta2, predictionData)
for i = 1:length(pred)
figure;
displayData(predictionData(i, :));
title (strcat(translateIndexToTile(pred(i)), " Certainty:", num2str(max(h2(i, :))*100)));
pause;
endfor
%y = provideAnswers(im_vector);
My questions now are:
Are my numbers "off" in terms of the huge difference between the size of X and the other quantities?
What should I do to improve this Neural Network?
If I do feature normalization, do I need to multiply the numbers back to the 0-255 range again somewhere?
Best answer
What should I do to improve this Neural Network?
Use a convolutional neural network (CNN) with multiple layers (e.g., 5). For vision problems, CNNs outperform MLPs by a wide margin. Here you are using an MLP with a single hidden layer, and that network is likely to perform poorly on a 7-class image problem. One issue is the amount of training data you have: in general, we want at least a few hundred samples per class.
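Octave itself has no built-in CNN support, so as an illustration only, here is a minimal sketch using MATLAB's Deep Learning Toolbox; XTrain, YTrain, and XTest are placeholder names, and the layer sizes are untuned guesses, not recommendations:

% XTrain: 100x100x1x89 image stack, YTrain: categorical vector of 7 labels
layers = [
    imageInputLayer([100 100 1])
    convolution2dLayer(5, 8, 'Padding', 'same')   % 8 small filters
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)             % halve the resolution
    convolution2dLayer(5, 16, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(7)                        % one output per class
    softmaxLayer
    classificationLayer];
opts = trainingOptions('sgdm', 'MaxEpochs', 30);
net = trainNetwork(XTrain, YTrain, layers, opts);
pred = classify(net, XTest);                      % XTest: held-out images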
If I do feature normalization, do I need to multiply the numbers back to the 0-255 range again somewhere?
Generally not, for classification. Normalization can be viewed as a preprocessing step. However, if you are working on a problem such as image reconstruction, then you do eventually need to transform back to the original domain.
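What does matter for classification is applying the identical transform at prediction time. A short sketch using the question's own functions, reusing the mu and sigma learned from the training set:

% Training: learn the normalization parameters once
[X_norm, mu, sigma] = featureNormalize(X);
% Prediction: reuse the SAME mu and sigma on the new images;
% there is no need to scale anything back to 0-255 for classification
predictionData_norm = (predictionData - mu) ./ sigma;
[h2, pred] = predict(Theta1, Theta2, predictionData_norm);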
Regarding matlab - choosing neural network variables for image recognition, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49861324/