Tuesday, August 29, 2017

NN cannot determine the correct output bias

Training a feedforward neural net with one hidden layer always leads to an output value that is shifted by a roughly constant amount. Increasing the number of iterations or the size of the test dataset does not improve the result.

These are some of the parameters I used to build/train the NN.

title: Bias
class: vsoc.training.BatchSizeTraining$
learningRate: 1.0E-04
trainingData: playerpos_x A 500000
batchSizeTrainingDataRelative: 0.10
testData: playerpos_x A 1000, playerpos_x B 1000, playerpos_x C 1000, playerpos_x D 1000, playerpos_x E 1000
iterations: 200
optAlgo: STOCHASTIC_GRADIENT_DESCENT
numHiddenNodes: 100, 300, 500
regularisation: None
seed: -751785241836862251
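
The vsoc class name and the STOCHASTIC_GRADIENT_DESCENT constant suggest these parameters feed a Deeplearning4j configuration. A hedged sketch of how they might map onto a 2017-era DL4J builder; the input width, activations, loss function, and bias initialisation are assumptions, not taken from the post:

```java
// Sketch only, assuming Deeplearning4j (the STOCHASTIC_GRADIENT_DESCENT
// value matches DL4J's OptimizationAlgorithm enum). Everything not listed
// in the parameter table above is a guess.
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(-751785241836862251L)
        .iterations(200)
        .learningRate(1e-4)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .biasInit(0.0)               // initial value only; biases stay trainable
        .list()
        .layer(0, new DenseLayer.Builder()
                .nIn(numInputs)      // hypothetical input width
                .nOut(100)           // the post also tried 300 and 500
                .activation(Activation.TANH)
                .build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .nIn(100).nOut(1)    // single regression output
                .activation(Activation.IDENTITY)
                .build())
        .build();
MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
```

Note that `biasInit` only sets the starting value; the output bias is still a trained parameter, which is what makes the persistent shift surprising.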
 
Testing the trained NNs against different test datasets leads to the following result.



The output value is always shifted by a roughly constant amount, no matter which test dataset is used.

Open Question

Is there a way to determine the correct output bias while training the NN?
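
One pragmatic workaround, if training itself never removes the shift: when the error really is a constant offset, it equals the mean residual (prediction minus target) on a held-out set, and subtracting that mean from the trained output bias cancels it. A minimal sketch in plain Java; all names and numbers are illustrative, not from the vsoc code:

```java
// Post-hoc bias correction sketch: estimate the constant output shift as
// the mean residual on held-out data, then subtract it from the bias.
public class BiasCorrection {

    /** Mean of (prediction - target), i.e. the estimated constant shift. */
    static double meanShift(double[] predictions, double[] targets) {
        double sum = 0.0;
        for (int i = 0; i < predictions.length; i++) {
            sum += predictions[i] - targets[i];
        }
        return sum / predictions.length;
    }

    public static void main(String[] args) {
        // Toy data: predictions are the targets shifted by a constant 0.25.
        double[] targets     = {1.0, 2.0, 3.0, 4.0};
        double[] predictions = {1.25, 2.25, 3.25, 4.25};

        double shift = meanShift(predictions, targets);
        System.out.println(shift);              // prints 0.25

        double outputBias = 0.5;                // hypothetical trained bias
        double corrected  = outputBias - shift; // shift-free bias
        System.out.println(corrected);
    }
}
```

This does not explain why SGD fails to learn the bias in the first place, but it removes the symptom: after the correction, predictions on any of the test datasets above should center on the targets, provided the shift really is dataset-independent as the plots suggest.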
