These are some of the parameters I used to build and train the NN.
| parameter | value |
| --- | --- |
| title | Bias |
| class | vsoc.training.BatchSizeTraining$ |
| learningRate | 1.0E-04 |
| trainingData | playerpos_x A 500000 |
| batchSizeTrainingDataRelative | 0.10 |
| testData | playerpos_x A 1000, playerpos_x B 1000, playerpos_x C 1000, playerpos_x D 1000, playerpos_x E 1000 |
| iterations | 200 |
| optAlgo | STOCHASTIC_GRADIENT_DESCENT |
| numHiddenNodes | 100, 300, 500 |
| regularisation | None |
| seed | -751785241836862251 |
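For readers who want to reproduce the setup, here is a minimal sketch of a network configuration using these parameters, assuming the DL4J 0.9.x-era API (the `optAlgo` value above matches DL4J's `OptimizationAlgorithm` enum). The input size, activation functions, and MSE loss are assumptions on my part, not taken from the actual `vsoc.training` code.

```scala
import org.deeplearning4j.nn.api.OptimizationAlgorithm
import org.deeplearning4j.nn.conf.NeuralNetConfiguration
import org.deeplearning4j.nn.conf.layers.{DenseLayer, OutputLayer}
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork
import org.nd4j.linalg.activations.Activation
import org.nd4j.linalg.lossfunctions.LossFunctions

object BiasExperimentSketch {

  // Hyperparameters taken from the table above.
  val seed: Long = -751785241836862251L
  val learningRate: Double = 1.0e-4
  val iterations: Int = 200
  val hiddenNodeCounts: Seq[Int] = Seq(100, 300, 500)

  // nIn and the activations are placeholders; the real vsoc code may differ.
  def buildNet(nIn: Int, nHidden: Int): MultiLayerNetwork = {
    val conf = new NeuralNetConfiguration.Builder()
      .seed(seed)
      .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
      .iterations(iterations)
      .learningRate(learningRate)
      .biasInit(0.0) // biases start at 0 and are learned like any other weight
      .list()
      .layer(0, new DenseLayer.Builder()
        .nIn(nIn).nOut(nHidden)
        .activation(Activation.TANH)
        .build())
      .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
        .nIn(nHidden).nOut(1)
        .activation(Activation.IDENTITY)
        .build())
      .build()
    val net = new MultiLayerNetwork(conf)
    net.init()
    net
  }
}
```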
Testing the trained NNs against different test datasets leads to the following result.

The output value is always shifted by a certain amount, no matter which test dataset is used.
Open Question
Is there a way to determine the correct output bias while training the NN?
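One pragmatic workaround, assuming the shift really is constant across datasets as observed above: measure the mean residual on a held-out calibration set after training and subtract it from every prediction, which is equivalent to adjusting the output layer's bias. Below is a minimal, library-free Scala sketch; `predict` and the data layout are hypothetical placeholders, not `vsoc.training` APIs.

```scala
object BiasCalibration {

  /** Mean residual (prediction minus target) over a calibration set. */
  def meanResidual(predict: Array[Double] => Double,
                   data: Seq[(Array[Double], Double)]): Double = {
    val residuals = data.map { case (features, target) => predict(features) - target }
    residuals.sum / residuals.size
  }

  /** Wraps `predict` so that the measured constant shift is subtracted. */
  def calibrated(predict: Array[Double] => Double,
                 calibrationData: Seq[(Array[Double], Double)]): Array[Double] => Double = {
    val shift = meanResidual(predict, calibrationData)
    features => predict(features) - shift
  }
}
```

Because the correction is a single constant, it could equivalently be folded into the trained network's output-layer bias parameter instead of wrapping the prediction function.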