Hi,

I am trying to build a neural network for a regression problem, i.e., one with a continuous output variable. A schematic representation of the network is shown in Figure 1.

[Figure 1: Schematic representation of neural network: input layer size = 1; hidden layer size = 8; output layer size = 1.]
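Roughly, the setup corresponds to something like the sketch below. I use scikit-learn's MLPRegressor here purely as a stand-in (my actual implementation may differ); its alpha parameter plays the role of the L2 regularization parameter lambda:

# Sketch of the 1-8-1 architecture using scikit-learn's MLPRegressor
# as a stand-in; alpha is its L2 penalty, i.e., lambda.
from sklearn.neural_network import MLPRegressor

net = MLPRegressor(hidden_layer_sizes=(8,),  # one hidden layer, 8 units
                   alpha=0.01,               # regularization strength (lambda)
                   solver='lbfgs',
                   max_iter=2000,
                   random_state=0)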

I am trying to understand the learning curve (error vs. number of training samples) and the validation curve (error vs. the regularization parameter lambda), both plotted in Figure 2.

[Figure 2: Learning curves (lambda = 0.01, and lambda = 10) and validation curve.]
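For reference, here is a minimal, self-contained sketch of how such curves can be generated, again assuming scikit-learn. The toy data and all parameter values are placeholders, not my actual setup:

# Sketch: learning curve and validation curve for a 1-8-1 regression net.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import learning_curve, validation_curve

# Toy 1-D regression data (stand-in for the real dataset).
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(200)

net = MLPRegressor(hidden_layer_sizes=(8,), alpha=0.01,
                   solver='lbfgs', max_iter=2000, random_state=0)

# Learning curve: training/CV error vs. number of training samples.
sizes, train_sc, cv_sc = learning_curve(
    net, X, y, train_sizes=np.linspace(0.1, 1.0, 8),
    cv=5, scoring='neg_mean_squared_error')
plt.figure()
plt.plot(sizes, -train_sc.mean(axis=1), label='training error')
plt.plot(sizes, -cv_sc.mean(axis=1), label='cross-validation error')
plt.xlabel('number of training samples')
plt.ylabel('MSE')
plt.legend()

# Validation curve: training/CV error vs. regularization strength.
lambdas = np.logspace(-4, 2, 13)
train_sc, cv_sc = validation_curve(
    net, X, y, param_name='alpha', param_range=lambdas,
    cv=5, scoring='neg_mean_squared_error')
plt.figure()
plt.semilogx(lambdas, -train_sc.mean(axis=1), label='training error')
plt.semilogx(lambdas, -cv_sc.mean(axis=1), label='cross-validation error')
plt.xlabel('lambda (alpha in scikit-learn)')
plt.ylabel('MSE')
plt.legend()
plt.show()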

I am relatively new to machine learning, and I was wondering if someone could give me some advice on interpreting these results. Does the learning curve look OK for lambda = 0.01? Regarding the validation curve, do you also observe a minimum close to lambda = 0.01? Would you recommend increasing the number of hidden units, or adding more hidden layers?

Thanks in advance,

David
