Hi everyone,

I am trying to build a neural network to study a problem with a continuous output variable (i.e. a regression problem).

I am trying to understand the attached learning curve (error vs. number of training samples) and validation curve (error vs. regularization parameter lambda).

[Figure 1: Learning curves  and validation curve.]
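In case the setup matters, this is roughly how I generate the two curves. It is only a minimal sketch using scikit-learn's MLPRegressor on toy data (my real data and network differ); MLPRegressor's alpha parameter plays the role of lambda here:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve, validation_curve
from sklearn.neural_network import MLPRegressor

# Toy data standing in for the real continuous-output problem.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)

# Learning curve: error vs. number of training samples.
sizes, train_sc, val_sc = learning_curve(
    model, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5, scoring="neg_mean_squared_error")
plt.plot(sizes, -train_sc.mean(axis=1), label="training error")
plt.plot(sizes, -val_sc.mean(axis=1), label="validation error")
plt.xlabel("number of training samples"); plt.ylabel("MSE")
plt.legend(); plt.show()

# Validation curve: error vs. regularization strength (alpha ~ lambda).
alphas = np.logspace(-4, 2, 7)
train_sc, val_sc = validation_curve(
    model, X, y, param_name="alpha", param_range=alphas,
    cv=5, scoring="neg_mean_squared_error")
plt.semilogx(alphas, -train_sc.mean(axis=1), label="training error")
plt.semilogx(alphas, -val_sc.mean(axis=1), label="validation error")
plt.xlabel("regularization parameter alpha"); plt.ylabel("MSE")
plt.legend(); plt.show()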

I am relatively new to machine learning, and I was wondering if someone could give me some advice on how to interpret these results.

Do these curves look okay to you? I can see that neither the training error nor the validation error improves as the number of training samples increases, which is characteristic of a high-bias (underfitting) situation. On the other hand, both errors are already relatively small, so is the apparent high bias actually a problem here?

I have also tried adding a second hidden layer, but the results are very similar (a sketch of the comparison follows).
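The comparison was done roughly like this (again only a sketch; the layer sizes are placeholders, not my actual configuration):

from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Cross-validated MSE for one vs. two hidden layers.
for layers in [(25,), (25, 25)]:
    scores = cross_val_score(
        MLPRegressor(hidden_layer_sizes=layers, max_iter=2000, random_state=0),
        X, y, cv=5, scoring="neg_mean_squared_error")
    print(layers, "mean MSE:", -scores.mean())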

Any comments or suggestions are more than welcome.

Thanks in advance,

David
