Hello all,

I have a dataset of 105 records with 4 attributes. All attributes take whole numbers as their values, and three of them are predictors. I train a multilayer perceptron (MLP) and a decision tree on 80% of the data. When I measure the performance of the two models on the remaining 20%, the decision tree gets a mean squared error (MSE) of around 0.45, while the MLP only gets around 1.05. Why is that?
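For reference, here is a simplified sketch of my setup (the data below is just a random placeholder standing in for my real dataset, and I use sklearn's MLPRegressor and DecisionTreeRegressor; my actual parameters may differ slightly):

```python
# Simplified sketch of the comparison; the data here is a random stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# X: the three integer-valued predictors, y: the integer-valued target (105 rows).
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(105, 3))
y = rng.integers(0, 10, size=105)

# 80/20 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit both models on the training portion.
tree = DecisionTreeRegressor(random_state=42).fit(X_train, y_train)
mlp = MLPRegressor(random_state=42, max_iter=1000).fit(X_train, y_train)

# Evaluate both on the held-out 20%.
print("Tree MSE:", mean_squared_error(y_test, tree.predict(X_test)))
print("MLP  MSE:", mean_squared_error(y_test, mlp.predict(X_test)))
```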

I have tried changing the number of hidden layers, the number of neurons in each layer, and the number of iterations, but 1.05 is the lowest MSE I could achieve. The implementation is in Python with the scikit-learn (sklearn) package.
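The tuning I tried looks roughly like this (the grids below are illustrative, not my exact values; it reuses the train/test split from the sketch above):

```python
# Rough sketch of the MLP tuning I tried; grids are illustrative only.
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

best_mse = float("inf")
for layers in [(8,), (16,), (16, 8), (32, 16, 8)]:
    for iters in [500, 1000, 5000]:
        mlp = MLPRegressor(
            hidden_layer_sizes=layers, max_iter=iters, random_state=42
        )
        mlp.fit(X_train, y_train)
        mse = mean_squared_error(y_test, mlp.predict(X_test))
        best_mse = min(best_mse, mse)

print("Best MLP MSE:", best_mse)  # stays around 1.05 in my runs
```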

Can anyone help me?

Thanks in advance
