Finding the best features (sometimes called metrics or parameters) to input into a neural network by trial and error can be a very lengthy process. Classic feature selection methods in machine learning include extra-trees classifiers, univariate feature selection, recursive feature elimination, and linear discriminant analysis (a supervised counterpart of PCA). Have more powerful methods emerged recently that improve on these?
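
For reference, a minimal sketch of what the classic selectors mentioned above look like in practice, assuming scikit-learn is available; the synthetic dataset and the parameter choices (k=5, 200 trees, etc.) are illustrative only:

    # Compare classic feature selectors on a synthetic dataset with known noise features.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import RFE, SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression

    # Synthetic data: 20 features, only 5 of which are informative.
    X, y = make_classification(n_samples=500, n_features=20,
                               n_informative=5, n_redundant=5, random_state=0)

    # Univariate feature selection: score each feature independently.
    univariate = SelectKBest(score_func=f_classif, k=5).fit(X, y)
    print("Univariate: ", univariate.get_support(indices=True))

    # Recursive feature elimination: repeatedly drop the weakest feature.
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
    print("RFE:        ", rfe.get_support(indices=True))

    # Extra-trees importances: rank features by impurity-based importance.
    forest = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
    print("Extra trees:", sorted(forest.feature_importances_.argsort()[::-1][:5]))

Each selector returns a (possibly different) subset of feature indices, which is part of why choosing among them by trial and error is slow.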

Inputting too many redundant or worthless features into a neural network reduces its accuracy, as does omitting the most useful ones. Restricting the network's input to the most relevant features is key to getting the highest accuracy from it.
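
One way to apply such a restriction, sketched below under the assumption that scikit-learn's MLPClassifier stands in for the neural network (the selector, k, and network size are illustrative only), is to place the feature selector in front of the network inside a pipeline:

    # Restrict the neural network's input to the selected features via a pipeline.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=500, n_features=20,
                               n_informative=5, n_redundant=5, random_state=0)

    # Fitting the selector inside the pipeline means features are chosen only
    # on the training folds, so the selection step does not leak into the test folds.
    model = make_pipeline(SelectKBest(f_classif, k=5),
                          MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                        random_state=0))
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())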
