In the literature, we are familiar with the terms feature selection and feature reduction. Is there any similarity or dissimilarity between the two? What are the impacts of feature selection and/or feature reduction on the information contained in a data set?
The main objective of data analysis is to choose/select the useful or optimal features of concern for object recognition, detection, or classification. The recognition of an object or phenomenon may be analyzed with various models and with data pre-processing techniques such as smoothing, cleaning, dimension reduction, and linearization, aiming for optimal results (more productivity, better yield, the shortest route to the goal) from the minimum number of inputs. In data analysis we select the features of concern for optimization, whereas we reject/reduce features (the domain variables, i.e. the dimensions) to speed up computation or save time. Dimension reduction is a kind of trimming, as practised in many other fields, giving clearer visualization and higher productivity.
Feature selection simply keeps or excludes given features without changing them. Dimensionality reduction transforms the features into a lower-dimensional space.
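A minimal sketch of that contrast, assuming scikit-learn and using its Iris toy data set purely for illustration (the particular scorers and component counts are arbitrary choices, not a recommendation):

```python
# Contrast: feature selection keeps original columns; dimensionality
# reduction builds new axes from combinations of all columns.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)  # 4 original features

# Feature selection: keep 2 of the original columns, unchanged.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)
print("selected column indices:", selector.get_support(indices=True))

# Dimensionality reduction: project onto 2 new axes that are
# linear combinations of all original features.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
```

After selection the retained columns are still interpretable as the original measurements; after PCA each new column mixes all of them, which is the practical difference the two answers above are pointing at.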
In the case of feature selection, you select the features on which the result depends and discard the unwanted ones, gaining more accurate results, faster processing, and so on.
Feature reduction, on the other hand, is a dimension reduction: the original features are combined or projected rather than kept as they are.
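To make "select the features the result depends on" concrete, here is a hedged sketch (again assuming scikit-learn; the tree-based importance ranking is just one common heuristic among many) that keeps only the features a fitted model relies on most:

```python
# Model-based feature selection: rank features by a fitted forest's
# importances and keep those above the mean importance.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_iris(return_X_y=True)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
keeper = SelectFromModel(forest, threshold="mean", prefit=True)
X_small = keeper.transform(X)

print("kept feature indices:", keeper.get_support(indices=True))
print("shape before/after:", X.shape, X_small.shape)
```

Dropping the low-importance columns shrinks the input, which is where the speed and (sometimes) accuracy benefits mentioned above come from.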
Mahit Kumar Paul Feature selection is of utmost importance for achieving the targeted objectives/goals, while feature reduction means dropping the unnecessary features so that the achieved objectives can be evaluated in the best and most efficient way.