There are various methods for feature selection in machine learning. Which one suits you depends on the type of dataset you are working with and which techniques fit it. I think the following will help you.
Tree- and rule-based models, MARS, and the lasso, for example, intrinsically conduct feature selection (Applied Predictive Modeling). Feature selection is also related to dimensionality reduction techniques in that both seek fewer input variables for a predictive model.
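For illustration, here is a minimal sketch of that kind of intrinsic (embedded) selection using the lasso with scikit-learn; the dataset, alpha, and threshold are just assumptions for the example.

```python
# Minimal sketch: embedded feature selection via the lasso (illustrative settings).
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)          # the lasso is scale-sensitive

# Features whose lasso coefficients are driven to (near) zero are dropped.
selector = SelectFromModel(Lasso(alpha=0.1), threshold=1e-5)
X_selected = selector.fit_transform(X, y)

print("kept feature indices:", np.flatnonzero(selector.get_support()))
print("shape before/after:", X.shape, "->", X_selected.shape)
```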
Regarding statistics for filter-based feature selection methods: it is common to use correlation-type statistical measures between input and output variables as the basis for filter feature selection. As such, the choice of statistical measure is highly dependent upon the variable data types. https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/
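As a concrete example of that data-type dependence, here is a hedged sketch of a filter for numeric inputs and a categorical target using the ANOVA F-test; the dataset and k are illustrative assumptions.

```python
# Filter method sketch: ANOVA F-test scores (f_classif) for numeric inputs vs. a class label.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Score each feature against the target and keep the top k.
selector = SelectKBest(score_func=f_classif, k=2)
X_top = selector.fit_transform(X, y)

print("F-scores per feature:", selector.scores_)
print("selected feature indices:", selector.get_support(indices=True))
```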
Common techniques: the chi-square test (used for categorical features), Fisher's score, the correlation coefficient, and the dispersion ratio are filter methods; backward feature elimination and recursive feature elimination (RFE) are wrapper methods; random forest importance is an embedded method. https://www.analyticsvidhya.com/blog/2020/10/feature-selection-techniques-in-machine-learning/
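A rough sketch of two of those techniques, a chi-square filter and RFE, again in scikit-learn; the dataset and parameter choices here are assumptions for illustration only.

```python
# Sketch: chi-square filter vs. recursive feature elimination (illustrative settings).
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

X, y = load_wine(return_X_y=True)
X_nonneg = MinMaxScaler().fit_transform(X)     # chi2 requires non-negative values

# Filter: rank features by their chi-square statistic against the class label.
chi2_selector = SelectKBest(score_func=chi2, k=5).fit(X_nonneg, y)
print("chi2 keeps:", chi2_selector.get_support(indices=True))

# Wrapper: recursively drop the weakest feature according to a model's coefficients.
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5).fit(X_nonneg, y)
print("RFE keeps:", rfe.get_support(indices=True))
```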
You need to elaborate on your question, because the answer will depend on it. What problem are you solving? What data are you using? Is it supervised or unsupervised feature selection?
Usually, for filter-based FS algorithms, we tend to use a variety of classifiers that have been shown to work well for the domain, for instance Random Forest, SVM, kNN, etc. In other cases, you may prefer domain-specific algorithms. The idea is to show that using feature selection improves the overall performance (accuracy, NMI, F1, etc.).
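One way to demonstrate that, sketched here under assumed choices of dataset, classifier, scorer, and k: compare cross-validated accuracy with and without a filter-based selection step in the pipeline.

```python
# Sketch: does a filter-based selection step improve cross-validated accuracy?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=0)

baseline = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()

# Selection sits inside the pipeline, so each CV fold selects on its own training split.
with_fs = make_pipeline(SelectKBest(mutual_info_classif, k=10), clf)
selected = cross_val_score(with_fs, X, y, cv=5, scoring="accuracy").mean()

print(f"accuracy without FS: {baseline:.3f}, with FS: {selected:.3f}")
```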