The usual approach is to have hypotheses that specify the expected relationships between independent and dependent variables. Going on a "fishing expedition" to maximize explained variance is not recommended.
Various strategies exist for selecting significant variables, and the appropriate method depends on the research objective. If the goal is to identify important variables using conventional statistics (for example, ranking features by p-value or correlation), it is advisable to employ filter methods such as chi-squared feature selection, correlation-based feature selection, and the t-test. These approaches are better suited to interpreting which features are important.
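As a minimal sketch of a filter method, the following uses scikit-learn's chi-squared test to keep the highest-scoring features; the Iris dataset and the choice of k=2 are illustrative assumptions, not part of the text above.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

# Iris: 150 samples, 4 non-negative features (chi2 requires non-negative inputs)
X, y = load_iris(return_X_y=True)

# Filter method: score each feature against the target independently of any
# model, then keep the 2 highest-scoring features.
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)        # (150, 2)
print(selector.get_support())  # boolean mask over the original 4 features
```

Because the scores are computed per feature, filter methods are fast and easy to interpret, but they ignore interactions between features.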
However, if one aims to build machine learning models with high classification scores, wrapper methods are commonly employed, including genetic algorithms, Boruta, and recursive feature elimination, among others. These methods evaluate candidate combinations of features by building models on each subset; the combinations that yield high accuracy identify the important variables. A third family, embedded feature selection, selects features while simultaneously fitting the model, as in Lasso, ridge regression, and the XGBoost algorithm.
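A brief sketch of both families on synthetic data, assuming scikit-learn: recursive feature elimination as the wrapper method, and L1-penalized logistic regression (Lasso-style) via SelectFromModel as the embedded method. The dataset sizes and penalty strength are illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression

# Toy classification data: 10 features, only 3 of which are informative
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Wrapper method (RFE): repeatedly fit a model and prune the weakest
# feature until the requested number remains.
rfe = RFE(estimator=LogisticRegression(max_iter=1000),
          n_features_to_select=3)
rfe.fit(X, y)
print(rfe.support_)  # boolean mask of the 3 retained features

# Embedded method: the L1 penalty drives uninformative coefficients to
# zero during fitting, so selection happens as part of model training.
embedded = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
embedded.fit(X, y)
print(embedded.get_support())  # features with non-zero coefficients
```

The wrapper run refits the model once per eliminated feature, so it is more expensive than the embedded approach, which pays only the cost of a single penalized fit.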