You need to understand the causal relationships implied by your hypothesis. I suggest you visit http://www.dagitty.net, which lets you build a graphical representation of the relationships between your variables and, based on that diagram, helps you build statistical models that test informative hypotheses.
I fully agree with Ronán Michael Conroy's reply that your specific hypotheses should guide the identification of a model.
One exception, of course, is the broad fishing-expedition approach to modelling, generally referred to as data mining. One conceptual challenge with this class of methods, however, is that they identify structure/models by imposing structure/models. That is a very different tactic from starting with a hypothesized structure.
There are a number of methods for identifying "optimal" models, which depend on the specific metric for model adequacy. Implicit in all these approaches, however, is the requirement for model validation with additional/external data, due to the highly opportunistic behavior of such methods.
Here's an excellent resource for a guided tour through many of the ever-growing list of methods for model hunting: Witten, I. H., Frank, E., Hall, M. A., & Pal, C. J. (2016). Data mining: Practical machine learning tools and techniques (4th ed.). Morgan Kaufmann.
At the risk of sounding old-fashioned, I would suggest that you first identify the goals of the study. If it is explanatory, then Professors Conroy and Morse give great advice. If it is prediction, then IMO the game changes a bit. I have attached an example of what I would do in such a situation. Best wishes, David Booth
Four selection procedures are commonly used to arrive at an appropriate regression equation: forward selection, backward elimination, stepwise selection, and block-wise selection. The first three are considered statistical regression methods. Researchers often instead use sequential (hierarchical or block-wise) entry methods that do not rely on statistical results for selecting predictors. Sequential entry gives the researcher greater control over the regression process: predictors are entered in a given order based on theory, logic, or practicality, which is appropriate when the researcher has an idea as to which predictors may affect the dependent variable.
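To make the first of these procedures concrete, here is a minimal sketch of forward selection. It is a toy illustration, not production code: it assumes a linear model fit by ordinary least squares, uses AIC as the adequacy metric (any criterion could be substituted), and the synthetic data and function names are invented for the example.

```python
import numpy as np

def aic(y, X):
    # OLS fit; Gaussian AIC up to an additive constant:
    # n * ln(RSS / n) + 2k, where k is the number of columns in X
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def forward_select(y, X):
    """Greedily add, one at a time, the predictor that most lowers AIC."""
    n, p = X.shape
    selected = []
    current = np.ones((n, 1))            # start from the intercept-only model
    best_aic = aic(y, current)
    improved = True
    while improved:
        improved = False
        candidates = [j for j in range(p) if j not in selected]
        scores = [(aic(y, np.column_stack([current, X[:, j]])), j)
                  for j in candidates]
        if scores:
            score, j = min(scores)
            if score < best_aic:          # stop when no addition helps
                best_aic, improved = score, True
                selected.append(j)
                current = np.column_stack([current, X[:, j]])
    return selected

# Toy data: y depends on columns 0 and 2 only
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(forward_select(y, X))
```

Backward elimination is the mirror image: start from the full model and repeatedly drop the predictor whose removal most improves the criterion. Note that both procedures can retain spurious predictors, which is one reason the validation step mentioned above matters.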
In multivariable logistic regression, selecting predictors is a crucial step that can significantly influence the model's performance and interpretability. Several strategies are commonly employed for this purpose, each with its advantages and limitations.
1. Backward Elimination
2. Forward Selection
3. Stepwise Selection
4. All Possible Subsets
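The fourth strategy, all possible subsets, can be sketched briefly. This is a hypothetical toy example: for brevity it scores linear OLS models by AIC, but the same exhaustive search applies to logistic regression if the AIC is computed from the binomial deviance instead. Names and data are invented for illustration; the search is exponential in the number of predictors, so it is only feasible for small p.

```python
import numpy as np
from itertools import combinations

def aic(y, X):
    # OLS fit; Gaussian AIC up to an additive constant
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def best_subset(y, X):
    """Score every subset of predictors and return the AIC-best one."""
    n, p = X.shape
    intercept = np.ones((n, 1))
    best = (aic(y, intercept), ())        # intercept-only baseline
    for size in range(1, p + 1):
        for subset in combinations(range(p), size):
            Xs = np.column_stack([intercept, X[:, list(subset)]])
            best = min(best, (aic(y, Xs), subset))
    return best[1]

# Toy data: only columns 1 and 3 carry signal
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 6))
y = 2 * X[:, 1] + X[:, 3] + rng.normal(scale=0.5, size=150)
print(best_subset(y, X))
```

Because every subset is scored, this method is even more opportunistic than stepwise procedures, so out-of-sample validation of the chosen model is essential.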
References: 1. Bursac, Z., Gauss, C. H., Williams, D. K., & Hosmer, D. W. A Purposeful Selection of Variables Macro for Logistic Regression. University of Arkansas for Medical Sciences and University of Massachusetts: SAS Global Forum 2007.