Bivariate screening of candidate predictors for a multivariable regression model is considered a bad practice that tends to produce overfitted models. See, for example, Mike Babyak's 2004 article, "What You See May Not Be What You Get: A Brief, Nontechnical ..."
And having non-significant predictors in a multivariable model is not problematic either. In fact, a model in which every variable is statistically significant is more likely a sign of trouble, unless n is quite large. See these two sections in the DataMethods.org author checklist, for example (link below):
Use of stepwise variable selection
Lack of insignificant variables in the final model
The predictors work together, so what you want is the set of variables that jointly gives the best predicted-y values: not too many variables, not too few, and the right ones. You can use "graphical residual analysis" scatterplots (searching on that quoted phrase will turn up examples), and on the same scatterplot you can compare the performance of different sets of variables used to form predicted-y for a given sample. Results will likely vary for other samples, which is why you don't want to "overfit" your model. (See "cross-validation.")
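A minimal sketch of that comparison, using simulated data (the variables x1, x2, x3 and all numbers below are illustrative assumptions, not from your data): fit two candidate predictor sets on the same sample, then plot residuals against predicted-y for both on one set of axes.

```python
# Sketch: comparing two candidate predictor sets on one sample via
# residuals-vs-predicted scatterplots.  All data here are simulated;
# the variable names (x1, x2, x3) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # nearly redundant with x1
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=0.5, size=n)

def fit_and_residuals(X, y):
    """OLS fit with intercept; return predicted-y and residuals."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    yhat = X1 @ beta
    return yhat, y - yhat

# Candidate set A: x1, x2.  Candidate set B adds the redundant x3.
yhat_a, res_a = fit_and_residuals(np.column_stack([x1, x2]), y)
yhat_b, res_b = fit_and_residuals(np.column_stack([x1, x2, x3]), y)

# On the same axes you would plot (yhat_a, res_a) and (yhat_b, res_b)
# and look for the smaller, more patternless scatter.  Numerically:
print("RMS residual, set A:", np.sqrt(np.mean(res_a**2)))
print("RMS residual, set B:", np.sqrt(np.mean(res_b**2)))
```

Note that in-sample residuals always shrink (or stay the same) as you add predictors, which is exactly why the comparison should also be checked on other samples or via cross-validation.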
If you remove one of your variables to compare how well your model works with and without it, note that the difference in performance may vary substantially depending on which other variables are present.
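The point above can be seen numerically with simulated data (the setup where x3 is a close proxy for x1 is an assumption made purely for illustration): dropping x1 costs almost nothing when a near-duplicate is in the model, and a great deal when it is not.

```python
# Sketch: the cost of dropping a predictor depends on which other
# variables are present.  Simulated data; x3 is deliberately constructed
# as a close proxy for x1 (an assumption for illustration).
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + 0.1 * rng.normal(size=n)          # near-duplicate of x1
y = 1.0 + 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

def rss(X, y):
    """Residual sum of squares from an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.sum((y - X1 @ beta) ** 2))

full = rss(np.column_stack([x1, x2, x3]), y)
drop_x1_with_x3 = rss(np.column_stack([x2, x3]), y)   # x3 covers for x1
drop_x1_alone = rss(np.column_stack([x2]), y)         # nothing covers

print("RSS increase dropping x1, x3 present:", drop_x1_with_x3 - full)
print("RSS increase dropping x1, x3 absent: ", drop_x1_alone - full)
```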
I really do suggest graphical residual analyses, but be careful not to overfit to a given sample, which often just means not using more predictors than you can justify.
I suspect that your subject-matter knowledge will help you decide on good candidate predicted-y "formulas" to test. Remember that collinearity means using variables that are redundant in some way. That can be subtle, such as using four variables that convey about the same information as three others.
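One common way to flag that kind of redundancy is the variance inflation factor (VIF): regress each predictor on all the others and see how well it is explained. A sketch on simulated data (the near-linear dependence among the four columns is constructed here as an assumption for illustration):

```python
# Sketch: flagging redundant predictors with variance inflation factors.
# Simulated data; four observed predictors are built to carry roughly
# the same information as three underlying dimensions (an assumption
# constructed for illustration).
import numpy as np

rng = np.random.default_rng(2)
n = 300
z = rng.normal(size=(n, 3))                    # three "real" dimensions
X = np.column_stack([
    z[:, 0], z[:, 1], z[:, 2],
    z[:, 0] + z[:, 1] - z[:, 2] + 0.05 * rng.normal(size=n),
])

def vif(X, j):
    """VIF of column j: 1 / (1 - R^2) from regressing it on the others."""
    others = np.delete(X, j, axis=1)
    X1 = np.column_stack([np.ones(len(X)), others])
    beta, *_ = np.linalg.lstsq(X1, X[:, j], rcond=None)
    resid = X[:, j] - X1 @ beta
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

for j in range(X.shape[1]):
    print(f"VIF x{j + 1}: {vif(X, j):.1f}")   # large values flag redundancy
```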
I would not worry about p-values at all. Like standard errors, they are impacted by sample size. Unlike standard errors, they are not individually very meaningful.
Also, sometimes you have nonessential heteroscedasticity, when your model is flawed or you mix data that should have been modeled separately. But essential heteroscedasticity is to be expected, simply because predicted-y values differ. (See Brewer, K.R.W. (2002), Combined Survey Sampling Inference: Weighing Basu's Elephants, Arnold: London and Oxford University Press, mid-page 111, discussed in https://www.researchgate.net/publication/320853387_Essential_Heteroscedasticity.) So you should expect heteroscedasticity, which can mean using a regression weight that impacts the 'formulas' for the regression coefficients, and impacts the variance. (See https://www.researchgate.net/project/OLS-Regression-Should-Not-Be-a-Default-for-WLS-Regression, and the various updates and references there.)
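A minimal weighted-least-squares sketch of what such a regression weight does to the coefficient formulas. Everything here is simulated, and the variance model (residual variance proportional to x, so weight 1/x) is an assumption chosen purely for illustration, not a recommendation for your data:

```python
# Sketch: weighted least squares when residual spread grows with the
# size of predicted-y.  Simulated data; the variance model
# var(e_i) proportional to x_i (hence weight w_i = 1/x_i) is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(1.0, 10.0, size=n)
y = 3.0 + 2.0 * x + rng.normal(scale=0.4 * np.sqrt(x))  # spread grows with x

X1 = np.column_stack([np.ones(n), x])
w = 1.0 / x                      # regression weight ~ 1 / residual variance
W = np.diag(w)

# OLS: (X'X)^{-1} X'y      WLS: (X'WX)^{-1} X'Wy
b_ols = np.linalg.solve(X1.T @ X1, X1.T @ y)
b_wls = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)
print("OLS coefficients:", b_ols)
print("WLS coefficients:", b_wls)
```

Both estimators are unbiased here; the weighting mainly changes which observations dominate the fit and gives more honest variance estimates under heteroscedasticity.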