Usually the correct answer comes after understanding the question in depth, and the reason behind it. In this case I can't say I understand what you want to do; I can only guess why you want to compare beta coefficients from different models/samples. Even so, I think it would be hard to get any meaningful result from a hypothesis test comparing them.
Perhaps you can present your results with a simple comparison: if a variable is important according to one sample's coefficient but not according to the other's, you can say you see a difference there, without any hypothesis test.
Statistically, what you want to do seems much more complicated and difficult than the simple alternative: combine the datasets, define a binary indicator variable, add it to the model, and then check the importance of this variable and of its interactions with the other factors. Or just perform a two-sample hypothesis test without combining the samples. But most likely you are interested in comparing not the two groups, but the models.
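If you do want a test without combining the samples, a common approximate approach is a z-test on the difference of the two coefficients using their standard errors. This is only a sketch under illustrative numbers (the betas and standard errors here are made up, not from your models), and it assumes the two samples are independent:

```python
import math

def compare_coefficients(b1, se1, b2, se2):
    """Approximate two-sided z-test for the difference between two
    regression coefficients estimated on independent samples."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Two-sided p-value from the standard normal, via erfc
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical coefficients and standard errors from the two fitted models
z, p = compare_coefficients(b1=0.80, se1=0.10, b2=0.55, se2=0.12)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that a large p-value here does not show the coefficients are equal, only that the data do not distinguish them.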
Analyze the data from the two studies together in one model, including the interaction between the factors represented by your betas and the factor "study".
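A minimal sketch of that combined model, using simulated stand-in data and an assumed predictor name `x1` (your actual variables will differ); the `x1:study` interaction term directly tests whether the `x1` coefficient differs between the studies:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def make_study(n, slope):
    # Simulated stand-in for one study's data
    x1 = rng.normal(size=n)
    y = 1.0 + slope * x1 + rng.normal(scale=0.5, size=n)
    return pd.DataFrame({"y": y, "x1": x1})

df1 = make_study(200, slope=0.8)
df2 = make_study(200, slope=0.5)
df1["study"], df2["study"] = 0, 1
combined = pd.concat([df1, df2], ignore_index=True)

# 'x1 * study' expands to x1 + study + x1:study; the interaction's
# coefficient estimates the between-study difference in the x1 effect
fit = smf.ols("y ~ x1 * study", data=combined).fit()
print(fit.summary())
```

With real data you would inspect the p-value of `x1:study` in the summary table.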
Since you say that you don't want to combine the samples, a less desirable but reasonable approach would be to construct e.g. 95% confidence intervals for the beta coefficients. There is a formula for confidence intervals at the following link; this is easy if the standard errors of the coefficients were reported. The second link discusses using the bootstrap when fitting regression models.