There simply is no such rule. For linear models, the least one can say is that you need more observations than parameters to estimate. For multilevel models not even that is true, by virtue of what we call shrinkage: group-level estimates borrow strength from the other groups, so even a group with very few observations of its own can be estimated.
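To make shrinkage concrete, here is a small sketch in plain NumPy (all group sizes, means, and variance components are made-up numbers for illustration): each group's estimate is a weighted average of its own raw mean and the grand mean, and the smaller the group, the harder it is pulled toward the grand mean.

```python
import numpy as np

# Made-up example: three groups with raw means, sizes, and variance components.
group_means = np.array([10.0, 14.0, 22.0])   # raw per-group means
n_j         = np.array([2, 10, 50])          # observations per group
sigma2      = 9.0    # within-group (residual) variance
tau2        = 4.0    # between-group variance (variance of random intercepts)

grand_mean = np.average(group_means, weights=n_j)

# Shrinkage weight: the reliability of each group's own mean.
# Small n_j -> small weight -> estimate pulled strongly toward the grand mean.
w = tau2 / (tau2 + sigma2 / n_j)
shrunken = w * group_means + (1 - w) * grand_mean

for gm, nj, s in zip(group_means, n_j, shrunken):
    print(f"n={nj:3d}: raw mean {gm:5.1f} -> shrunken {s:5.2f}")
```

Note how the group with n=2 moves much further toward the grand mean than the group with n=50; that is why "more observations than estimates" is not a hard requirement here.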
Note, however, that the implementation of the regression engine matters. Classic frequentist engines rely heavily on asymptotic theory and approximations, which require larger minimum sample sizes. Bayesian engines using MCMC are more reliable at small sample sizes.
Regarding "How to determine whether the variables should be placed in the fixed or random part of the model?": there is quite a bit of debate between those arguing 'go maximal' and those arguing for parsimony, because of the serious difficulties algorithms face in finding the maximum likelihood. We cover this debate briefly in
Article Fixed and Random effects models: making an informed choice
and provide some simulation results.
A useful trick is to fit a model with random intercepts, specify a full fixed part, and then estimate with both the usual and the robust standard errors. A big change in the standard errors for a variable may be caused by the need for that variable to be additionally specified as having random effects.
Book Developing multilevel models for analysing contextuality, he...
page 79
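A rough sketch of that check in plain NumPy (simulated data; the group sizes, slope variance, and seed are all made-up numbers). To keep it dependency-free, this sketch uses OLS with a full fixed part rather than an actual random-intercept fit, and compares conventional standard errors with cluster-robust (sandwich) ones: when the two disagree sharply for a predictor, that is the hint that its effect varies across groups.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated two-level data with a group-varying (random) slope on x,
# which conventional OLS standard errors ignore but cluster-robust
# standard errors pick up.
n_groups, n_per = 30, 20
groups = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
slope_j = 1.0 + rng.normal(scale=0.8, size=n_groups)   # slope varies by group
y = 2.0 + slope_j[groups] * x + rng.normal(size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Conventional OLS standard errors (assume i.i.d. errors).
sigma2 = resid @ resid / (len(y) - X.shape[1])
se_usual = np.sqrt(np.diag(sigma2 * XtX_inv))

# Cluster-robust (sandwich) standard errors, clustering on group.
meat = np.zeros((2, 2))
for g in range(n_groups):
    idx = groups == g
    s = X[idx].T @ resid[idx]          # per-cluster score
    meat += np.outer(s, s)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print("coef      usual SE  robust SE")
for name, b, su, sr in zip(["const", "x"], beta, se_usual, se_robust):
    print(f"{name:6s} {b:6.2f} {su:9.3f} {sr:9.3f}")
```

With the random slope present, the robust standard error for x comes out much larger than the conventional one, which is exactly the kind of discrepancy the trick is looking for.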
@Martin Schmettow is correct: there is no simple rule of thumb, but there is now good software that allows you to explore your particular situation through simulation.
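In that spirit, here is a minimal Monte Carlo sketch in plain NumPy (every design value in it is an assumption for you to edit). It simulates a cluster-randomised two-level design many times and records how often the effect of interest is detected; dedicated tools such as the R package simr do this properly with real multilevel fits, so treat this only as a way to convey the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_sim(n_clusters, n_per, effect, tau=0.5, sigma=1.0, reps=500, alpha_z=1.96):
    """Monte Carlo power for a cluster-randomised two-level design.

    Half the clusters receive the treatment. The analysis compares
    cluster means between arms (valid here because treatment is
    constant within a cluster). All defaults are made-up numbers.
    """
    hits = 0
    arm = np.repeat([0, 1], n_clusters // 2)
    for _ in range(reps):
        u = rng.normal(scale=tau, size=n_clusters)             # random intercepts
        eps = rng.normal(scale=sigma, size=(n_clusters, n_per))
        y = effect * arm[:, None] + u[:, None] + eps
        m = y.mean(axis=1)                                     # cluster means
        d = m[arm == 1].mean() - m[arm == 0].mean()
        se = np.sqrt(m[arm == 1].var(ddof=1) / (n_clusters // 2)
                     + m[arm == 0].var(ddof=1) / (n_clusters // 2))
        if abs(d) / se > alpha_z:                              # normal approximation
            hits += 1
    return hits / reps

# Same total N, different allocation: more clusters usually buys
# more power than more members per cluster.
print("20 clusters x 10:", power_sim(20, 10, effect=0.6))
print("40 clusters x  5:", power_sim(40,  5, effect=0.6))
```

Playing with `n_clusters`, `n_per`, `tau`, and `effect` like this answers the sample-size question for your own design far better than any generic rule of thumb could.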