Tae-Yeol, this cannot be answered easily. It depends on your design: on the number of levels, the number of level-1 subjects, the number of level-2 groups, the number and size of the random and fixed effects, and so on.
A good starting point is William Browne's (University of Nottingham) PowerPoint presentation "Sample size calculations in multilevel modelling". Just search the internet. He calculates power for different parameters and provides some very useful explanations.
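Power for designs like these is often estimated by simulation: simulate data from the multilevel model you have in mind, fit it repeatedly, and count how often the effect of interest comes out significant. Here is a minimal Python sketch for a two-level random-intercept model; all parameter values (30 groups of 20, slope 0.3, etc.) are illustrative assumptions, not taken from Browne's slides.

```python
# Minimal simulation-based power sketch for a two-level random-intercept
# model: y_ij = b0 + b1*x_ij + u_j + e_ij. All parameter values are
# illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

def simulate_power(n_groups=30, group_size=20, b1=0.3,
                   sd_u=0.5, sd_e=1.0, alpha=0.05, n_sims=200):
    """Estimate power for the fixed slope b1 by repeated simulation."""
    hits = 0
    for _ in range(n_sims):
        g = np.repeat(np.arange(n_groups), group_size)   # group labels
        x = rng.normal(size=g.size)                      # level-1 predictor
        u = rng.normal(scale=sd_u, size=n_groups)[g]     # random intercepts
        y = 1.0 + b1 * x + u + rng.normal(scale=sd_e, size=g.size)
        data = pd.DataFrame({"y": y, "x": x, "g": g})
        fit = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
        hits += fit.pvalues["x"] < alpha                 # slope significant?
    return hits / n_sims

print(simulate_power())  # rerun with different n_groups vs. group_size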
Tom Snijders provides some formulae for gaining insight into the design aspects that most influence standard errors and power. His book chapter is just great:
Search for his book chapter on this page, listed as: Power and Sample Size in Multilevel Linear Models (Volume 3, pp. 1570–1573).
This is the full reference:
Snijders, T. A. B. (2005). Power and sample size in multilevel linear models. In B. S. Everitt & D. C. Howell (Eds.), Encyclopedia of Statistics in Behavioral Science (Volume 3, pp. 1570–1573). Chichester: Wiley.
There are other nice articles on this subject, but these are a good starting point.
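To give a flavour of the kind of insight Snijders' formulae provide, the best-known quantity in this literature is the design effect for two-level data, deff = 1 + (n − 1)ρ, where n is the group size and ρ the intraclass correlation: clustered observations are "worth" fewer independent ones. A quick sketch (the numbers are illustrative):

```python
# Design effect for two-level data: with group size n and intraclass
# correlation rho, N clustered observations carry roughly the information
# of N / (1 + (n - 1) * rho) independent ones.
def effective_sample_size(n_groups, group_size, rho):
    deff = 1 + (group_size - 1) * rho          # design effect
    return n_groups * group_size / deff

# e.g. 30 groups of 20 with rho = 0.10 act like ~207 independent cases:
print(effective_sample_size(30, 20, 0.10))     # -> 206.9
```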
Snijders (2005) addressed this somewhat in the Encyclopedia of Statistics in Behavioral Science with the article "Power and sample size in multilevel linear models".
More specifically, this was addressed by Hoenig and Heisey (2001) in The American Statistician with "The abuse of power: The pervasive fallacy of power calculations for data analysis".
Doing a power analysis after you have collected the data is frowned upon, or even regarded as cheating/unethical, because you can set the constraints to match what you have already collected and found.
Meinck, S., & Vandenplas, C. (2012). Sample size requirements in HLM: An empirical study - The relationship between the sample sizes at each level of a hierarchical model and the precision of the outcome model. IERI Monograph Series, Special Issue 1. The Netherlands: Educational Testing Service and International Association for the Evaluation of Educational Achievement.
Just to elaborate on Kelvin's and others' comments: post-hoc power can be defined in various ways, but if you define power as the probability of detecting a true effect, then post-hoc power is 0 (for non-significant tests) and 1 (for significant tests).
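The closely related point made by Hoenig and Heisey (cited above) is that "observed power", obtained by plugging the observed effect back into the power formula, is just a deterministic transformation of the p-value and therefore adds no information. A minimal sketch for a two-sided z test (the z-test framing is my simplification):

```python
# "Observed power" for a two-sided z test, treating the observed effect
# as if it were the true effect - the move Hoenig & Heisey criticise.
from scipy.stats import norm

def observed_power(p, alpha=0.05):
    z_obs = norm.isf(p / 2)           # |z| implied by the two-sided p-value
    z_crit = norm.isf(alpha / 2)      # critical value, e.g. 1.96
    return norm.cdf(z_obs - z_crit) + norm.cdf(-z_obs - z_crit)

for p in (0.001, 0.01, 0.05, 0.20, 0.50):
    print(f"p = {p:.3f} -> observed power = {observed_power(p):.3f}")
```

A test that is significant at exactly p = 0.05 always has observed power of about 0.5, whatever the study, which is why reporting it tells a reviewer nothing the p-value did not.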
Post-hoc power can also be the power to detect an effect in a near-identical replication, had certain characteristics of the study been different. This sort of counterfactual reasoning can be useful for understanding the power to detect an effect in studies similar to the one you ran, or for planning future studies. However, it is a rather dangerous procedure, particularly if used to reason about a completed study. It needs to be done carefully, and it is usually useful to plot power as a function of various parameters rather than report a point estimate (which is open to major abuse).
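To illustrate the "plot power as a function of various parameters" suggestion, here is a sketch using statsmodels' power routines for a plain two-sample t test; the effect sizes and the 80% reference line are conventional illustrative choices, and for a multilevel design you would substitute simulation as sketched earlier in the thread.

```python
# Power curves as a function of sample size for several effect sizes,
# rather than a single post-hoc point estimate.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
nobs = np.arange(10, 201, 5)                    # per-group sample sizes
for d in (0.2, 0.5, 0.8):                       # small / medium / large d
    power = analysis.power(effect_size=d, nobs1=nobs, alpha=0.05)
    plt.plot(nobs, power, label=f"d = {d}")
plt.axhline(0.8, linestyle="--", color="grey")  # conventional 80% target
plt.xlabel("n per group")
plt.ylabel("power")
plt.legend()
plt.show()
```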
One reviewer asked me to report post-hoc power. So, is it unethical to report post-hoc power? All the estimates are significant, but I have many control variables in the equation. After I delete several control variables (that are highly correlated with the key variables), the estimates are still significant. Anyway, do I still need to report post-hoc power in this case?
If you want to know whether you NEED to do it for the revision (as opposed to whether you should, which is what has been discussed here), I recommend, after reading through what has been said here and the linked sources, briefly writing down your reasons for not doing post-hoc power and sending these to the editor (don't just say "people on ResearchGate said so"). Only the editor can say whether you need to do it for acceptance. Most (not all) editors are sensible and won't enforce reviewers' requests if the author makes valid points. Editors often do not agree with all of the reviewers' points, but don't make this clear in their letter. If the editor says you have to do something that you don't feel comfortable doing, you can either just do it (I did this once when the editor said to write p
Thanks, that makes sense. I don't want to engage in cheating, but I need to be wise to avoid unnecessary conflict with the editor and reviewers. Thanks.
I recommend you also consider the following article: Hoenig, J. M., & Heisey, D. M. (2001). The abuse of power: The pervasive fallacy of power calculations for data analysis. The American Statistician, 55(1), 19–24. These authors strongly recommend against using post-hoc power analyses.