That question is very difficult to answer. If there are no previous studies in a similar area that calculated required sample sizes, you will benefit from doing a small pilot study before the actual sampling takes place. With this pilot study you will be able to estimate the sample size needed through power analysis.
If you just need to estimate mean cover with a certain confidence, you need to have an idea of the variance of your data. Then you can estimate the n needed to say that "the mean cover is X with 95% confidence", based on the central limit theorem (more variation requires more quadrats, but keep in mind that the size of the quadrats also affects variation, so 1x1m quadrats will be more variable than 4x4m quadrats).
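As a rough illustration of that calculation, here is a minimal sketch in Python (the pilot standard deviation and the target margin of error below are made-up example values, not recommendations):

```python
import math
from scipy.stats import norm

# Hypothetical pilot values -- replace with estimates from your own pilot data.
pilot_sd = 12.0        # standard deviation of % cover among pilot quadrats
margin_of_error = 5.0  # we want the mean estimated within +/- 5% cover
confidence = 0.95

z = norm.ppf(1 - (1 - confidence) / 2)  # ~1.96 for 95% confidence

# Choose n so that the CI half-width, z * sd / sqrt(n), is <= the margin of error.
n = math.ceil((z * pilot_sd / margin_of_error) ** 2)
print(f"Quadrats needed: {n}")  # ~23 with these example numbers
```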
For hypothesis testing, you need to have an idea about:
- the variation among sampling units
- the confidence level or alpha at which you will test for differences in coral cover (usually 95%, i.e. alpha = 0.05, by convention; the higher the confidence level, the more quadrats you need)
- the statistical test (to detect differences in mean cover, something like a t-test would do)
- the ecologically relevant magnitude of difference (effect size) that you want to be able to detect with the survey (you need more quadrats to detect a difference of 1% cover than to detect a difference of 50% cover)
- the statistical power you need (how confident you want to be of detecting a real difference if one exists, i.e. of correctly rejecting a false null hypothesis).
This doesn't have to be ultra precise, but the more information you can have in advance (pilot studies, literature), the better.
Play around with some free software like G*Power (http://www.gpower.hhu.de/en.html) and try to estimate missing parameters to have an idea about the sample size needed. It is better to plan carefully than to find out later that you should have increased your sampling effort.
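If you prefer to script it rather than use G*Power, here is a minimal sketch of the same kind of a priori power analysis using Python's statsmodels (the effect size, alpha and power below are placeholder values; substitute numbers justified by your pilot data or the literature):

```python
from statsmodels.stats.power import TTestIndPower

# Placeholder inputs -- replace with values from your pilot study or the literature.
effect_size = 0.5  # Cohen's d: (difference in mean cover) / pooled standard deviation
alpha = 0.05       # significance level
power = 0.80       # probability of detecting a true difference of that size

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha,
                                          power=power,
                                          alternative='two-sided')
print(f"Quadrats needed per site/group: {n_per_group:.0f}")  # ~64 with these numbers
```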
I completely agree with Miguel. An a priori power analysis would be the best way to go. If there is lots of literature on the area, you can estimate the variance and a biologically relevant effect size from previous papers. If not, this is going to take two things: a pilot study and a lot of critical thinking. It is important not to blindly choose an effect size that you want to detect (much like we often blindly choose an alpha of .05 and a power of .8), but to REALLY think about what is biologically meaningful to the organism or system being tested. Delving deep into the literature is crucial here and, if there are no data on appropriate effect sizes, you will need to think hard about what size of effect is REALLY important to your question (or you could consider testing a number of different effect sizes, to establish this practice and baseline data within the scientific literature relating to these types of studies).
There are no previous studies related to my area, and I am focusing on coral reefs (sorry, I didn't mention that). A previous study only lists the different types of corals found in this region. The corals here vary in size from 10 cm to 50 cm (horizontal width) and are of isolated, sub-massive and encrusting types, found in patches.
I agree with Morgan. I am using 30 and 50 m transects and sampling 60 and 100 points (respectively) per transect, for reef studies in Camiguin and kelp forest studies in British Columbia. The variance in benthic cover data is much less than in previous studies I have done using randomly located 1m x 1m quadrats.
Regardless of your method, a pilot study to collect data and calculate variance, followed by a power analysis to determine sample size (as per Miguel and Jeff above), is definitely the way to go.
I also agree with Morgan and do favour transects over quadrats to estimate assemblage cover, but 1 m x 1 m photo quadrats do have their advantages over video transects as well. Still images are now high resolution, and it is far easier to identify organisms to species level with greater confidence from still images than from video footage. Newer methods use repeated still photographs (2 shots per second) along transects, which can be analysed with free software such as CPCe or Squidle, available online at http://squidle.acfr.usyd.edu.au/
I have compared both video and still quadrat methods and have found utility in both for assessing different assemblages, but find still images better for quantifying assemblages to coral species level in less complex environments. To overcome increased variance, you can use a stratified random sampling technique to assign your points to your images, which reduces clustering of points within any one part of the image.
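As a minimal sketch of that stratified random point assignment (the image size, grid dimensions and function name below are illustrative assumptions, not any particular software's method): divide each image into a grid of cells and drop one random point per cell.

```python
import numpy as np

def stratified_random_points(img_width, img_height, n_cols, n_rows, rng=None):
    """Place one uniformly random point inside each cell of an n_cols x n_rows grid."""
    rng = np.random.default_rng() if rng is None else rng
    cell_w = img_width / n_cols
    cell_h = img_height / n_rows
    points = []
    for row in range(n_rows):
        for col in range(n_cols):
            x = rng.uniform(col * cell_w, (col + 1) * cell_w)
            y = rng.uniform(row * cell_h, (row + 1) * cell_h)
            points.append((x, y))
    return points

# Example: 25 points on a 2000 x 2000 pixel still image (5 x 5 grid of cells).
pts = stratified_random_points(2000, 2000, 5, 5)
```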
I agree with the above: you should complete a pilot study in the part of your study region with the greatest heterogeneity, and use it to determine your survey design and sampling methods via power analysis.
I have nothing to add that would be of advantage to Deepthi here. Merely to thank her for asking this question, and my deepest gratitude to those answering it. Thanks once again for clearing up this muddle for me.