I have been computing pairwise Fst for a number of populations from a large SNP dataset, and it has got me thinking about the methods we use to assess the significance of these values.

The R packages I have found that do this efficiently for many SNPs (StAMPP & diveRsity) seem to use bootstrapping over loci to generate a confidence interval around the observed Fst. You can then check whether the CI includes 0 and use that to decide whether your marker set is reliably estimating Fst for a given population pair.
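To make the bootstrap concrete, here is a minimal sketch of the idea (not what StAMPP or diveRsity actually do internally). I'm assuming a genotype matrix coded 0/1/2 with individuals in rows and SNPs in columns, a toy multilocus estimator based on Hudson's Fst (as presented in Bhatia et al. 2013), and made-up function names (hudson_fst, boot_fst_ci). Loci are resampled with replacement and the estimator recomputed each time to get a percentile CI:

# Toy multilocus Fst (Hudson's estimator, "ratio of averages") for two populations.
# geno: matrix of 0/1/2 genotypes, individuals in rows, SNPs in columns (no NAs).
# pop:  vector/factor of population labels, one per row of geno.
hudson_fst <- function(geno, pop, popA, popB) {
  gA <- geno[pop == popA, , drop = FALSE]
  gB <- geno[pop == popB, , drop = FALSE]
  nA <- 2 * nrow(gA)               # number of allele copies sampled per locus
  nB <- 2 * nrow(gB)
  pA <- colMeans(gA) / 2           # alternate-allele frequency per locus
  pB <- colMeans(gB) / 2
  num <- (pA - pB)^2 - pA * (1 - pA) / (nA - 1) - pB * (1 - pB) / (nB - 1)
  den <- pA * (1 - pB) + pB * (1 - pA)
  sum(num) / sum(den)              # multilocus estimate as ratio of sums
}

# Bootstrap over loci: resample SNP columns with replacement, recompute Fst,
# and take the percentile confidence interval.
boot_fst_ci <- function(geno, pop, popA, popB, n_boot = 1000, level = 0.95) {
  obs <- hudson_fst(geno, pop, popA, popB)
  boots <- replicate(n_boot, {
    loci <- sample(ncol(geno), replace = TRUE)   # resample loci, same number
    hudson_fst(geno[, loci, drop = FALSE], pop, popA, popB)
  })
  alpha <- (1 - level) / 2
  c(fst = obs, quantile(boots, c(alpha, 1 - alpha)))
}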

I am curious why bootstrapping over loci is preferred over permuting individuals between the target populations. Permuting individuals would seem to be a way to generate a null distribution against which you could compare your observed Fst to compute a p-value. I haven't found any references or software that do this, so I wonder if I am missing something.
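For comparison, here is a minimal sketch of the pairwise permutation test I have in mind, reusing the hypothetical hudson_fst() helper from the sketch above (perm_fst_pair is a made-up name, not taken from any package). Individuals are shuffled between the two populations being compared, and the one-sided p-value is the proportion of permuted Fst values at least as large as the observed one:

# Pairwise permutation test: shuffle individuals between the two populations,
# recompute Fst each time, and compare the observed value to the null distribution.
perm_fst_pair <- function(geno, pop, popA, popB, n_perm = 999) {
  keep <- pop %in% c(popA, popB)
  g <- geno[keep, , drop = FALSE]
  p <- factor(pop[keep])
  obs <- hudson_fst(g, p, popA, popB)
  null <- replicate(n_perm, {
    hudson_fst(g, sample(p), popA, popB)   # permute population labels
  })
  p_value <- (sum(null >= obs) + 1) / (n_perm + 1)
  c(fst = obs, p = p_value)
}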

So far the only suggestion I have found for doing it this way was in a reply from the author of the adegenet package (http://lists.r-forge.r-project.org/pipermail/adegenet-forum/2011-February/000214.html). That approach permutes individuals across all populations in the dataset to generate a null distribution of Fst under panmixia. An alternative I was considering is to permute individuals two populations at a time, giving a null distribution for each population pair that can be compared to the corresponding pairwise Fst.
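For the first (panmixia) scheme, a rough sketch, again reusing the hypothetical hudson_fst() from above and a made-up name (perm_fst_global): all individuals are relabelled at random across the whole dataset, every pairwise Fst is recomputed, and each pair accumulates its own null distribution. The pairwise scheme is just perm_fst_pair() above applied to each pair separately.

# Global permutation under panmixia: shuffle labels across ALL populations,
# keeping the original group sizes, then recompute every pairwise Fst.
perm_fst_global <- function(geno, pop, n_perm = 999) {
  pops  <- levels(factor(pop))
  pairs <- combn(pops, 2, simplify = FALSE)
  obs <- sapply(pairs, function(pr) hudson_fst(geno, pop, pr[1], pr[2]))
  exceed <- numeric(length(pairs))
  for (i in seq_len(n_perm)) {
    shuffled <- sample(pop)                    # relabel everyone at random
    perm <- sapply(pairs, function(pr) hudson_fst(geno, shuffled, pr[1], pr[2]))
    exceed <- exceed + (perm >= obs)
  }
  data.frame(pop1 = sapply(pairs, `[`, 1),
             pop2 = sapply(pairs, `[`, 2),
             fst  = obs,
             p    = (exceed + 1) / (n_perm + 1))
}

The key difference between the two is the null hypothesis: the global shuffle tests each pair against panmixia across the whole dataset, whereas the pairwise shuffle conditions only on the two populations being compared.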

Does anyone out there have any thoughts on these methods and the benefits/pitfalls of each?

StAMPP: http://onlinelibrary.wiley.com/doi/10.1111/1755-0998.12129/abstract

diveRsity: http://dx.doi.org/10.1111/2041-210X.12067
