Have a very clear null hypothesis, and only permute within the limits of that null hypothesis.
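As an illustration of permuting only within the limits of the null, here is a minimal sketch (with hypothetical data and a blocked design I made up for the example) where labels are shuffled only within each block, so the permutation scheme matches a null of exchangeability within blocks rather than across the whole data set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical blocked data: each row is one block, columns are the two
# treatments measured within that block.
values = np.array([[4.1, 5.0],
                   [3.8, 4.6],
                   [4.4, 4.9],
                   [4.0, 5.2]])

def within_block_stat(vals):
    # Test statistic: mean within-block difference (treatment B - treatment A).
    return np.mean(vals[:, 1] - vals[:, 0])

observed = within_block_stat(values)

n_rand = 50_000
count = 0
for _ in range(n_rand):
    # Under this null, labels are exchangeable only *within* a block,
    # so we flip the two columns independently for each block.
    flips = rng.integers(0, 2, size=values.shape[0]).astype(bool)
    permuted = values.copy()
    permuted[flips] = permuted[flips][:, ::-1]
    if abs(within_block_stat(permuted)) >= abs(observed):
        count += 1

p_value = (count + 1) / (n_rand + 1)   # add-one correction keeps p > 0
print(f"estimated two-sided p = {p_value:.4f}")
```

With only four blocks the 2^4 = 16 possible flips could be enumerated exactly; the random version is shown only to match the randomization approach discussed below.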
I am a bit bothered that you have chosen a permutation test. A full permutation test is only practical when the number of possible permutations is small. By the time you get to 10^8 permutations it takes too long for most desktop computers to run; it is more efficient to do a randomization test with 50,000 to 100,000 randomizations. That tends to give a fairly stable result. By "stable" I mean that when you plot a histogram of the outcome at a scale relevant to a typical journal figure, the difference between one run of 50,000 and another run of 50,000 is hard (or impossible) to detect. A better criterion would be to look at the variability in the estimated p-value across 1000 sets of 50,000 randomizations, and then decide whether the p-value is pinned down precisely enough by that number of randomizations to justify any claim of significance.
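Here is a minimal sketch of that kind of randomization test, assuming a two-sample design and a difference-in-means statistic (the data and group sizes are hypothetical). It runs one estimate from 50,000 randomizations and then repeats the estimate a handful of times to give a feel for the Monte Carlo variability; the full criterion described above would use 1000 repeats, which is slower but no different in structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-sample data (the real data would come from the experiment).
group_a = np.array([12.1, 14.3, 13.0, 15.2, 12.8, 14.0])
group_b = np.array([15.9, 16.4, 14.8, 17.1, 16.0, 15.5])

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
observed = group_a.mean() - group_b.mean()

def randomization_p(n_rand):
    """Estimate a two-sided p-value from n_rand random relabelings."""
    count = 0
    for _ in range(n_rand):
        shuffled = rng.permutation(pooled)
        stat = shuffled[:n_a].mean() - shuffled[n_a:].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return (count + 1) / (n_rand + 1)

# One run of 50,000 randomizations.
print("p ~", randomization_p(50_000))

# Gauge the run-to-run variability by repeating the estimate.
repeats = [randomization_p(50_000) for _ in range(20)]
print("spread across repeats:", min(repeats), "to", max(repeats))
```

If the spread across repeated runs is wide relative to your significance threshold, either increase the number of randomizations or stop short of a firm claim.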
If you only have 3 replicates per treatment, then the whole computer-intensive method is questionable. I have seen suggestions of 50 to 100 replicates as a minimum for this type of test; I have used 20 replicates per treatment. On one hand, it is difficult to clearly identify the underlying distribution from a random sample of 100 (let alone 20). On the other hand, there is plenty of published research that uses sample sizes under 10. If one is willing to claim that a sample of 6 accurately represents the underlying population under the assumption that the distribution is Gaussian, then I see no reason why I cannot say the same for my sample of 20 and drop the assumption of normality. One can continue this argument back down to 3 replicates, but as the sample size gets smaller, your results become ever more an artifact of which three samples you happened to acquire in each treatment.
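To make the n = 3 limitation concrete, here is a sketch (with hypothetical numbers) that enumerates the exact permutation distribution for 3 replicates per treatment. With only C(6,3) = 20 distinct labelings, the attainable p-values are very coarse:

```python
import numpy as np
from itertools import combinations

# Hypothetical data: 3 replicates per treatment.
group_a = np.array([4.2, 4.8, 5.1])
group_b = np.array([6.0, 6.3, 5.7])

pooled = np.concatenate([group_a, group_b])
observed = group_a.mean() - group_b.mean()

# Exact permutation distribution: every way to assign 3 of the 6 values
# to treatment A. With 3 vs 3 there are only C(6,3) = 20 distinct labelings.
stats = []
for idx in combinations(range(len(pooled)), len(group_a)):
    mask = np.zeros(len(pooled), dtype=bool)
    mask[list(idx)] = True
    stats.append(pooled[mask].mean() - pooled[~mask].mean())

stats = np.array(stats)
p_two_sided = np.mean(np.abs(stats) >= abs(observed))
print(f"{len(stats)} labelings; exact two-sided p = {p_two_sided:.3f}")
# The finest granularity possible here is 1/20 = 0.05 one-sided, so any
# significance claim from 3 replicates rests almost entirely on which
# three samples happened to be drawn in each treatment.
```

Note that with so few distinct labelings there is no point in random sampling of permutations at all; the exact distribution is available, and its coarseness is the real problem.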