I am planning a functional response experiment with an invasive fish species and a native comparator. I am using 2 prey types at 6 densities, giving 24 individual treatments (2 fish species × 2 prey species × 6 densities), each replicated 3 times. We only have 18 fish of each species, and we want to compare consumption between the fish species and between the prey species. To avoid time confounds, we have randomised all treatments for a single replicate and plan to repeat this list 3 times for the 3 replicates. Between replicates we allow the fish 3 days of recovery to minimise any learned response towards certain prey types. Because we use the same number of fish of each species, every individual fish has an equal chance of being selected, which should "even out" the effect of re-using fish.
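For concreteness, here is a minimal sketch of how such a randomised schedule could be generated. The prey labels and density values are hypothetical placeholders, and this version re-shuffles the run order independently within each replicate (a slight variant of repeating one fixed randomised list), so that run position is not tied to the same treatment in every replicate:

```python
import itertools
import random

random.seed(1)  # reproducible schedule

fish_species = ["invasive", "native"]
prey_types = ["preyA", "preyB"]        # hypothetical prey labels
densities = [2, 4, 8, 16, 32, 64]      # hypothetical density levels

# All 24 treatment combinations (2 fish x 2 prey x 6 densities)
treatments = list(itertools.product(fish_species, prey_types, densities))
assert len(treatments) == 24

# Re-randomise the run order within each of the 3 replicates
schedule = []
for rep in range(1, 4):
    order = treatments[:]
    random.shuffle(order)
    for fish_sp, prey, dens in order:
        schedule.append((rep, fish_sp, prey, dens))

print(len(schedule))  # 72 trials in total
```

Individual fish would then be drawn at random from the 18 available per species for each trial, which is the re-use the question below is about.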

My question boils down to this: is it better to fully randomise each replicate, so that fish are inevitably re-used between replicates but in a random way, or to run all replicates of a single fish species/prey type combination at once and work through the species/prey combinations sequentially, which introduces time as a confound and reduces our ability to compare the fishes' consumption of the 2 prey species?
