I have a 30-year dataset of fish sampling. I know from previous analyses that there is a huge amount of variation between years in catch rates. What I would like to do is create a plot that shows, for a single species, the average catch rate for each month that was sampled, after removing this inter-year variation.

I started by just taking the average for each month and using that, but the amount of inter-year variation creates huge error bars, so it's not clear that anything is happening. I thought about scaling everything so that all within-year catch data was presented as a proportion of the maximum monthly catch rate, so trends would appear as percentage changes rather than absolute values. However, in a given year samples weren't collected for every single month, so I don't think this would work.

Is there a statistical test that would allow me to do this? I thought about a GLMM with year as a random effect, but I don't anticipate a linear increase in catch rates over time. Would an ANOVA with year as a random effect and month as a categorical variable work?
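For what it's worth, a mixed model with month as a fixed categorical effect and year as a random intercept would do roughly what you describe: the random intercept absorbs the between-year differences, and the month coefficients give you adjusted monthly means you can plot. Below is a minimal sketch of that structure using Python's statsmodels; the column names ("catch", "month", "year") and the file name are assumptions for illustration, not your actual data.

```python
# Sketch (assumed column names, not the poster's actual data): month as a fixed
# categorical effect, year as a random intercept, via statsmodels MixedLM.
import pandas as pd
import statsmodels.formula.api as smf

# Assume one row per sample with columns "catch", "month", "year".
df = pd.read_csv("catch_data.csv")

# Linear mixed model: C(month) treats month as categorical (no linear trend
# across months assumed); groups=year fits a random intercept per year,
# soaking up the inter-year variation in overall catch level.
model = smf.mixedlm("catch ~ C(month)", data=df, groups=df["year"])
result = model.fit()
print(result.summary())

# The fixed-effect coefficients for C(month) are the month effects relative to
# the reference month; these (with their standard errors) are what you would
# plot as the "year-adjusted" monthly pattern.
print(result.fe_params)
```

Note this sketch fits a Gaussian linear mixed model; since catch rates are typically counts or rates, a Poisson or negative binomial GLMM with the same fixed/random structure (e.g., glmer/glmmTMB in R) may be more appropriate, but the month-as-fixed, year-as-random-intercept layout is the same.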
