Because stream water quality is often strongly correlated with flow, it is common to "flow-adjust" the data before testing for time trends (e.g. with the seasonal Kendall test). This is usually done by subtracting the expected ("average") data value at each flow, typically estimated by fitting a non-parametric LOESS curve to the scatter plot of the data against flow or log flow; the residuals are then tested for trend.
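To be concrete about what I mean, here is a minimal sketch of that adjustment in Python, using statsmodels' LOWESS. The log transform, the span `frac=0.5`, and the toy data are my own illustrative choices, not part of any standard:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def flow_adjust(conc, flow, frac=0.5):
    """Flow-adjusted data: residuals from a LOESS of concentration on log flow.
    `frac` is the LOESS span (a judgment call, not a standard value)."""
    logq = np.log(flow)
    # lowess returns a sorted array of [log-flow, fitted concentration]
    fitted = lowess(conc, logq, frac=frac, return_sorted=True)
    # interpolate the fitted curve back to each observation's log flow
    expected = np.interp(logq, fitted[:, 0], fitted[:, 1])
    return conc - expected  # these residuals go into the seasonal Kendall test

# toy example: monthly data with a flow-concentration relationship plus a trend
rng = np.random.default_rng(0)
n = 120
flow = rng.lognormal(mean=0.0, sigma=1.0, size=n)
trend = 0.01 * np.arange(n)
conc = 5.0 - 1.5 * np.log(flow) + trend + rng.normal(scale=0.5, size=n)
adjusted = flow_adjust(conc, flow)
```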

I have not been able to find a good justification for this practice.

This method makes me nervous, because the very notion of flow-adjustment ASSUMES that the flow-data relationship does not change with time, yet we then use the adjusted data to look for changes with time. At best, this approach is valid only under very restricted conditions, i.e. when the flow-data relationship and the time trend are additive, meaning the trend in the data is the same across the entire flow range. However, trends in waterborne contaminants are usually driven by surface, shallow-subsurface or groundwater processes, which dominate different parts of the flow range, rather than all three acting at once; a trend confined to one pathway would therefore violate the additivity assumption.
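One rough way to probe this concern (a sketch under my own assumptions, not a standard diagnostic) is to fit the LOESS curve separately to the early and late halves of the record and compare the two curves across the flow range; a difference that itself varies with flow would suggest the flow-data relationship is changing, i.e. the additivity assumption fails. The helper names and the 50/50 split below are illustrative only:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def loess_curve(conc, flow, grid, frac=0.5):
    """Fit a LOESS of concentration on log flow and evaluate it on `grid`."""
    fitted = lowess(conc, np.log(flow), frac=frac, return_sorted=True)
    return np.interp(np.log(grid), fitted[:, 0], fitted[:, 1])

def compare_periods(conc, flow, time, n_grid=50, frac=0.5):
    """Split the record at its median time and compare the two LOESS curves.
    Expects numpy arrays; returns the flow grid and the late-minus-early
    difference in fitted concentration at each grid point."""
    mid = np.median(time)
    grid = np.exp(np.linspace(np.log(flow.min()), np.log(flow.max()), n_grid))
    early = loess_curve(conc[time <= mid], flow[time <= mid], grid, frac)
    late = loess_curve(conc[time > mid], flow[time > mid], grid, frac)
    return grid, late - early
```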

Is flow-adjustment fundamentally flawed?
