I have noticed a rather interesting paradox. In modeling the impact of temperature on mortality, DLNMs (Distributed Lag Non-linear Models) are considered a kind of gold standard. I understand the allure: they neatly capture delayed effects, such as low temperatures increasing the spread of respiratory infections, or heatwaves in which an initial mortality spike is followed by a period of below-trendline mortality (mortality displacement).
There are a few known problems, such as DLNMs' tendency to fit in-sample mortality spikes during heatwaves very closely; when extrapolated beyond the observed temperature range, they then produce spectacularly high projections that later completely overshoot observational data.
Nevertheless, I see an even more serious problem: DLNMs assume the exposure-response relationship remains effectively constant over time or, in fancier variants, allow only modest adaptation. Crunching larger datasets on seasonality, I found consistent shifts in mortality seasonality that completely overshadow any impact of climate change. For the constancy assumption to hold, seasonality should have remained mostly stable for decades and only begun shifting in the 1970s, when the warming trend changed. Yet historical data (STL-extracted seasonal cycles) show the opposite. Any ideas? In particular, how would one recalibrate DLNMs to match the historical data?
Article Anthropocene mortality cycle convergence: Global pathogen sp...