To what extent can wavelet-based decompositions challenge or redefine the classical statistical idea of sufficiency, particularly when analyzing hierarchical or self-similar structures (e.g., in financial time series, EEG, or turbulent flows)?
Are wavelets merely efficient compression tools for variance decomposition, or do they offer a deeper epistemological shift in how we define "information" and "structure" in data?
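For concreteness, here is a minimal sketch of the kind of variance decomposition the question has in mind (all choices are illustrative assumptions: Python with NumPy and PyWavelets, a rough spectral-synthesis approximation to fractional Gaussian noise with Hurst exponent H = 0.7, and a Daubechies-4 wavelet):

```python
import numpy as np
import pywt

# Approximate fractional Gaussian noise by spectral synthesis
# (hypothetical choices: Hurst exponent H = 0.7, n = 4096 samples).
rng = np.random.default_rng(0)
n, H = 4096, 0.7
freqs = np.fft.rfftfreq(n)[1:]                      # positive frequencies only
amplitude = freqs ** (-(2 * H - 1) / 2)             # S(f) ~ f^(1-2H)  =>  amplitude ~ f^(-(2H-1)/2)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
spectrum = np.concatenate(([0.0], amplitude * np.exp(1j * phases)))
signal = np.fft.irfft(spectrum, n)

# Multilevel discrete wavelet transform with a Daubechies-4 wavelet.
level = 8
coeffs = pywt.wavedec(signal, "db4", level=level)
details = coeffs[1:]                                # [cD_level, ..., cD_1], coarsest to finest

# "Wavelet variance": mean squared detail coefficient per octave j.
# For a self-similar process this is expected to scale roughly as 2^(j(2H-1)),
# so log-variance vs. octave should be close to a straight line.
for j, d in zip(range(level, 0, -1), details):
    print(f"octave j={j:>2}: wavelet variance = {np.mean(d ** 2):.4g}")
```

In this toy setup the per-scale variances (or just the slope of log-variance against octave) summarize the self-similar structure in a handful of numbers, which is exactly the sense in which one might ask whether wavelet coefficients behave like sufficient statistics for such models, or merely like a convenient compression of them.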