AIC, BIC, and MDL judge models after fitting. But we often need a rule that works before fitting: the minimal set of genuinely informative variables needed to reach a stated accuracy. Let’s collect saturation cases: projects where adding one more variable stopped reducing the error, and, conversely, cases where a single new, independent quantity unlocked progress. Please post compact summaries: target accuracy, variables included, where improvement stalled, and why (hidden dependence, dimensional redundancy, etc.). The outcome could be a practical “required simplicity” rule of thumb for different fields: how to reach the goal with the fewest variables, without wasting effort.
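To make the kind of record I mean concrete, here is a minimal sketch (not anyone’s posted result): greedy forward selection over synthetic data, tracking AIC for nested OLS fits until adding the next variable no longer helps. The variable names, the data, and the redundant copy `x4` are all illustrative assumptions; with real projects the same skeleton could track cross-validated error against a stated accuracy target instead of AIC.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Three genuinely informative variables plus one near-duplicate (hidden dependence).
x1, x2, x3 = rng.normal(size=(3, n))
x4 = x1 + 0.01 * rng.normal(size=n)          # nearly a copy of x1
y = 2.0 * x1 - 1.0 * x2 + 0.5 * x3 + rng.normal(scale=0.5, size=n)

candidates = {"x1": x1, "x2": x2, "x3": x3, "x4": x4}

def aic_ols(cols, y):
    """AIC of an OLS fit with intercept: n*ln(RSS/n) + 2k."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

included, cols = [], []
best = aic_ols([], y)
print(f"intercept only: AIC = {best:.1f}")

# Greedy forward selection: stop when the best remaining addition no longer lowers AIC.
while len(included) < len(candidates):
    trials = {name: aic_ols(cols + [col], y)
              for name, col in candidates.items() if name not in included}
    name, score = min(trials.items(), key=lambda kv: kv[1])
    if score >= best:                          # improvement stalled: saturation point
        print(f"stalled: adding {name} would not reduce AIC")
        break
    included.append(name)
    cols.append(candidates[name])
    best = score
    print(f"added {name}: AIC = {best:.1f}")
```

Run as written, the three informative variables enter one by one and the redundant copy is rejected, which is exactly the “stall plus reason (hidden dependence)” pair a compact summary should capture.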