The NFL theorem holds for algorithms trained on a fixed training set. However, no general characterization of algorithm behaviour on expanding or open datasets has been proved yet. Could you share your opinion on this, or suggest some related papers?
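For reference, the usual formal statement (Wolpert & Macready, 1997) is an average over *all* functions between finite sets $\mathcal{X}$ and $\mathcal{Y}$: for any two algorithms $a_1$ and $a_2$,

$$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of observed cost values after $m$ distinct evaluations. The finiteness of the function space is built into the proof, which is one reason the result does not extend directly to open or growing datasets.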
I think it is very difficult to prove such general statements over all algorithms. Even when the data characteristics are allowed to change, as in the expanded or open setting, effects such as concept drift still have an impact, so any properties you establish may themselves change in the future. Since ML models are mostly applied precisely to predict this future, you should keep in mind that, in practice, the theoretically infinite space of data will never be fully available: the future, and its implications for algorithmic properties, remains hidden.
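Perhaps a toy illustration helps. Below is a minimal numpy sketch (the 1-D threshold concept, the drift point, and the sample sizes are all made up for illustration): a learner that is near-perfect on the fixed distribution it was trained on loses accuracy once the concept drifts, which is exactly why properties established on a fixed set need not transfer to open-ended future data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, boundary):
    """Binary labels from a 1-D threshold concept; `boundary` drifts over time."""
    x = rng.uniform(0, 10, size=n)
    y = (x > boundary).astype(int)
    return x, y

def fit_threshold(x, y):
    """Pick the decision threshold that maximizes training accuracy (brute force)."""
    candidates = np.sort(x)
    accs = [np.mean((x > t).astype(int) == y) for t in candidates]
    return candidates[int(np.argmax(accs))]

def accuracy(t, x, y):
    return np.mean((x > t).astype(int) == y)

# Train on "past" data where the true boundary is 5.0 ...
x_tr, y_tr = sample(500, boundary=5.0)
t = fit_threshold(x_tr, y_tr)

# ... then the concept drifts: the boundary moves to 7.0 in the "future".
x_old, y_old = sample(500, boundary=5.0)
x_new, y_new = sample(500, boundary=7.0)

print(f"learned threshold:      {t:.2f}")
print(f"accuracy, same concept: {accuracy(t, x_old, y_old):.2f}")  # close to 1.00
print(f"accuracy, after drift:  {accuracy(t, x_new, y_new):.2f}")  # roughly 0.80
```

Here roughly 20% of future points fall between the old and new boundary and get mislabeled, so the measured "property" (near-perfect accuracy) was only a property of the fixed distribution, not of the algorithm in general.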
Some further reading and discussion on the NFL with regard to ML:
- "No Free Lunch Theorem: A Review" (book chapter)
- "An Empirical Overview of the No Free Lunch Theorem and Its E..." (article)