I'd like to train a model with about 10^10 (10 billion) samples. The number of features is quite low (~100).

Do you know of any machine learning method that can deal with this amount of data (within my lifetime)?
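For concreteness, an out-of-core / streaming approach is the kind of thing I imagine might work: something like scikit-learn's `partial_fit` interface, where minibatches are read from disk one at a time instead of loading all 10^10 rows. The sketch below uses synthetic data just to illustrate the pattern (not my actual data or task):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features = 100  # roughly my feature count

# A linear model trained with SGD never needs the full dataset in memory.
clf = SGDClassifier()
classes = np.array([0, 1])  # must be declared up front for partial_fit

# Synthetic "ground truth" so the example is self-contained.
true_w = rng.normal(size=n_features)

# Stream minibatches; in practice each batch would come from disk or a database.
for _ in range(200):
    X = rng.normal(size=(1000, n_features))
    y = (X @ true_w > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)

# Evaluate on a held-out synthetic batch.
X_test = rng.normal(size=(1000, n_features))
y_test = (X_test @ true_w > 0).astype(int)
acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.3f}")
```

With ~100 features, each pass touches only one minibatch at a time, so memory stays constant regardless of total dataset size; the open question for me is whether one (or a few) passes over 10^10 rows is feasible in practice.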

EDIT:

Alternatively, do you know of any dataset-size-reduction method (subsampling, sketching, coresets, etc.) that can handle this amount of data?
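One reduction I've considered is plain uniform subsampling via reservoir sampling, since it works in a single pass over a stream of unknown length and keeps only the sample in memory. A sketch of Algorithm R (illustrative, with a small stream standing in for the real data):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniformly sample k items from a stream in one pass (Algorithm R)."""
    rnd = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a random slot with probability k / (i + 1).
            j = rnd.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Stand-in stream; the real one would be 10^10 rows read lazily.
sample = reservoir_sample(range(10**6), 1000)
print(len(sample))
```

This keeps memory at O(k), but of course it throws information away uniformly; I'd be interested in smarter reductions that preserve more of the signal.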
