If I understood the question right, there are plenty of things that fit your description. One example is random forest, where it's called "random feature selection". But I doubt that random feature selection by itself will increase accuracy. To my mind, it's done mostly to make predictions more robust, not to improve accuracy. Systematically selecting features based on good historical prediction accuracy can lead to overfitting the model.
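For concreteness, here is a minimal sketch of how that per-split feature subsampling is typically exposed, assuming scikit-learn's RandomForestClassifier (the library, the toy data, and the parameter values are my choices for illustration, not part of the original answer):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# toy data, purely for illustration
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# "random feature selection": each split considers only a random subset of
# sqrt(n_features) candidate features instead of all of them
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
clf.fit(X, y)
```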
It depends what you mean by input, and whether you are selecting samples / rows or features / columns.
For samples / rows, the word you are looking for is "bagging" - you select overlapping subsets of your samples, and train a classifier on each subset, then combine the classifiers.
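A minimal sketch of that idea, assuming scikit-learn's BaggingClassifier with decision trees as the base learner (the library, base estimator, and numbers are my assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# each tree is trained on a bootstrap sample of the rows (overlapping subsets);
# the fitted ensemble combines the individual predictions by voting
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        max_samples=0.8, bootstrap=True, random_state=0)
bag.fit(X, y)
```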
For features / columns, you are indeed looking at feature selection or feature resampling techniques, such as the random subspace method or random forest, as Alexander already mentioned.
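And a sketch of the column-wise analogue, the random subspace method, again assuming scikit-learn (BaggingClassifier reused with feature subsampling instead of row subsampling; the parameter values are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# each base learner sees all rows but only a random half of the columns
subspace = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             max_samples=1.0, bootstrap=False,
                             max_features=0.5, bootstrap_features=True,
                             random_state=0)
subspace.fit(X, y)
```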
Perhaps you are looking for the term "dimension reduction"? Ideally, you want to select a subset of dimensions that are orthogonal to each other to get the best results.
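If that is what you are after, a common starting point is PCA, whose components are orthogonal by construction. A minimal sketch, assuming scikit-learn (the component count is arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

pca = PCA(n_components=5)          # keep 5 mutually orthogonal directions
X_reduced = pca.fit_transform(X)   # shape (500, 5)
print(pca.explained_variance_ratio_)
```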
If you read further, your approach is actually an embedded/wrapper feature selection approach, which integrates a classifier/predictor into the search for and evaluation of the feature subset. The training in this case serves to assess the relevance of the features and to select the features for the subsequent step, which is the real classification.
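For illustration, here is one way a wrapper-style selection can look, assuming scikit-learn's recursive feature elimination (RFE) with logistic regression as the inner predictor; the original answer does not prescribe these particular tools:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# the inner classifier is trained repeatedly only to rank the features;
# the retained subset is then handed to the real classification step
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
X_selected = selector.transform(X)   # columns kept for the actual classifier
```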