Here are some points and references that can help you:
(1) "RF shows a strong dependence on the used dataset,
especially while the dataset is small." (M. Ließ, B. Glaser, and B. Huwe, “Uncertainty in the spatial prediction of soil texture: comparison of regression tree and
Random Forest models,” Geoderma, vol. 170, pp. 70–79, 2012.)
(2) The general aim is to choose the smallest number of predictor features that provides the best predictive result.
(3) You can use the following variable-reduction method: "Variable reduction based on Random Forest's variable importance measure is a potential way to optimize the Random Forest algorithm. Ramón and Sara proposed a method of gene selection in classification problems based on random forest [12]. The algorithm first ranks the variables (genes) according to their importance measure. Then random forests are fitted iteratively, building a new forest at each iteration after discarding the variables (genes) with the smallest variable importance; the selected set of genes is the one that yields the smallest OOB error rate. The removal of irrelevant variables may improve the performance of the algorithm upon retraining and may help improve the interpretability of the model."
Cited from Zhou, Q., Hong, W., Luo, L., & Yang, F. (2010). Gene selection using random forest and proximity differences criterion on DNA microarray data. Journal of Convergence Information Technology, 5(6), 161–170.
A sketch of this procedure follows.
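A minimal Python sketch of that elimination loop, assuming scikit-learn (the original work used R). Note that scikit-learn's OOB score is an accuracy, so keeping the highest score corresponds to keeping the smallest OOB error rate; the toy dataset and the 20% drop fraction per iteration are my own illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=30,
                           n_informative=5, random_state=0)

features = list(range(X.shape[1]))          # start with all variables
best_score, best_set = -1.0, features[:]

while len(features) >= 2:
    rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=0)
    rf.fit(X[:, features], y)
    if rf.oob_score_ > best_score:          # best variable set so far
        best_score, best_set = rf.oob_score_, features[:]
    # discard the 20% of remaining variables with the smallest importance
    order = np.argsort(rf.feature_importances_)
    n_drop = max(1, int(0.2 * len(features)))
    features = [features[i] for i in order[n_drop:]]

print(f"kept {len(best_set)} variables, OOB accuracy {best_score:.3f}")
```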
(4) RF contains several tuning parameters that control internal random processes.
Four tuning strategies for RF's parameters are suggested in
Ließ, M., Hitziger, M., & Huwe, B. (2014). The sloping mire soil-landscape of southern Ecuador: Influence of predictor resolution and model tuning on random forest predictions. Applied and Environmental Soil Science, 2014.
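I cannot summarize all four strategies here, but as a generic illustration of tuning those parameters, a cross-validated grid search over the usual candidates (forest size, feature-subset size, leaf size) might look like this in scikit-learn; this is a common approach, not the specific procedure from the cited paper:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=300, n_features=20, noise=0.5,
                       random_state=0)
param_grid = {
    "n_estimators": [100, 300, 500],    # forest size
    "max_features": [0.33, 0.5, 1.0],   # feature-subset size per split (mtry)
    "min_samples_leaf": [1, 5, 10],     # minimum leaf size
}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```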
(5) A "size sampling (weighted sampling) method" to optimize the RF method is suggested by
Bharathidason, S., & Jothi Venkataeswaran, C. (2015). Improving prediction accuracy based on optimized random forest model with weighted sampling for regression trees. Int J Comput Trends Tech, 21(1), 23-8.
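I don't know their exact algorithm, so the following is only a generic illustration of sample weighting in a random forest regressor with scikit-learn, not the method from that paper; the weighting rule here is hypothetical:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=0.5,
                       random_state=0)
# Hypothetical weights: emphasize samples with a large target magnitude.
weights = 1.0 + np.abs(y) / np.abs(y).max()

rf = RandomForestRegressor(n_estimators=300, random_state=0)
# scikit-learn multiplies each tree's bootstrap sample counts by these weights
rf.fit(X, y, sample_weight=weights)
```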
The most important parameter to tweak is the size of the feature subsets; none of the other parameters is nearly as important.
In KNIME, with the Tree Ensemble node, you can vary the parameters and check the relative importance of each of them, but I think you'll find that the feature subset size is by far the most important. A quick way to verify this is sketched below.
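Outside KNIME, the same check is easy to script. Here is a sketch in Python with scikit-learn, sweeping the feature-subset size (max_features there, which I believe corresponds to the attribute-sampling setting of the Tree Ensemble node) and reading off the OOB accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=40,
                           n_informative=8, random_state=0)

# Try several fractions of the features per split and compare OOB accuracy.
for frac in (0.1, 0.25, 0.5, 0.75, 1.0):
    rf = RandomForestClassifier(n_estimators=300, max_features=frac,
                                oob_score=True, random_state=0)
    rf.fit(X, y)
    print(f"max_features={frac:.2f}  OOB accuracy={rf.oob_score_:.3f}")
```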
I trained random forest models with different forest sizes (i.e., tree counts) and tree depths. I measured the accuracy and the memory usage as functions of these hyperparameters and plotted them in Figure 3 of the article linked below. When you compare the two diagrams in Figure 3, you can spot the optimal parameters, because after growing too big the forest tends to overfit without any additional gain in accuracy; this is explained in the article's text. Obviously, my results are bound to my dataset, but if you draw similar diagrams with your own data, you can choose the optimal parameters by the same principles; a code sketch for producing such diagrams follows the article reference.
Article: "Rigidity-Based Surface Recognition for a Domestic Legged Robot" (source of Figure 3 and the discussion above).
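A sketch for reproducing such diagrams on your own data with scikit-learn: it sweeps the forest size and tree depth, recording test accuracy and a crude memory proxy (the pickled model size). The proxy is my choice for the sketch, not necessarily what the article measured:

```python
import pickle
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees in (10, 50, 100, 300):
    for depth in (5, 10, None):             # None = grow trees fully
        rf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                    random_state=0).fit(X_tr, y_tr)
        acc = rf.score(X_te, y_te)
        size_kb = len(pickle.dumps(rf)) / 1024   # rough memory footprint
        print(f"trees={n_trees:4d} depth={str(depth):>4} "
              f"accuracy={acc:.3f} size={size_kb:8.1f} KiB")
```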