Data overfitting is a common problem in FIS parameter optimization. When overfitting occurs, the tuned FIS produces optimized results for the training data set but performs poorly on a test data set. Due to overtuning, the optimized FIS parameter values pick up noise from the training data set and lose the ability to generalize to new data. The gap between training and test performance grows as the bias of the training data set increases. To overcome data overfitting, the tuning process can stop early based on an unbiased evaluation of the model using a separate validation data set.
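A minimal sketch of that early-stopping idea, assuming a generic derivative-free tuner rather than any particular toolbox routine: `loss_fn(params, X, y)` is a hypothetical user-supplied function that builds the FIS from a parameter vector (e.g., membership-function parameters) and returns its error on the given data, and the perturbation step, patience, and function names here are all assumptions for illustration.

```python
import numpy as np

def tune_fis_with_early_stopping(
    init_params,           # initial FIS parameter vector
    loss_fn,               # hypothetical: loss_fn(params, X, y) -> float
    train_data, val_data,  # (X, y) tuples for training and validation
    max_iters=500, step=0.05, patience=20, seed=0,
):
    """Hill-climbing tuner that stops early when validation loss
    stops improving, guarding against overfitting the training set."""
    rng = np.random.default_rng(seed)
    params = np.asarray(init_params, dtype=float)
    train_loss = loss_fn(params, *train_data)
    best_params = params.copy()
    best_val = loss_fn(params, *val_data)
    stall = 0
    for _ in range(max_iters):
        # Propose a small random perturbation of the FIS parameters.
        candidate = params + rng.normal(scale=step, size=params.shape)
        cand_train = loss_fn(candidate, *train_data)
        if cand_train < train_loss:        # accept if training loss improves
            params, train_loss = candidate, cand_train
            val_loss = loss_fn(params, *val_data)
            if val_loss < best_val:        # track best-generalizing parameters
                best_val, best_params = val_loss, params.copy()
                stall = 0
            else:
                stall += 1
                if stall >= patience:      # validation stopped improving: stop
                    break
    return best_params, best_val
```

The key point is that acceptance is driven by training loss, but the returned parameters are the ones that performed best on the held-out validation set, so continued training-set improvement after the validation loss plateaus is discarded.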

How can such parameter optimization of a fuzzy inference system (FIS) be done?
