I have three datasets and four classifiers, and I used the Weka Experimenter to run all the classifiers on the three datasets in one go.

When I analyze the results, taking classifier (1) as the base classifier, I see:

Dataset           (1) functions.Linea | (2) functions.SM   (3) meta.Additiv   (4) meta.Additiv
----------------------------------------------------------------------------------------------
'err_all'  (100)   65.53( 9.84)       |   66.14( 9.63)       65.53( 9.84) *     66.14( 9.63)
'err_less' (100)   55.24(12.54)       |   62.07(18.12) v     55.24(12.54) v     62.08(18.11) v
'err_more' (100)   73.17(20.13)       |   76.47(16.01)       73.17(20.13) *     76.47(16.02)
----------------------------------------------------------------------------------------------
                            (v/ /*)   |       (1/2/0)            (1/0/2)            (1/2/0)

As far as I know:

v - indicates that the result is significantly higher/better than the base classifier's

* - indicates that the result is significantly lower/worse than the base classifier's
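To check my understanding of the (v/ /*) summary row, I tried tallying the markers myself. This is just a quick sketch; the marker lists are copied by hand from the Analyze output above (an empty string means no significance marker on that row):

```python
# Tally Weka's per-row significance markers for each non-base classifier
# column to reproduce the (v/ /*) summary row: (wins / ties / losses)
# relative to the base classifier. Rows are err_all, err_less, err_more.
markers = {
    "(2) functions.SM": ["", "v", ""],
    "(3) meta.Additiv": ["*", "v", "*"],
    "(4) meta.Additiv": ["", "v", ""],
}

for clf, marks in markers.items():
    wins = marks.count("v")    # significantly higher than base
    ties = marks.count("")     # no significant difference
    losses = marks.count("*")  # significantly lower than base
    print(f"{clf}: ({wins}/{ties}/{losses})")
# prints:
# (2) functions.SM: (1/2/0)
# (3) meta.Additiv: (1/0/2)
# (4) meta.Additiv: (1/2/0)
```

The counts do match the (v/ /*) row, so the summary itself is consistent with the markers; my confusion is about what the markers mean when the metric is an error.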

Running multiple classifiers on a single dataset is easy to interpret, but now, with multiple datasets, I cannot tell which classifier is better or worse, because the marked values do not seem to match that interpretation.

Can someone please help me interpret the above result? I want to find which classifier performs best, and on which dataset.

Also, what does the (100) next to each dataset indicate: 'err_all' (100), 'err_less' (100), 'err_more' (100)?
