I am trying to learn algorithms that extract decision rules from data sets, particularly ones based on rough sets. Are such algorithms (for example, LEM2) reliable and efficient on real datasets?
In my opinion, working with real datasets is very good for validating how the system behaves in real environments. You can then compare your system's reactions with what actually happens in real time.
I hope this is clear and helpful. @Tanzeela Shaheen
My experience shows that different algorithms, and the corresponding models, produce the best results on different data. Therefore, you should always have several models at hand and choose the best one. You can try to predict which model will perform best, or you can combine the results of different models in a weighted way. Moreover, there are different criteria for evaluating model quality. I use van Rijsbergen's F-measure, though not the original one but my own generalization, which is invariant with respect to sample size and extended to the multiclass and fuzzy cases. All of this is in my work here on RG.
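To make the weighted-combination idea concrete, here is a minimal sketch in Python. The models, dataset, and weighting scheme are illustrative placeholders, and scikit-learn's standard macro-averaged F1 stands in for the generalized F-measure mentioned above, which is not implemented here:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [DecisionTreeClassifier(random_state=0), GaussianNB()]
scores, probas = [], []
for m in models:
    m.fit(X_tr, y_tr)
    # Macro-averaged F1 as a multiclass quality criterion
    # (the classical F-measure, not the answer's generalization).
    scores.append(f1_score(y_te, m.predict(X_te), average="macro"))
    probas.append(m.predict_proba(X_te))

# Weight each model's class probabilities by its F1 score and combine.
weights = np.array(scores) / np.sum(scores)
combined = sum(w * p for w, p in zip(weights, probas))
ensemble_pred = np.argmax(combined, axis=1)
print("per-model F1:", scores)
print("ensemble F1:", f1_score(y_te, ensemble_pred, average="macro"))
```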
There are many decision rule algorithms based on rough sets, and the best one can depend on the application. Rule algorithms cannot all be compared directly, as their domains may differ; one that is suitable for a particular application may not fare well on others. Rule reduction algorithms are also available in rough sets: one can find an optimal set of rules based on the core and reducts. There are also special models called decision-theoretic rough sets; Prof. Y. Y. Yao is a pioneer in this direction, and you can read his papers. I assure you it is a pleasure to read Prof. Yao's papers.
If you want, I can provide the details of some of his algorithms.
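For readers unfamiliar with the core/reduct idea mentioned in this answer, here is a minimal self-contained sketch on a toy decision table. The attribute names and data are hypothetical, and real rough-set toolkits (and decision-theoretic extensions) are far more elaborate:

```python
from itertools import combinations

# Toy decision table: rows of (condition attributes..., decision).
table = [
    ("high", "yes", "no",  "flu"),
    ("high", "yes", "yes", "flu"),
    ("low",  "no",  "no",  "healthy"),
    ("low",  "yes", "no",  "healthy"),
    ("high", "no",  "yes", "flu"),
]
attrs = ("temp", "headache", "cough")  # condition attributes

def positive_region(attr_idx):
    """Objects whose indiscernibility class (w.r.t. the chosen
    attributes) is consistent, i.e. has a single decision value."""
    classes = {}
    for i, row in enumerate(table):
        key = tuple(row[j] for j in attr_idx)
        classes.setdefault(key, []).append(i)
    pos = set()
    for members in classes.values():
        if len({table[i][-1] for i in members}) == 1:
            pos.update(members)
    return pos

full = positive_region(range(len(attrs)))

# A reduct is a minimal attribute subset preserving the positive region;
# enumerate subsets by size, skipping supersets of reducts already found.
reducts = []
for size in range(1, len(attrs) + 1):
    for c in combinations(range(len(attrs)), size):
        if positive_region(c) == full and \
                not any(set(r) <= set(c) for r in reducts):
            reducts.append(c)
print("reducts:", [[attrs[j] for j in r] for r in reducts])

# The core is the intersection of all reducts.
core = set(reducts[0]).intersection(*map(set, reducts))
print("core:", [attrs[j] for j in sorted(core)])
```

On this toy table the sketch finds that `temp` alone preserves the positive region, so it is both the only reduct and the core; rules can then be induced from the reduced table.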