As always, "measures" alone are not necessarily the only thing one should look at. It all depends on the context and application. That is also the reason why I am skeptical about machine learning studies that compare only certain measures. A 100% accuracy of a classifier on unseen data may mean little in scientific applications, for example.
In my opinion, machine learning research should place more emphasis on assessing methods in the context of a concrete application. In modeling and simulation the saying is "All models are wrong, but some are useful". The usefulness of a model or analysis may not be captured by any particular objective measure.
We need more work on experimental design and validation/evaluation of machine learning studies and algorithms. Spending time teasing out another 1% in accuracy or any other measure will not make a big difference.
I recommend the following papers:
Pat Langley, The changing science of machine learning, Machine Learning 82: 275-279, 2011.
Pat Langley, Machine Learning as an Experimental Science, Machine Learning 3: 5-8, 1988.
By the way, I have seen this many times when using machine learning in a biological sciences context. One frequent observation is that the highest-scoring results (by interestingness measures) are typically things that are already well known in science. The analogy I typically use to highlight this is Google result ranking: in science, people are often interested in the results that start after position 300 or 400, because the top 300 are typically things they already know (not interesting, not useful).
Let's first debate the request for optimization. I strongly believe that optimization cannot be performed on association rules, since they usually apply to large unsupervised data sets where, by the nature of the data, the optimum is unknown. Instead of optimized association rules, it seems good enough to define "provable and effective" ones.
My suggestion is to keep it as simple as possible:
(a) First do clustering, by any method of your preference.
(b) Find the most prevalent pairs of attributes within each cluster, by simple counting.
(c) Focus on associations that are unique to specific clusters, again by counting.
(d) Finally, take the findings to an expert for validation, refinement, and possible logical expansion!
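Steps (a) to (c) can be sketched in a few lines of Python. Everything here is a toy assumption for illustration: the cluster assignments are taken as given (step a, by any method you like), and the 2/3 prevalence threshold is arbitrary:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy data: each transaction is a set of attributes,
# already assigned to a cluster by any method of your choice (step a).
clusters = {
    0: [{"A", "B", "C"}, {"A", "B"}, {"A", "B", "D"}],
    1: [{"C", "D"}, {"C", "D", "E"}, {"D", "E"}],
}

# Step (b): count attribute pairs within each cluster.
pair_counts = {
    cid: Counter(p for t in txns for p in combinations(sorted(t), 2))
    for cid, txns in clusters.items()
}

# Step (c): keep pairs that are prevalent in one cluster
# (support >= 2/3) and absent from every other cluster.
unique_pairs = {}
for cid, counts in pair_counts.items():
    n = len(clusters[cid])
    for pair, c in counts.items():
        if c / n >= 2 / 3 and all(
            pair not in pair_counts[other] for other in pair_counts if other != cid
        ):
            unique_pairs.setdefault(cid, []).append(pair)

print(unique_pairs)  # step (d): hand these to a domain expert
```

On this toy data it prints `{0: [('A', 'B')], 1: [('C', 'D'), ('D', 'E')]}` : the pair (A, B) characterizes cluster 0, while (C, D) and (D, E) characterize cluster 1.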
Association rules mined without considering all itemsets (i.e., when a sampling technique is applied during mining) can have accuracy issues, so optimization is required. Hash-based techniques such as double hashing, the Apriori algorithm with a Boolean matrix, and FP-Tree can give accurate results. But using a genetic algorithm to optimize the association rules can give an optimum result even when a sampling technique is used.
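To make the genetic-algorithm idea concrete, here is a minimal sketch. The transactions, the fitness function (support times confidence), the population size, and the mutation scheme are all toy assumptions for illustration, not a reference implementation:

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical transactions over five items.
ITEMS = ["A", "B", "C", "D", "E"]
TXNS = [{"A", "B", "C"}, {"A", "B"}, {"A", "B", "D"},
        {"B", "C"}, {"A", "C", "E"}, {"A", "B", "E"}]

def fitness(rule):
    """Score a rule (antecedent set X, consequent item y) by
    support(X + y) * confidence(X -> y)."""
    ante, cons = rule
    both = sum(1 for t in TXNS if ante <= t and cons in t)
    ante_n = sum(1 for t in TXNS if ante <= t)
    return 0.0 if ante_n == 0 else (both / len(TXNS)) * (both / ante_n)

def random_rule():
    ante = frozenset(random.sample(ITEMS, random.randint(1, 2)))
    cons = random.choice([i for i in ITEMS if i not in ante])
    return (ante, cons)

def mutate(rule):
    # Replace either one antecedent item or the consequent at random.
    ante, cons = rule
    if random.random() < 0.5:
        ante = set(ante)
        ante.discard(random.choice(list(ante)))
        ante.add(random.choice([i for i in ITEMS if i != cons]))
        return (frozenset(ante), cons)
    return (ante, random.choice([i for i in ITEMS if i not in ante]))

# Evolve: keep the fittest half, refill with mutated copies.
pop = [random_rule() for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(r) for r in pop[:10]]

best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```

The same loop works when `TXNS` is only a sample of the full data set, which is the scenario described above; one would then add crossover and a more carefully chosen fitness measure.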
If we assume that an association rule is similar to a production rule, then the method of consensual production rule generation could be used to optimize association rules.
Please read this paper.
M. M. Gammoudi and Sofiane Labidi. An automatic generation of consensual rules between experts using rectangular decomposition of a binary relation. In Proc. of the XI Brazilian Symposium on Artificial Intelligence (SBIA'94), pages 441-455, Fortaleza, Brazil, October 17-20, 1994.
I agree with Dr. Edith Ohri: we cannot optimize the rules themselves. My intention was to ask how we can devise a strategy to find optimal weights for the various interestingness parameters used to filter out uninteresting association rules. Please give me some references/pointers to where such work is going on.
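One simple starting point for that question is to treat the weights as parameters and search for the combination that best reproduces an expert's interesting/uninteresting labels on a small set of rules. A minimal sketch, where the rules, their measure values (support, confidence, lift), the labels, and the search grid are all made up for illustration:

```python
from itertools import product

# Hypothetical rules with precomputed interestingness measures
# (support, confidence, lift) and an expert label (1 = interesting).
rules = [
    ("A->B", (0.60, 0.90, 1.1), 0),  # well known, hence uninteresting
    ("C->D", (0.05, 0.80, 3.0), 1),
    ("E->F", (0.02, 0.75, 4.2), 1),
    ("B->A", (0.60, 0.85, 1.1), 0),
]

def score(measures, weights):
    # Weighted sum of the interestingness measures.
    return sum(m * w for m, w in zip(measures, weights))

# Grid-search weights and a cutoff threshold: a rule is kept as
# "interesting" when its weighted score exceeds the threshold, and we
# pick the combination that agrees most with the expert's labels.
best_w, best_acc = None, -1.0
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for w in product(grid, repeat=3):
    for thr in [0.5, 1.0, 1.5, 2.0]:
        acc = sum(
            (score(m, w) > thr) == bool(lab) for _, m, lab in rules
        ) / len(rules)
        if acc > best_acc:
            best_w, best_acc = (w, thr), acc

print(best_w, best_acc)
```

On this toy set a weighting that leans on lift separates the two groups perfectly, which matches the earlier observation in this thread that high-support rules tend to be already-known facts. With more labeled rules one would replace the grid search by any standard optimizer or by a classifier over the measure values.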