I have written a paper in which I propose a new technique for integration testing. How can I evaluate my approach against existing approaches?
I would begin by identifying the key measures you want to take for both techniques. Which measures are commonly reported in the field? Based on those, run experiments that collect data on your technique and the existing technique under different conditions. Then determine whether there are real differences in the measures between the two, i.e. run appropriate statistical tests comparing them, as in the sketch below.
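As a minimal illustration of that last step, here is a sketch in Python comparing one measure, faults found per experimental run, between the two techniques with a non-parametric test. The fault counts and names below are entirely hypothetical placeholders for the data you would actually collect.

```python
# Hypothetical example: faults detected per experimental run for each technique.
# Replace these lists with the measures you actually collect in your experiments.
from scipy.stats import mannwhitneyu

new_technique = [12, 15, 11, 14, 13, 16, 12, 15]   # faults found per run (made up)
baseline      = [10, 11,  9, 12, 10, 11, 13, 10]   # faults found per run (made up)

# Mann-Whitney U test: non-parametric, so it does not assume the
# fault counts are normally distributed.
stat, p_value = mannwhitneyu(new_technique, baseline, alternative="two-sided")

print(f"U statistic = {stat}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in faults found is statistically significant at alpha = 0.05.")
else:
    print("No statistically significant difference was detected.")
```

Which test is appropriate depends on the measure and its distribution; the point is simply that the comparison should rest on a statistical test rather than on eyeballing the raw numbers.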
Don't evaluate your testing technique. Evaluate product quality instead, in terms of the number of faults reported from the field after release. It's possible that your patterns of fault behaviour are best addressed by improving your requirements process: bad requirements that make it into both the code and the tests won't be helped by better testing.
If someone puts a gun to your head and insists that you evaluate the testing technique, the only metric that makes sense and cannot be gamed is the number of post-release faults. Just educate people that better testing is not always the answer.
On a lower level, see: http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
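As a rough sketch of the post-release metric advocated above, the Python snippet below computes field-reported faults per thousand lines of code for releases tested with each technique. The release names, counts, and sizes are invented purely for illustration.

```python
# Hypothetical field data: post-release fault reports per release,
# grouped by which testing technique was in use before that release shipped.
from collections import defaultdict

releases = [
    {"release": "4.1", "technique": "baseline", "field_faults": 38, "kloc": 210},
    {"release": "4.2", "technique": "baseline", "field_faults": 41, "kloc": 225},
    {"release": "5.0", "technique": "new",      "field_faults": 22, "kloc": 240},
    {"release": "5.1", "technique": "new",      "field_faults": 19, "kloc": 250},
]

# Compare mean post-release fault density (faults per KLOC) per technique.
densities = defaultdict(list)
for r in releases:
    densities[r["technique"]].append(r["field_faults"] / r["kloc"])

for technique, values in densities.items():
    print(f"{technique}: mean field faults per KLOC = {sum(values) / len(values):.3f}")
```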
If you have different testing methods that you want to evaluate, the best thing is to test a single piece of software that you have developed with each of those techniques. Based on the output in terms of the number of errors found, fault behavior, and other attributes, you can then compare the techniques, as in the sketch below.
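One concrete way to set this up (a sketch only, with made-up data) is to seed the same program with a known set of faults, run every technique against it, and report each technique's detection rate. All identifiers below are hypothetical.

```python
# Hypothetical experiment: the same program is seeded with known faults F1..F10,
# and each technique records which of those faults its test suite exposed.
seeded_faults = {f"F{i}" for i in range(1, 11)}

detected = {
    "proposed technique": {"F1", "F2", "F3", "F5", "F6", "F8", "F9"},
    "existing technique": {"F1", "F3", "F5", "F8"},
}

for technique, found in detected.items():
    rate = len(found & seeded_faults) / len(seeded_faults)
    missed = sorted(seeded_faults - found)
    print(f"{technique}: detection rate = {rate:.0%}, missed = {missed}")
```

Keeping the software, the seeded faults, and the environment fixed means any difference in detection rate can be attributed to the testing technique rather than to the subject program.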