Honestly, if your signal-to-noise ratio (QRS complexes over whatever noise) is good enough to recognize what you want, no need to denoise! (Btw, what DO you want to see?)
However: does this assumption hold in all cases, or only for the particularly nice example signals you are dealing with? Perhaps in the real world one of the leads is a little loose, or an MR system downstairs is faulty, or whatever... Is your wavelet algorithm still able to handle that?
Why not just take your nice, well-behaved signals, spoil them by adding different levels of noise, and find out how well your algorithm actually holds up?
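To make that concrete, here is a minimal sketch of such a stress test in Python/NumPy: corrupt a clean segment with white Gaussian noise at several SNR levels and re-run your detector each time. `ecg_clean` and `detect_qrs` are placeholders for your own data and algorithm, not part of any particular library.

```python
import numpy as np

def add_noise(signal, snr_db, rng=np.random.default_rng(0)):
    """Corrupt `signal` with white Gaussian noise at the requested SNR (dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

for snr_db in (30, 20, 10, 5, 0):
    noisy = add_noise(ecg_clean, snr_db)   # ecg_clean: your clean test segment
    peaks = detect_qrs(noisy)              # detect_qrs: your existing detector
    print(f"{snr_db} dB -> {len(peaks)} QRS complexes detected")
```

If the detection count (or, better, sensitivity against annotated beats) stays stable down to low SNR, your algorithm is probably robust enough without an explicit denoising stage.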
Btw: wavelet decomposition has been used many times for denoising... ;-)
If one can achieve adequate results, there is no need for denoising... However, some noise is always present, so in practice denoising is needed. Wavelet decomposition can also be used for denoising.
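For completeness, here is a rough illustration of wavelet denoising, assuming the PyWavelets package (`pywt`) and a 1-D array `ecg_noisy`; the wavelet, decomposition level and soft-threshold rule are common choices, not a prescription:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    # Decompose, soft-threshold the detail coefficients, reconstruct.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

ecg_denoised = wavelet_denoise(ecg_noisy)
```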
If you can already recognize the ECG signals, there is no need to add a denoising technique; but to be more accurate and to generalize your approach, it is better to add denoising.
Generally, denoising is used as the first preprocessing step on the input signal; real ECG signals are often marred by noise and depend strongly on the source body. It seems to me that a denoising procedure is mandatory in order to obtain a robust recognition model.
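As one possible illustration of that ordering, a zero-phase Butterworth band-pass (SciPy) placed before feature extraction; the 0.5-40 Hz band, the sampling rate, and the `raw_ecg` / `extract_features` names are illustrative assumptions only:

```python
from scipy.signal import butter, filtfilt

def bandpass(ecg, fs, low=0.5, high=40.0, order=4):
    # Normalized cut-offs; filtfilt gives zero phase, so QRS timing is preserved.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, ecg)

ecg_filtered = bandpass(raw_ecg, fs=360)     # 360 Hz, e.g. MIT-BIH recordings
features = extract_features(ecg_filtered)    # then feature extraction / recognition
```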
As Dr. Ulrich and Dr. Fernando rightly mention, is 99.8% really a correct number? That result may well be spurious, since it was extracted from a noisy signal. Perhaps it was really the noise that modified the nature of the features and yielded the higher accuracy. If you do not denoise the signal, I would say that any results you have obtained are not trustworthy! You may get even higher accuracy, or lower accuracy, after extracting features from a denoised signal; either way, that result is far more trustworthy than one obtained from features extracted from a noisy signal. Hope this clarifies your doubt. Please let me know if you need any further clarification.