Early instruments had big power supplies (and often a scanning electromagnet), so choosing scanning parameters was very important. Dwell time is the time the instrument spends acquiring a mass ion, switching time is the time to move between masses, and settle time is the period the instrument has to wait to get a stable reading. With an old MS, a 10 ms dwell time would only allow a small amount of an eluting mass ion to be collected; typically the switching time plus settle time would have been ~100 ms, so you would only measure about 30 scans for each ion across a narrow GC peak. Modern instruments can run well with ~20 ions in a typical SIM segment, and you might get better results with a full mass scan.
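A quick back-of-envelope sketch of those numbers (the function name and parameters are my own, just for illustration): scans per ion follow from the peak width divided by the full cycle over all monitored ions.

```python
def scans_per_ion(peak_width_s, n_ions, dwell_ms, overhead_ms):
    """Rough estimate of scans recorded for each ion while a GC peak elutes.

    overhead_ms: switching + settle time per mass, per cycle.
    """
    cycle_ms = n_ions * (dwell_ms + overhead_ms)  # one full pass over all ions
    return (peak_width_s * 1000) / cycle_ms

# One ion, 10 ms dwell, ~100 ms switching + settle, ~3.3 s wide peak:
print(round(scans_per_ion(3.3, 1, 10, 100)))  # → 30
```

With the ~100 ms overhead of an old instrument, the dwell time barely matters; the overhead dominates the cycle.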
A higher dwell time will enable you to capture more ion counts. But if you are developing an analytical method you shouldn't need to; if you do, you need to change the method (extraction, concentration, chromatography, etc.). The noise is also higher with a higher dwell time, so as Bruce says, for a method the LOQ is what you really need to worry about, and increasing the dwell time is very unlikely to help you there. If you are doing an infusion and looking for great-granddaughter ions to analyse a structure, it may be useful.
Yes, it matters: the larger the dwell time, the broader the band/peak and the lower the resolution, so a system with a smaller dwell time/volume should be preferred. The hardware can also be modified to reduce dwell time.
Which system are you using? What is the goal of comparing the dwell times? As Tim so beautifully pointed out, the total dwell time needs to be significantly smaller than the peak width. For Agilent systems changing the dwell time does NOT change the total ion counts, so there is very little advantage gained by going to longer dwell times. Other systems don't work that way; this is why we need you to provide us with more information.
My standard practice for quantitative multiresidue GC/MS/MS method development is to start with the fastest/shortest dwell time possible on the tandem quadrupole model being used, then run a mid-level standard for the analysis and evaluate the number of sampling points across each chromatographic peak. There is usually a desired minimum number of points ('scans') across a chromatographic peak that is considered necessary for accurate and precise quantitation; this ranges from 7 to 20 points across the peak base (sometimes measured at 5% of the peak height to reduce the effects of less-than-perfect peak shape), with 10 to 15 being more common. If the initial fastest acquisition rate gives 40 points across the peaks for every analyte, then you can change the dwell time from 10 ms to 20 ms, or whatever longer dwell achieves the points-per-peak goal, which will depend on the number of analytes and whether they coelute or sit in separate acquisition windows based on RT. The reason I start with the fastest time is that on some old systems slower times give better sensitivity, and it always just makes me feel better to be seeing improvements with each method development change I make. This may be less true on newer instruments, but the approach is still applicable, since adjusting the MS dwell time based on the chromatography is a common method development need for quantitative GC/MS/MS methods.
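The dwell-time adjustment described above can be sketched as simple arithmetic (this is a hypothetical helper of my own, with an assumed fixed inter-scan overhead, not any vendor's formula): given a peak width and a points-per-peak target, the cycle-time budget per pass fixes the largest dwell you can afford.

```python
def max_dwell_ms(peak_width_s, n_transitions, target_points, overhead_ms=3):
    """Largest dwell (ms) per transition that still yields target_points
    cycles across a peak of the given width.

    overhead_ms: assumed inter-scan (switch + settle) delay per transition.
    """
    cycle_budget_ms = (peak_width_s * 1000) / target_points
    dwell = cycle_budget_ms / n_transitions - overhead_ms
    return max(dwell, 0.0)

# 4 s wide peak, 10 coeluting transitions in the window, 12 points target:
print(round(max_dwell_ms(4.0, 10, 12), 1))  # → 30.3
```

This also shows why splitting the run into narrower RT-based acquisition windows helps: fewer transitions per window means a larger dwell budget for each.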
If all you need to do is compare 100 ms to 10 ms to evaluate performance (sensitivity, precision, accuracy), then keep in mind that when processing the data for signal-to-noise evaluation or peak integration, the smoothing parameters ought to be optimized for each acquisition rate. Some may advocate that no smoothing be used for that evaluation, but effectively the 100 ms dwell is a real-time smoothing of the chromatogram; think of it as bunching/buffering 10 x 10 ms acquisitions. So the 100 ms data may not need post-acquisition smoothing, but the 10 ms data will need to be smoothed for an equitable comparison to the 100 ms data, both for signal-to-noise evaluation and for detection limit evaluation if you'll be running a dilution series down to the LOD for each acquisition rate.
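The bunching/buffering idea can be shown in a couple of lines (a minimal sketch with made-up intensity readings, not real instrument data): averaging ten consecutive 10 ms readings approximates what a single 100 ms dwell would report.

```python
def bunch(samples, factor=10):
    """Average non-overlapping blocks of `factor` points (boxcar binning)."""
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

fast = [9, 11, 10, 12, 8, 10, 11, 9, 10, 10]  # ten 10 ms intensity readings
print(bunch(fast))  # → [10.0]
```

The point-to-point scatter of the fast data collapses into the single averaged value, which is why unsmoothed 10 ms data looks noisier than 100 ms data even when the underlying signal is identical.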
I agree with Ravi Orugunty. As dwell time increases, the number of data points across a chromatographic peak decreases, and with too few data points per peak, poorly defined chromatography may be observed.