There are a good number of contigs in the hg19 release, and the coordinates of these contigs are a bit confusing. If somebody wants to keep these contigs out of the reference before doing alignment, what problems is he/she likely to face?
Actually, I am doing that exercise; I was a little curious whether somebody might already have performed the task. Even in the case of exome sequencing, I have seen that the BED files do not contain any of these confusing contigs at all (they can't). My strong feeling is that even if someone aligns reads after removing all such contigs, no effect will be reflected in the final result. I am not sure, though.
Aligning against a reduced genome causes some reads from repeat sequences in the unknown/unplaced contigs to map best to repeats in the assembled genome. This isn't such an issue with longer reads. With reduced-representation approaches such as RRBS, the stacking of reads at pericentromeric or subtelomeric sites can be problematic for differential tests requiring normalisation.
One approach is to let the reads align to the entire genome, then disregard the reads aligned to unknown/unplaced contigs for all the downstream work (see the sketch below). In this fashion you've avoided alignment artefacts as much as possible, but haven't made your downstream analysis more confusing as a result. However, as Rohan has stated, the burden is that alignment times balloon.
One caveat: including all the "hap" chromosome 6 MHC regions will make some reads multi-mappers when, in fact, they may not be. Alignment for immunology applications requires special consideration.
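For illustration, here is a minimal pysam sketch of the "filter afterwards" idea: align to the full reference, then keep only reads placed on the primary chromosomes. The file names and the chromosome set are assumptions; adjust them to your reference's naming convention ("chr1" vs "1").

```python
import pysam

# Primary hg19 chromosomes to keep (assumed UCSC-style "chr" names; adjust if
# your reference uses "1", "2", ... instead).
keep = {"chr%s" % c for c in list(range(1, 23)) + ["X", "Y", "M"]}

with pysam.AlignmentFile("aligned_to_full_hg19.bam", "rb") as bam_in, \
     pysam.AlignmentFile("primary_only.bam", "wb", template=bam_in) as bam_out:
    for read in bam_in:
        # Drop unmapped reads and reads placed on unplaced/unlocalized/hap contigs.
        if read.is_unmapped or read.reference_name not in keep:
            continue
        bam_out.write(read)
```

The same effect can usually be achieved with a samtools one-liner; the point is simply that the filtering happens after alignment against the full reference, not before.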
In general I recommend aligning your reads to the whole hg19 reference instead of focusing only on your region of interest.
The first thing you need to consider is how specific your kit is at pulling down your region of interest. I am not aware of any kit that is 100% specific, which means it will pull down regions/DNA that you never wanted, or regions that are very similar to the target, e.g. very closely related genes.
Now, if you have a reduced reference:
1. As said before, aligners are not 100% perfect, which means they might align these unwanted pull-downs to your target space, adding noise to the signal.
2. These reads will also be tagged as uniquely aligned reads with good alignment quality, etc. Keeping the whole reference allows you to keep track of ambiguous/multi-mapping reads; aligners can be directed to keep such reads or discard them from the final alignment files. I also think the mapping quality of such reads will be lower with the whole reference, giving you the option to discard them or to tell your variant caller to disregard reads with low mapping and base quality (see the sketch after this list).
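To illustrate point 2, a hedged sketch of discarding low-mapping-quality records before variant calling. The threshold and file names are assumptions, and MAPQ scales differ between aligners, so pick a cutoff appropriate to yours.

```python
import pysam

MIN_MAPQ = 20  # assumed threshold; MAPQ scales differ between aligners

with pysam.AlignmentFile("aligned_to_full_hg19.bam", "rb") as bam_in, \
     pysam.AlignmentFile("confident_mappings.bam", "wb", template=bam_in) as bam_out:
    for read in bam_in:
        # Skip unmapped, secondary and supplementary records first,
        # then drop ambiguous (low-MAPQ) mappings.
        if read.is_unmapped or read.is_secondary or read.is_supplementary:
            continue
        if read.mapping_quality < MIN_MAPQ:
            continue
        bam_out.write(read)
```

Most variant callers also expose their own mapping/base-quality cutoffs, so the same effect can often be achieved at the calling step instead.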
As suggested by Jason, align reads to the whole reference and then, at the time of downstream analysis, concentrate on your target space. This will definitely help in removing false-positive calls.
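As a rough sketch of what "concentrate on your target space" can look like in practice, the snippet below counts reads overlapping each interval of a capture BED file. The file names are placeholders, and the BAM must be coordinate-sorted and indexed for pysam's random access to work.

```python
import pysam

# Count reads overlapping each target interval (hypothetical file names).
with pysam.AlignmentFile("aligned_to_full_hg19.bam", "rb") as bam, \
     open("capture_targets.bed") as bed:
    for line in bed:
        if not line.strip() or line.startswith(("track", "browser", "#")):
            continue
        chrom, start, end = line.split()[:3]
        print(chrom, start, end, bam.count(chrom, int(start), int(end)), sep="\t")
```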