I believe exome sequencing is my major focus point... how to start from the raw data, find the clues for a correct analysis, identify errors, and arrive at a final interpretation of the data.
Reema made a very important point: quality control is the foundation.
Regardless of the platform and of your higher-level analysis, quality control of the reads you want to align is essential.
Remember that there are three common steps: quality checking of the reads, trimming, and masking of low-quality nucleotides.
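Just to make the trimming and masking steps concrete, here is a rough Python sketch of what 3' quality trimming and base masking look like on a Phred+33 FASTQ file. The file name and the cutoff of 20 are made-up examples; in practice you would use dedicated tools (FastQC for the quality check, Trimmomatic or cutadapt for the trimming) rather than rolling your own:

```python
# Toy sketch of quality trimming/masking, assuming Phred+33 (Illumina 1.8+)
# FASTQ encoding. "reads.fastq" and cutoff=20 are hypothetical examples.

def trim_read(seq, qual, cutoff=20):
    """Trim the 3' end of a read while quality stays below `cutoff`."""
    scores = [ord(c) - 33 for c in qual]  # Phred+33 decoding
    end = len(seq)
    while end > 0 and scores[end - 1] < cutoff:
        end -= 1
    return seq[:end], qual[:end]

def mask_read(seq, qual, cutoff=20):
    """Mask (replace with N) any base whose quality is below `cutoff`."""
    return "".join(
        base if ord(q) - 33 >= cutoff else "N"
        for base, q in zip(seq, qual)
    )

with open("reads.fastq") as fh:
    while True:
        header = fh.readline().rstrip()
        if not header:                     # end of file
            break
        seq = fh.readline().rstrip()
        plus = fh.readline().rstrip()
        qual = fh.readline().rstrip()
        trimmed_seq, trimmed_qual = trim_read(seq, qual)
        print(header, trimmed_seq, plus, trimmed_qual, sep="\n")
```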
It is also important to understand the limits and the possibilities of the technology. I'll suggest a couple of papers that address some of these questions regarding exome sequencing.
Which sequencing platform does your data come from: 454, SOLiD, Ion Torrent, Illumina, or another? I ask because it is very important to know where the raw data came from. Depending on whether the platform generates long reads (250 bp ~ 450 bp) or short reads (25 bp ~ 150 bp), you will use a different algorithm to process the data.
For example, in genome assembly we need to decide whether to use a de Bruijn graph algorithm (short reads) or an OLC (overlap-layout-consensus) algorithm (long reads); see the sketch below.
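To give a feel for the de Bruijn approach, here is a toy sketch (nothing like a real assembler such as Velvet or SOAPdenovo, which also do error correction and graph simplification): each read is split into k-mers, and an edge links each k-mer's (k-1)-mer prefix to its (k-1)-mer suffix. The reads and k=4 are invented for the demo:

```python
# Toy de Bruijn graph construction from short reads.
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Build a de Bruijn graph as adjacency lists of (k-1)-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])  # prefix -> suffix edge
    return graph

reads = ["ATGGCGT", "GGCGTGC", "CGTGCAA"]  # made-up short reads
for node, neighbors in sorted(de_bruijn_edges(reads, k=4).items()):
    print(node, "->", ", ".join(neighbors))
```

An assembler then looks for paths (ideally an Eulerian path) through this graph to reconstruct contigs, which is why the approach suits huge numbers of short reads better than OLC's all-versus-all overlap computation.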
Once you have your variants, I'm a big fan of Ensembl's Variant Effect Predictor (VEP) for determining what deleterious effects the variants might have.
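If you want to script it rather than use the web interface, VEP is also reachable through Ensembl's public REST API. Below is a hedged sketch, assuming the /vep/:species/hgvs endpoint as documented at rest.ensembl.org; the HGVS string is just an illustrative placeholder, not real study data:

```python
# Sketch of querying VEP via the Ensembl REST API. Endpoint path and
# response fields follow the public REST docs; check rest.ensembl.org
# for the current interface. The variant below is only an example.

import requests

SERVER = "https://rest.ensembl.org"
hgvs = "ENST00000366667:c.803T>C"  # illustrative HGVS notation

response = requests.get(
    f"{SERVER}/vep/human/hgvs/{hgvs}",
    headers={"Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()

for result in response.json():
    for tc in result.get("transcript_consequences", []):
        print(tc.get("transcript_id"),
              tc.get("consequence_terms"),
              tc.get("sift_prediction"),      # deleteriousness predictions,
              tc.get("polyphen_prediction"))  # when available
```

For a whole VCF of variants you would normally run the standalone VEP script instead, since hammering the REST server variant-by-variant is slow and rate-limited.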