Roughly, the main steps are the following:
- Get the face crop (e.g. using a face detector) and align the face images (with a similarity transform or whatever technique you prefer) so that the LBPs you extract are meaningful.
- Bring all the images to the same size, or the resulting feature vectors will have different lengths and will not be comparable.
- Decide the size of the cells over which the LBP histograms will be computed.
- Run an LBP extraction algorithm on the aligned face images. Concatenate all the LBPs extracted from the image cells to form a single face feature vector.
- Now that you have a feature vector for each face image, you can decide what to do with it. Usually at least a dimensionality reduction step is performed (typical descriptors range from 10K to 100K dimensions). Then you can apply the learning or matching algorithm you prefer (a short sketch of the whole pipeline follows below).
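To make the steps concrete, here is a minimal sketch of the per-image feature extraction using scikit-image's uniform-LBP implementation. It assumes detection and alignment have already produced a grayscale face crop; the function name `lbp_feature_vector`, the 128x128 target size, the 16x16 cells and the (8, 1) LBP parameters are illustrative choices, not values prescribed above.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import resize

def lbp_feature_vector(aligned_face, size=(128, 128), cell=(16, 16),
                       n_points=8, radius=1):
    # 1. Bring every image to the same size so the descriptors are comparable.
    face = resize(aligned_face, size, anti_aliasing=True)

    # 2. Compute the LBP code image for the whole face.
    #    'uniform' patterns give n_points + 2 distinct codes per pixel.
    lbp = local_binary_pattern(face, n_points, radius, method="uniform")
    n_bins = n_points + 2

    # 3. Split the code image into cells, histogram each cell, concatenate.
    histograms = []
    for y in range(0, size[0], cell[0]):
        for x in range(0, size[1], cell[1]):
            block = lbp[y:y + cell[0], x:x + cell[1]]
            hist, _ = np.histogram(block, bins=n_bins,
                                   range=(0, n_bins), density=True)
            histograms.append(hist)

    # One long descriptor per face; feed this to PCA/LDA or a matcher.
    return np.concatenate(histograms)
```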
I think there is a common abuse of the term LBP in face recognition. What you should actually compute are LBP histograms or histogram sequences. A good introduction to this is given by Ahonen et al. A free version can be found here:
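For the matching step, histogram sequences of two faces are often compared with a chi-square distance (one of the measures used in that line of work); the helper below is only an illustrative sketch built on the descriptor function above.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    # Sum over bins of (difference^2 / sum), with eps guarding empty bins.
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Smaller distance -> more similar faces; a nearest-neighbour search over
# the gallery descriptors is the simplest recognition scheme.
```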