Mallick also briefly discusses the papers which presented each of the three methods in the first post above.
I think, however, that these feature-based approaches are no longer state of the art. They are all very sensitive to variations in imaging conditions and perform satisfactorily only when image acquisition conditions are optimal.
The research in this area has long turned to Deep Learning/Convolutional Neural Networks - CNNs.
If you want to get acquainted with this field, look here for a simple-to-run example that works astonishingly well:
Back in August 2017, OpenCV 3.3 was officially released, bringing with it a highly improved "deep neural networks" (dnn) module. This module supports a number of deep learning frameworks, including Caffe, TensorFlow, and Torch/PyTorch. The primary contributor to the dnn module, Aleksandr Rybnikov, has put a huge amount of work into making this module possible. However, what most OpenCV users do not know is that Rybnikov included a more accurate, deep learning-based face detector in the official release of OpenCV (although it can be a bit hard to find if you don't know where to look). The tutorial from pyimagesearch will show you this detector.
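As a rough sketch of how that dnn face detector is used: you load the Caffe model, feed a 300x300 blob through the network, and parse the SSD-style output into boxes. The model file names below are the ones used in the pyimagesearch tutorial; you need to download them yourself, and the exact preprocessing values are the ones the tutorial uses.

```python
# Sketch of OpenCV's dnn-based SSD face detector. Assumes you have
# downloaded deploy.prototxt and the .caffemodel file (see the
# pyimagesearch tutorial); file names below follow that tutorial.
import numpy as np


def parse_detections(detections, width, height, conf_threshold=0.5):
    """Turn the raw 4-D SSD output into (x1, y1, x2, y2, confidence) boxes."""
    boxes = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence >= conf_threshold:
            # Box coordinates are normalised to [0, 1]; scale to pixels.
            box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
            x1, y1, x2, y2 = box.astype(int)
            boxes.append((int(x1), int(y1), int(x2), int(y2), confidence))
    return boxes


def detect_faces(image,
                 prototxt="deploy.prototxt",
                 model="res10_300x300_ssd_iter_140000.caffemodel"):
    import cv2  # imported here so parse_detections stays usable without OpenCV
    net = cv2.dnn.readNetFromCaffe(prototxt, model)
    h, w = image.shape[:2]
    # Mean-subtraction values (104, 177, 123) are the ones the model was trained with.
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    return parse_detections(net.forward(), w, h)
```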
A gentle introduction to Deep Learning/Convolutional Neural Networks you'll find here:
Maybe the problem you are trying to solve is simple enough that you do not need to go in the DNN direction at all. Please elaborate on the definition of 'performance' you are after, and describe the problem:
If you search for papers, make sure to skip most modern ones, as they rely heavily on neural networks and require a lot of images.
With a data set of this size you will not get good results with (non-pretrained) DNNs, or neural networks in general (though there are techniques to generate more images from what you already have). Even if you use a pretrained network, the chance is great that the set it was trained on consists of Caucasians.
This is, however, a perfect size for standard statistical approaches like PCA and LDA. Look into keywords like Fisherfaces, statistical methods for face recognition, etc.: http://www.cosy.sbg.ac.at/~uhl/face_recognition.pdf
Article A Survey of Face Recognition Techniques
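To make the PCA route concrete, here is a minimal Eigenfaces-style sketch with NumPy on synthetic data. In real use the rows of X would be aligned, grayscale face images flattened to vectors; the two synthetic "identities" below are just illustrative clusters.

```python
# Minimal Eigenfaces-style sketch (PCA + nearest class mean) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)


def fit_pca(X, n_components):
    """X: (n_samples, n_features). Returns the mean and top principal axes."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]


def project(X, mean, components):
    """Project (centred) data onto the principal axes ("eigenface" coordinates)."""
    return (X - mean) @ components.T


# Two synthetic "identities": noisy clusters around different mean vectors.
face_a = rng.normal(0.0, 0.1, size=(40, 64)) + np.linspace(0, 1, 64)
face_b = rng.normal(0.0, 0.1, size=(40, 64)) + np.linspace(1, 0, 64)
X = np.vstack([face_a, face_b])

mean, comps = fit_pca(X, n_components=5)
coords = project(X, mean, comps)

# Nearest-class-mean classification in the reduced eigenspace.
mean_a = coords[:40].mean(axis=0)
mean_b = coords[40:].mean(axis=0)
probe = project(rng.normal(0.0, 0.1, size=(1, 64)) + np.linspace(0, 1, 64),
                mean, comps)
label = "A" if np.linalg.norm(probe - mean_a) < np.linalg.norm(probe - mean_b) else "B"
```

For LDA/Fisherfaces you would additionally use the class labels to find directions that maximise between-class versus within-class scatter, rather than overall variance.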
I had a similar algorithm based on PCA using the Mahalanobis distance in an older version of OpenCV. You could even set up a voting scheme: use 3 algorithms, and the majority result wins.
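The two ingredients mentioned above can be sketched in a few lines of NumPy; the function names here are illustrative, not from any particular OpenCV version.

```python
# Sketch: Mahalanobis distance for comparing a sample against a class model,
# plus a simple majority vote over several classifiers' outputs.
import numpy as np
from collections import Counter


def mahalanobis(x, mean, cov_inv):
    """Distance of x from a distribution with the given mean and inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))


def majority_vote(labels):
    """Return the label predicted by the most classifiers."""
    return Counter(labels).most_common(1)[0][0]
```

In a PCA pipeline you would compute the mean and covariance of each person's training projections, then assign a probe image to the class with the smallest Mahalanobis distance.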
The problem to tackle is that most methods are based on separating images (and are given samples from all classes). If I understood correctly, you just have a set of 40 images of class A. In that case you are basically learning the distinctive features of face A, instead of learning the features that separate A from all the rest.
Without realistic class B samples I see no simple way to even test the performance of your application.
You could introduce perturbations into the images of A, like spreading the eyes further apart, lengthening the nose, etc., all to produce different faces that are somehow similar to A (these are people who almost look like A, and you definitely want to separate them from A).
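A crude version of this augmentation idea can be sketched with simple geometric perturbations (shifts, flips, noise) in NumPy. Note this is weaker than what is described above: properly "spreading the eyes apart" would require landmark-based warping with a facial landmark detector, which is assumed away here.

```python
# Sketch: generating perturbed variants of class-A images with simple
# shifts, flips and noise. Landmark-based warps (e.g. moving the eyes
# apart) would need a landmark detector and are not shown.
import numpy as np

rng = np.random.default_rng(1)


def perturb(img, max_shift=2, noise_std=0.02):
    """Return a randomly shifted, possibly flipped, noisy copy of img."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.roll(img, (int(dy), int(dx)), axis=(0, 1))
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    return out + rng.normal(0.0, noise_std, size=out.shape)


def augment(images, n_variants=5):
    """Produce n_variants perturbed copies of each input image."""
    return [perturb(img) for img in images for _ in range(n_variants)]
```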
If you still want to look at neural networks, you could look into matching networks, Siamese networks, or other deep metric learners. They learn the features that are important to *match* input data: you feed the network 2 data instances (pictures of faces) and it tells you "match" or "no match". This means you can do face recognition with only a few samples.