Hello,
I am looking for a way to compare the feature vectors of two images that each contain a person. The application should output a high similarity score when the two photos show the same person and a lower score when they show different people.

What I have tried so far: I extracted two feature vectors using the pretrained VGG19 network, giving it a picture of myself and the same picture mirrored as inputs. Cosine similarity between these two feature vectors gives me a score of only 0.3. I also tried changing the layer from which I extract the activation map (after each max-pool layer), but the best I can get is 0.5 for the same person mirrored. It even gives a higher score for two different people than for my picture compared to its mirrored copy. Here you can see an example of the output after the block3_pool/MaxPool layer for both pictures: https://imgur.com/a/NZefxhL.

What other metric could I use? I have also tried Euclidean distance, but it works even worse than cosine: it gives a larger distance between the two photos of myself than between photos of different people.
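In case it helps, this is roughly what my extraction and comparison code looks like (a minimal sketch, assuming Keras/TensorFlow with the ImageNet-pretrained VGG19; the image file names are just placeholders):

```python
# Minimal sketch: extract an activation map from a VGG19 pooling layer
# and compare two images with cosine similarity.
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

# Model that outputs the activations of one of the max-pool layers.
base = VGG19(weights="imagenet", include_top=False)
extractor = Model(inputs=base.input,
                  outputs=base.get_layer("block3_pool").output)

def get_feature_vector(img_path):
    # Load and preprocess the image the way VGG19 expects (224x224, mean-subtracted).
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))
    # Flatten the activation map into a single 1-D feature vector.
    return extractor.predict(x).flatten()

def cosine_similarity(a, b):
    # Cosine similarity between two 1-D feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = get_feature_vector("me.jpg")            # placeholder file names
v2 = get_feature_vector("me_mirrored.jpg")
print(cosine_similarity(v1, v2))             # this is where I only get ~0.3-0.5
```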
Thank you!