Definitely yes. Given a big set of images of the same size, you train a deep autoencoder (a deep neural network of convolutional and pooling layers that gradually shrinks the input image down to a smaller latent space and then gradually enlarges it back to the original size) to reproduce its input at the output. The latent space then represents the code, the first half of the network is the encoder, and the second half is the decoder; a minimal sketch follows below.

Note, however, that this encoding behaves a bit differently from conventional compression. A conventional codec produces a variable-size code at a fixed coding accuracy; here you get a fixed-size code with variable coding accuracy, which degrades the more the coded data differs from the training samples.
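Here is a minimal sketch of the idea in PyTorch. The 28x28 single-channel input and the particular layer sizes are illustrative assumptions, not part of the original answer; the point is only the structure: encoder shrinks, latent tensor is the fixed-size code, decoder enlarges, and training minimizes reconstruction error between output and input.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder for fixed-size images (assumed 1x28x28)."""
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions gradually shrink the image
        # down to a small latent representation (the "code").
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        # Decoder: transposed convolutions gradually enlarge the
        # latent representation back to the original size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)     # fixed-size code
        return self.decoder(code)  # reconstruction of the input

# Train the network to reproduce its own input:
model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(8, 1, 28, 28)  # stand-in for a batch of real training images
for _ in range(100):
    optimizer.zero_grad()
    reconstruction = model(images)
    loss = loss_fn(reconstruction, images)  # output should match input
    loss.backward()
    optimizer.step()
```

After training, `model.encoder(x)` gives the code for an image and `model.decoder(code)` reconstructs it. The reconstruction error on a new image is exactly the "variable accuracy" mentioned above: it grows as the image moves further from the training distribution.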