
Hi All,

I am using a variational autoencoder as the machine learning model for UI layout generation. The UI layouts are given to the model as input in the form of images (matrices).

The problem is that after the model has learned the data and is supposed to generate new examples, the generated images are somewhat continuous.

So my professor suggested applying quantization as a post-processing transformation to obtain discrete (non-continuous) images.

It also helps to remove the blurring, i.e. the closely graduated pixel values, in the image.

Can anyone help me with a useful link or paper on how this quantization works on images?

I will implement it later using Python.
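
To make the question concrete, here is a rough sketch of the kind of quantization I have in mind (assuming the decoder output is a NumPy array with pixel values in [0, 1]); please correct me if this is not what is usually meant:

```python
import numpy as np

def quantize_image(img: np.ndarray, num_levels: int = 2) -> np.ndarray:
    """Map continuous pixel values in [0, 1] to num_levels discrete values.

    With num_levels=2 this is simple thresholding, which should turn a
    blurry, closely graduated VAE output into a crisp binary layout mask.
    """
    # Scale to [0, num_levels - 1], round to the nearest level, rescale back.
    levels = np.round(img * (num_levels - 1))
    return levels / (num_levels - 1)

# Hypothetical example: a blurry 4x4 patch from the decoder output
decoder_output = np.array([
    [0.05, 0.12, 0.48, 0.91],
    [0.10, 0.55, 0.60, 0.95],
    [0.02, 0.49, 0.88, 0.97],
    [0.00, 0.30, 0.70, 1.00],
])
print(quantize_image(decoder_output, num_levels=2))
```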
